Indiana University Announcer
Indiana University is proving how higher education can create solutions with industry. We're working side by side with industry partners to fuel economic growth that powers a future-ready workforce. Explore IU's impact at iu.edu impact.
Bloomberg Announcer
Bloomberg Audio Studios: podcasts, radio, news.
Interviewer / Podcast Host
A master's in computer science from Stanford, 1988, in artificial intelligence. I was a new partner at a law firm called Wilmer, Cutler and Pickering in Washington doing international trade. What did you see in artificial intelligence at the time?
AI Expert / Industry Leader
It's an honor to be with you all. I wish I had been a behavioral economist; that was my only real big regret in life. And so, what did I see in AI? Which is sort of related: what's the nature of intelligence? I think I saw a dream of understanding ourselves and our society a little better, and that was the primary motivator. I will say that the AI that I studied was the equivalent of the "sun goes around the Earth" field. We had a fundamental theory in the 1980s that was completely false and didn't work, but we didn't know it at the time. So I and many other people then fled to all kinds of other fields, and it's exciting now to see a new set of techniques really be transformative. And to Mike's reference to the news this week: you know that I'm enamored with subscription models, and now I've become more aware of tender offers, and I like TV channels. So we're announcing today, Mike, a tender offer for Bloomberg. We hope you will consider it appropriately. Your board members are willing to debate the transaction, but unfortunately, as you would suspect of a board member, that's all I can say on the news of the week.
Interviewer / Podcast Host
So you're not going to answer my boss's question, who's going to win?
AI Expert / Industry Leader
I think it is all answered.
Interviewer / Podcast Host
So looking back on AI as you saw it then, and comparing it to now, I hear a range of things being said, as extreme as: this is as big an invention as the Industrial Revolution; some say, as big as the discovery of fire. Are we overestimating or underestimating the effect generative AI will have on us?
AI Expert / Industry Leader
Well, let's think back 30 or 40,000 years ago, when we're a bunch of Neanderthals at the entrance to our cave, and, you know, we've been the dominant hominid species for a couple hundred thousand years. And the two of us look down and we see Homo sapiens. We're like, look at those guys, so skinny and hairless, they're pitiful. But those guys were really intelligent. And ultimately those Homo sapiens dominated the Earth and killed off us Neanderthals. So I would say intelligence per species has been highly selected for, and Homo sapiens' intelligence has allowed us to become the dominant species on Earth, because we use our intelligence to make tools and do other things. So I think it's a lot different than, say, mechanizing muscle power. Think of bulldozers: how did that transform society? It did a lot, but it did it over 100 years. And although muscles are important, they're not the core human attribute; hence the dislocation of the Neanderthals, who were much stronger than humans. So if AI develops to actually be superintelligence, then it will be a lot more profound, I think, than anything else, and we will have real species threats, because the AI will keep getting smarter and smarter without limit, while natural selection works quite slowly in terms of making humans more intelligent. So then we have to ask: how fast, and how, you know, will the computers really think? Obviously, when we all use AI as consumers, we find it's pretty miraculous in some cases, but in other cases it's not very effective. But we're understanding more and more of the techniques. One theory is it'll be like Moore's Law, and AI will just get better and better and better. The other theory is it's like the war on cancer, where cancer developed over a very long time, has lots of different etiologies, and we keep coming up with a solution for one cancer but not another.
And our overall progress in society against cancer has been pretty steady but slow, and definitely not exponential. So it may be that as AI gets better, it hits various walls and we've got some time to deal with it, or it may be that it stays on this exponential. We're just going to have to watch. But I think we need to be prepared for it to be on the exponential, in which case we're going to have a lot of societal stress over the next 20 years.
Interviewer / Podcast Host
So your analogy is very helpful, but a little disturbing, because I don't see many Neanderthals around anymore, right? We have a lot of Homo sapiens. In your analogy, what can we do to make sure that we survive, that this thing we're creating doesn't become so smart that perhaps it won't have our best interests at heart?
AI Expert / Industry Leader
Well, it's a big challenge. It's hard to slow down. Compare chemical weapons: the West and Russia are kind of in a war, and yet no one's using chemical weapons and no one's using nuclear weapons. So even in this incredible state of war, we're able to put some limits on what happens. The problem with AI is that it's very continuous; your thermostat is an AI thermostat. So there's no good way for even the major powers to agree not to use AI, because it's so integrated through everything else that we do. There's no real practical scenario to take a break as human society, even if we could get the politics to work on that. So instead we're in a situation where we in America had better darn well win the race, and everyone acknowledges that. So all the companies are going full out, both for their own competitive reasons and, hopefully, in a race to the top of what AI can do for us. And there will be amazing positive scenarios: all kinds of medical cures, all kinds of productivity. Six years ago, AI was already very good at image analysis; that was one of the workloads it first got good at. In particular, in radiology it got significantly better than the typical radiologist pretty quickly, and has been better than the typical radiologist for at least five years. And so I would say, at weddings and with friends who are radiologists, it's good that your kids are not going into the business, because radiology is going to be the first profession that's wrecked by AI: good for humanity, but not good for radiologists. Well, what happened is, as radiology got better because of AI analysis that helped radiologists, the price of scans went down, and the number of scans turned out to be hugely elastic. Now you walk in, you've got a cough, boom, you get a scan. What's happened now is there are about 34,000 radiologists in the US, and we have a labor shortage of radiologists of about 5,000.
So it's an incredible story of elasticity of demand improving lives, that is, more scans. And so we may well see with AI that it makes Wall Street analysts more productive, it makes software engineers more productive, and that in fact the elasticity grows and we grow the economy. And when people ask, you know, we're spending collectively as an industry half a trillion dollars on AI data centers, how is that ever going to get paid back? Well, add one or two points to GDP growth and it gets paid back fast. Now, I don't want to guarantee that it's always going to be like radiology. I do think software engineering is particularly fascinating, because all the major AI companies are working on it, so it is the white-collar canary in the coal mine. If software engineering jobs go down a lot over the next five years, then that's probably going to happen to law and architecture and many other things. If in fact, because of the increased productivity, people are building a lot more software, sort of the radiology example, then I think in many professions we will see a big expansion, and we can be more confident of the high productivity. None of that really answers the long-term question you asked, which is: how are we not the Neanderthals? So I think there's a reasonable chance of high productivity rather than mass unemployment. But then ultimately, what if they're smarter and smarter and smarter than us? We're going to have to find ways, and I don't know what they are, to continue to insist on alignment; that's where you train the AIs to care about human beings so they are aligned with our values. But if somebody programs their AI to try to take over the world, we're going to have to enlist the other AIs in our defense to protect us. So, you know, there's a number of scenarios out there, and probably for 10 or 20 years we're not going to know how serious the threat is, but we will have tools.
Also, the AI is not a biological species. We have been selected for dominance, to try to grow our species; AI is not naturally trying to expand. It could be programmed for that, but it could also be programmed to keep humans on top. So it's not as scary as a super-powerful human, which we all kind of intuit: a super-powerful human would be hard to hold back from taking over the world. It's not as dire as that.
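The payback claim above ("add one or two points to GDP growth and it gets paid back fast") can be sanity-checked with rough arithmetic. This is a back-of-the-envelope sketch, not from the transcript: the US GDP figure of roughly $28 trillion is an assumption, and the $0.5 trillion spend is the industry-wide figure cited in the conversation.

```python
# Rough payback estimate for AI data-center spending via extra GDP growth.
# Assumed figures: US GDP ~ $28 trillion (assumption, not from the transcript);
# industry-wide AI data-center spend ~ $0.5 trillion (cited in the interview).
US_GDP_TRILLIONS = 28.0
AI_CAPEX_TRILLIONS = 0.5

for extra_growth_points in (1.0, 2.0):
    # Extra annual output from the additional growth, in trillions of dollars.
    incremental_output = US_GDP_TRILLIONS * extra_growth_points / 100
    years_to_payback = AI_CAPEX_TRILLIONS / incremental_output
    print(f"+{extra_growth_points:.0f} pt growth: "
          f"${incremental_output:.2f}T/yr extra output, "
          f"payback in ~{years_to_payback:.1f} years")
```

Under these assumptions, one extra point of growth yields about $0.28 trillion per year of additional output, repaying the spend in under two years, which is the sense in which "it gets paid back fast."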
Interviewer / Podcast Host
Insofar as we do make progress on trying to reshape generative AI in a more positive direction, does that come from outside the industry or inside the industry? Does it come from government regulation and agreement, or does it have to come from inside? Because as I talk to many people in the business, they're much more focused on the race you talk about than on the safety part; they don't want to get slowed down. I mean, you're on the board of Anthropic, which I know is trying to make a move in the other direction. But can we do it from the outside, or does the industry itself somehow have to internalize the risk?
AI Expert / Industry Leader
Well, I think lots of the industry is working on it. There are different sides of safety. There's the case where you're treating AI like a counselor and it helps you tie a noose; that's not a good thing, and those cases across the industry are getting more and more watched for and eliminated. There are inevitable safety bumps as any technology grows. Then there's more macro safety: none of the major AIs can be used to design chemical weapons or biological weapons. But those defenses in the AIs aren't perfect, and we have to constantly invest in them to prevent people from using this super-powerful technology, plus some CRISPR, to do some really bad things. So there's active work across the whole industry on those scenarios, because from a commercial standpoint, nobody wants their AI brand to be the brand that develops some bad virus. So there is an incentive there to protect against the negative cases. But the big long-term case is that the AIs get so smart that we then have to do things to make sure they're aligned with human values, that they are programmed so that success is humans flourishing.
Interviewer / Podcast Host
Give us your view about where we are likely to see the biggest effect. You mentioned biomedicine; you mentioned computer software. Let's turn to the news today, with the deal between Disney and OpenAI, and the entertainment space that you know well. I mean, you managed the transition, from my perspective at least, from more traditional TV, cable, and broadcast through to streaming. What does the transition look like from streaming through to AI?
AI Expert / Industry Leader
So just as an example, we can look at video creation and storytelling, and ask what AI is going to do to the mechanics of generating video. Frankly, whether that's a news channel trying to illustrate a concept or entertainers trying to do amazing special effects, we're going to have higher-quality special effects, and that work will shift to being AI-generated instead of manually generated with overseas visual effects companies. So there is a shift there. But the core thing, storytelling, is very hard. How do you do long-form character development, creating tension, resolving tension? That's not a case the AI does well today. Eventually, in 10 or 20 years, the AI may win the Booker Prize, okay, and if it does, it's going to be able to do a movie script also. But for now, the AI is really helping on the industrial aspects of what we do, like an amazing visual-effects shot. So it's important, and we want to stay on the front as Disney does, but it's not doing the core thing, which is: which story is going to captivate human attention?
Interviewer / Podcast Host
I'm not sure what that says about my job, about whether I keep my job in the AI world, because there are a lot of things done on television, with news and other things, that frankly are not creating the next Lion King.
AI Expert / Industry Leader
Well, I think interpreting the world, in your case, draws on broad, general wisdom, and so I think you'll have a relatively long-standing role and continued success.
Interviewer / Podcast Host
Given my age, I'm good.
AI Expert / Industry Leader
Given your age, you're good. But it is a very big change. Again, people want to compare it to the Industrial Revolution; that happened over 200 years. This is going to happen over 10 or 20. It might be that what's caused political polarization in the last decade or two is rate of change. Think about everything since NAFTA: the rise of globalization, marriage equality, immigration, all kinds of change in society. And one view is that for enough of our fellow Americans, it's just too much, and it radicalizes them, and they're willing to vote for things they wouldn't normally vote for. And if that's fundamentally what's going on in US society, as well as with Brexit and a couple of other places, then we're in for a pretty big storm, because the rate of change is not going to slow down. The rate of change of society, partially or maybe largely driven by AI, is going to be large. You know how everyone today is nostalgic for Reagan? There may be a day when we're nostalgic for Trump, if our polarization really continues to increase because the rate of change continues to increase. So that's why I think it's so important that leaders like all of you are thinking about how we build bonds, how we have Americans care about each other, so that we can keep the society coherent and caring despite the rapid change generated by technology.
Interviewer / Podcast Host
I want to come back to something you referred to, just to explore a bit more, and that is: how are we going to pay for it? There is so much money going into data-center investment right now, and to some extent, I think it's been supporting the markets. How do you get a return on that investment without laying off an awful lot of people?
AI Expert / Industry Leader
With GDP growth. So you can either automate away the jobs, that's one theory, or people will produce more, and I'm sure it'll be a mix. But efficiency often generates more growth. We did the radiology example; it's sort of that at larger scale. So the way we pay for it is more GDP growth.
Interviewer / Podcast Host
You can get GDP growth with fewer people. So what happens to the people who are out of work, or who have to take much more menial jobs than they had before, because it's the knowledge workers who are affected?
AI Expert / Industry Leader
Well, I think it's a great question. Compare it to globalization. This is probably a room full of globalists who believe in the benefits of trade. And yet, despite the fact that it really did deliver in terms of the overall economy, there was enough dislocation in enough of the country that our politics has been shifted. So I think there probably will be things like that because of AI. But at least it's not a mass-layoff scenario, most likely. It's much more likely that there's a growth response, and that we also see GDP growth significantly beyond what we typically see, because of the productivity of AI.
Interviewer / Podcast Host
What I hear from you is AI has enormous potential. A lot of it for good, some of it for decidedly not good. It's very complicated. It's coming very, very fast. Where will the leadership in directing AI come from? Are there people who understand this, who get it, who have the right values, who can help us go in the right direction, particularly given how fast it's coming?
AI Expert / Industry Leader
I would say that's an emerging area. I think this is likely to be a big political issue in the next couple of cycles, because many Americans will be concerned about the world changing too fast and too much, and I think it's up to our politicians to understand that and channel it in some productive way. There are leaders in the industry, like Geoffrey Hinton, who now spend full time on these questions: how do we keep humans at the center of the system? So there definitely are emerging leaders in that way. But again, it's a new area, like nuclear energy, or like DNA research, which for a long time was very controversial.
Interviewer / Podcast Host
Timing is everything, and the developments you talk about come at a time when there's a decided rise in populism in the United States, as well as in much of the Western world. Right now, the early things I'm hearing politically from people are: it increases our energy costs and it reduces our jobs, so we're against it. And some politicians are showing signs of trading on that, saying we should actually be against data centers. How do you square the rise in populism with the likely effects of generative AI?
AI Expert / Industry Leader
Well, the heart of democracy is that we're going to live in this country together and we're going to have different views, and so you want a system that's not civil war to work out those views. I think a lot of people will have those concerns; there will be those effects, and we have to take them thoughtfully and seriously. Arguably, during the era of globalization, there was sort of lip service to retraining, but we didn't really understand or take seriously the devastation of a lot of communities. So I think, you know, the elites in this room, as an example, should do a better job at that, which is: where is AI really providing benefits to people's health care, to their day-to-day lives? If we can do that, then there's more toleration of the dislocation, which is, as you said, the cost of electricity, data centers, those things.
Interviewer / Podcast Host
Are we over investing?
AI Expert / Industry Leader
It's unlikely, but it's definitely possible. Think of the telecom boom in 2000; that was an overinvestment, so it could be. In the telecom case, everyone got so excited about the Internet that they forward-invested and were disappointed for a while, but ultimately the Internet really did deliver. The mobile phone, by contrast, never had a bubble; it came out and just grew and grew and grew in impact. So you can get both things that live up to the hype and things where the hype gets bigger than the actual technology in the short term, and the market's a good way to work that out.
Interviewer / Podcast Host
On AI, does it matter who gets there first? I mean, is this like Betamax and VHS, where the one that gets there first gets to really set the standard?
AI Expert / Industry Leader
Well, if you're talking countries, it matters a lot.
Interviewer / Podcast Host
So US versus China, take that.
AI Expert / Industry Leader
Correct. If you're talking companies, I care a lot because I'm on the board of Anthropic. But from a society standpoint, it probably just matters that the technology is developed and deployed in great ways.
Interviewer / Podcast Host
But it does matter in the US versus China.
AI Expert / Industry Leader
Yeah, absolutely. I mean, imagine another country, it doesn't have to be China, gets ahead of us. There are significant scenarios, referred to as a class as the Singularity, where the AI gets so good that it writes the next AI, and that AI is much better, and then that writes the next AI. So being first by even just a year gives you an enormous advantage. That's the exponential. Again, that's a theory that's not yet proven, but even its slight possibility means we had better darn well get there first. And I think we're on a good path for that.
Interviewer / Podcast Host
Well, that's my question. If that were your one goal in the United States, what are the policies that get you there? What are the things we need to do or avoid doing to make sure we're as competitive as possible?
AI Expert / Industry Leader
I think we're doing a pretty good job on that. I mean, you would push the edge on light regulation, make it easy to build data centers. The government has given the industry open-field running, so I don't think we need a government investment program or something like that. So there's a lot of progress, and we are ahead of our competitors as far as we can tell. And again, we don't want to make it zero-sum. Just like we did in globalization, we would like to bring the rest of the world along with us, so that we have a stable world and a lower chance of war. And that's happening.
Interviewer / Podcast Host
Okay, so let's talk about what you really love. This has been very entertaining. But let's talk what you really love. Snow, powder and mountain resorts. This is what you really love, right?
AI Expert / Industry Leader
You know, I have a slight addiction to skiing. And on retiring from Netflix, rather than get a sailboat or something like that, I decided to take over a ski resort and try to make another Yellowstone Club, but one that's a bit more artistic. And I'm a visitor at Yellowstone, not a member, but it's a great place. So, to do something like that, yeah. I've been learning the real estate business. It's a real turnaround; it's a very different thing for me. But when you've got overwhelming capital relative to a project, things are pretty easy. I can't say it's going to work out great for customers, and I'm not sure it'll be the best investment I ever made, but I do love it.
Interviewer / Podcast Host
But you get to snowboard a lot.
AI Expert / Industry Leader
But I get to snowboard all winter. Psychic benefit. Powder Mountain, Utah, is a fantastic place. Check it out.
Interviewer / Podcast Host
Reed, thank you so much.
AI Expert / Industry Leader
Really, what an honor. Thank you all. Thank you all.
FedEx Announcer
This podcast is brought to you by FedEx. The new power move. Hey, you know those people in your office who are always pulling old-school corporate power moves? Like the guy who weaponizes eye contact. He's confident, he's engaged, he's often creepy. It's an old-school power move. But this alpha-dog laser gaze won't keep your supply chain moving across borders. The real power move: having a smart platform that keeps up with the changing trade landscape. That's why smart businesses partner with FedEx and use the power of digital intelligence to navigate around supply chain issues before they happen. Set your sights on something that will actually improve your business. FedEx: the new power move.
Date: December 11, 2025
Host: Bloomberg
Guest: Reed Hastings (Netflix Co-Founder, Board Member at Anthropic, AI Industry Leader)
In this episode, Reed Hastings, Netflix co-founder and influential figure in the AI industry, discusses the rapid advancements of artificial intelligence, its parallels with historic revolutions, societal impacts, the future of entertainment, and the pressing question of global competition—especially between the US and China. Drawing from his experiences in both tech and entertainment, he offers candid, nuanced thoughts on AI risks, economic growth, policy, and the responsibilities of industry and government.
On the obsolete early days of AI research:
"The AI that I studied was the equivalent of the sun goes around the Earth field." (AI Expert, 00:54)
On AI as a species-defining force:
“Intelligence per species has been highly selected for... Homo sapiens dominated the Earth and killed off us Neanderthals.” (AI Expert, 03:13)
On exponential risk:
“If AI develops to actually be super intelligence, then it will be a lot more profound… we will have actually real species threats.” (AI Expert, 04:17)
On intractability of global pauses/regulation:
“There’s no real practical scenario to take a break as human society.” (AI Expert, 06:09)
On job displacement vs. growth with AI (radiology):
“Now you walk in, you've got a cough, boom, you get a scan... There's about 34,000 radiologists in the US, which is a shortage.” (AI Expert, 08:34)
On existential AI risk and alignment:
"If somebody... programs their AI to try to take over the world, we're going to have to enlist the other AIs on our defense to protect us." (AI Expert, 09:58)
On where safety and ethics happen:
“Nobody wants their AI brand to be the brand that develops some bad virus.” (AI Expert, 12:34)
On AI’s creative limitations (current):
"Creating tension, resolving tension. That's not a case that the AI does well today." (AI Expert, 13:34)
On pacing and polarization:
"What’s caused political polarization... is rate of change...The rate of change is not going to slow down." (AI Expert, 15:09)
On paying for massive AI investment:
"The way we pay for it is more GDP growth." (AI Expert, 17:23)
On job displacement risks:
"There probably will be things like that because of AI, but at least it’s not...mass layoff scenario, most likely..." (AI Expert, 17:53)
On populism and political backlash:
"Early things I'm hearing… is it increases our energy costs and it reduces our jobs. We're against it." (Interviewer, 19:38)
On the US–China AI race:
"If another country... gets ahead of us... being first by even just a year gives you an enormous advantage." (AI Expert, 22:17)
Reed Hastings approaches each topic with a blend of clear-eyed pragmatism, dry wit, and measured optimism. He draws on personal anecdotes, market insights, and a structural view of society and the tech industry. The host engages with timely, tough questions, pushing for actionable insights and long-term thinking.
Hastings paints AI as the next epochal force—potentially greater than any single revolution before, bringing immense promise and unprecedented risk. While much is unknown, he believes the coming decades will require sound leadership, adaptability, and a firm commitment to shared values. The conversation concludes on a personal note, underscoring the human side of even those at the leading edge of technology.