Podcast Summary: "They Warned About AI Before It Was Cool. They're Still Worried"
Podcast: Consider This from NPR
Host: Juana Summers, with reporting from NPR's Martin Kaste
Date: September 25, 2025
Duration: ~15 minutes
Overview
This episode of NPR's Consider This explores the evolving conversation around artificial intelligence—specifically, the fears and warnings from researchers who have long sounded the alarm about AI's potential dangers. The episode weaves together history, recent AI advancements, and varied perspectives—from "AI doomers" to students using AI in daily life—while examining whether society is ready (or willing) to put meaningful brakes on AI's rapid development.
Key Discussion Points & Insights
Science Fiction Origins & AI Doomerism
- AI in Sci-Fi:
The episode opens with iconic AI fiction references, like HAL from "2001: A Space Odyssey" and Skynet from "Terminator," setting the stage for fears of runaway machine intelligence.
- "Skynet becomes self-aware at 2:14 a.m. Eastern Time, Aug. 29... In a panic, they try to pull the plug. Skynet fights back." — (00:26–00:36)
- The Singularity:
Keith Roedersheimer, a former fellow at the Singularity Institute (now MIRI), describes the classic "self-improving AI" scenario, emphasizing the danger of recursive, exponential self-enhancement.
- "It's able to look at its own source code and say, ah, if I change this, I'm going to get smarter... that creates an extraordinarily intelligent thing." — Roedersheimer (01:05)
- "They called this the Singularity because that intelligence could grow so fast, our human minds might not be able to keep up." — Juana Summers (01:22)
AI in Everyday Life: Ambition, Irony, and Fatalism
- AI Developers’ Mindset:
At a San Francisco AI demo night, entrepreneur Jonathan Liu (creator of "Cupidly," an AI dating agent) embodies a duality common among technologists: hope for utopia coexisting with genuine doomsday anxiety.
- "I think once we do get super intelligence, hopefully we'll live in a utopia where nobody has to actually work ever again." — Jonathan Liu (05:09)
- "What's my P doom [probability of AI doom]? I would say around 50%." — Jonathan Liu (05:36)
- "And yet you're smiling about it." — Martin Kosti (05:40)
- "I'm smiling about it because there's nothing we can do about it." — Liu (05:42)
- Alignment Problem:
AI "boomers" and "doomers" alike wonder: if we create something smarter than humanity, how do we ensure it is aligned with our goals and values? This fundamental challenge is known as "the alignment problem."
- "If they were to build something that's smarter than us, how would they keep it on our side? That problem is called alignment..." — Martin Kaste (05:44)
Extreme Caution: Warnings from Longtime Critics
- MIRI & Nate Soares:
In Berkeley, MIRI president Nate Soares gave up on technical "alignment" efforts and is now focused on a stark warning: stop building advanced AI altogether.
- "I spent quite a number of years, maybe about 10 years, trying to figure out how to make AI go well. And for a bunch of reasons that's been going poorly." — Nate Soares (06:40)
- "I would not call it AI safety. I would say, you know, safety is for seatbelts. And if you're in a car sort of careening towards a cliff edge, you wouldn't say, hey, let's talk about car safety here. You would say, let's stop going over the cliff edge." — Soares (07:10)
- MIRI’s new book is pointedly titled:
"The title of the book is 'If Anyone Builds It, Everyone Dies.'" — Soares (07:54)
- Timeline Anxiety:
Soares refuses to put a precise date on when catastrophe could strike, but he warns the window could be "a couple years, could be a dozen years." (08:04)
Pushback, Criticism, and Policy Inertia
- Critics of AI Pessimism:
Some believe the concerns are exaggerated, arguing that current AI is still far from human-level intelligence. There is also the risk that doom-talk inadvertently inflates AI's reputation.
- "Some critics say it's overblown... Others say the doomers are unwittingly hyping AI." — Martin Kaste (08:15)
- The Atlantic labeled MIRI figures as "useful idiots" for making AI look more powerful than it is.
- Government Regulation and International Race:
- Governmental efforts lag behind the rapid pace of AI development.
- "Right now the conversation on AI is still very, very early." — Mark Beale, AI Policy Network (08:46)
- Game theory dominates the larger conversation: No single nation or company is willing to pause for fear others will press ahead, creating a classic "race" to potentially catastrophic outcomes.
- "If I end up killing everyone, I've maybe taken off a couple of weeks because OpenAI would have done it a week later. And then Trump and Vance can say, yeah, maybe this will kill everyone, but if we don't do it, China will." — Jim Miller (09:23)
- Personal Stakes:
Economist Jim Miller is so convinced by AI doom calculations that he has postponed a risky brain surgery, betting that superhuman AI will arrive within the next few years. (09:42)
Younger Generation and Practical Risks
- Campus Voices:
At UC Berkeley, student club members are skeptical of immediate doom but note that AI is already deeply integrated into daily life (assignments, automation of thinking), raising subtler concerns.
- "I can't remember the last time I did an assignment without using AI." — Adi Mehta (10:38)
- "It's automating a lot of our thinking away, which personally, that's like a pretty big fear." — Student (10:38)
- "I think many things are possible but it seems like it's not the most likely scenario at this stage..." — Natalia Trounce (10:39)
- Difficulty of Persuasion:
MIRI's Soares laments the difficulty of convincing people of existential risk when AI is so normalized.
- "With AI already such a normal part of life here, it's hard to convince people that we're about to go over that cliff." — (11:25)
- But he hopes for a gradual enough progression to enable a warning and response:
"Maybe the AI is doing a little better, getting a little smarter, getting... more reliable. Maybe that'll make people a lot more spooked. I don't know." — Soares (11:25)
- A Wish to Be Wrong:
- "And maybe, just maybe, he and his fellow AI doomers are wrong about the danger. He says he would love to be wrong, but he doubts he is." — Martin Kosti (11:37)
Notable Quotes & Memorable Moments
- "What's my P doom? I would say around 50%."
— Jonathan Liu (05:36) - "I would not call it AI safety. ... if you're in a car sort of careening towards a cliff edge, you wouldn't say, hey, let's talk about car safety. You would say, let's stop going over the cliff edge."
— Nate Soares (07:10) - "The title of the book is 'If Anyone Builds It, Everyone Dies.'"
— Nate Soares (07:54) - "If I end up killing everyone, I've maybe taken off a couple of weeks because OpenAI would have done it a week later."
— Jim Miller (09:23) - "I can't remember the last time I did an assignment without using AI."
— Adi Mehta (10:38) - "Maybe the AI is doing a little better, getting a little smarter, getting... more reliable. Maybe that'll make people a lot more spooked. I don't know."
— Nate Soares (11:25) - "He says he would love to be wrong, but he doubts he is."
— Martin Kosti on Soares (11:37)
Timestamps for Key Segments
- 00:00–01:22: Sci-fi fears, origin of AI doomerism, Singularity
- 04:21–05:44: AI demo night in San Francisco; Jonathan Liu’s optimism and fatalism
- 05:44–07:23: Alignment problem explained; interviews with researchers at MIRI (Nate Soares)
- 07:23–08:15: Why some have given up on alignment, “If anyone builds it, everyone dies”
- 08:15–09:12: Criticism of doomer stance; policy debate and regulatory inertia
- 09:12–09:42: The international AI race and personal stakes (Jim Miller’s health decision)
- 10:38–11:17: Student perspectives at UC Berkeley; everyday uses and emerging risks
- 11:25–11:52: Soares on gradual risk, hope to be proven wrong
Tone and Takeaway
Throughout, the tone is alternately wry, anxious, and urgent, reflecting both the uncertainty and the near-resignation among some AI experts, offset by skepticism from younger users and critics. The episode leaves listeners with a sense of precariousness: the future of AI could be mundane, utopian, or an extinction-level disaster, and the outcome may hinge as much on policy and social response as on technical breakthroughs.
The final word is a mix of hope and doubt: the doomers, as reporter Martin Kaste notes, "would love to be wrong, but [doubt] they are." (11:52)
