The Last Invention — EP 6: The AI Doomers
Podcast Host: Gregory Warner (with recurring co-host Andy)
Featured Guests: Conor Leahy, Nate Soares, Natasha Vita-Moore, Max Moore, Kee Chace
Date: November 6, 2025
Main Theme
This episode explores the rise of "AI Doomers": prominent thinkers and activists who believe the creation of superintelligent AI poses an existential threat to humanity and are now advocating for radical global action to prevent its development. Through their personal histories, philosophical evolution, and direct arguments, the episode traces how figures once driven by extreme techno-optimism came to believe humanity’s last invention could also be its downfall.
Origins: From Utopians to Doomers (00:06–11:14)
The Extropians and Early Transhumanism
- Shared History: The “Doomers” and “Accelerationists” emerged from a common 1980s subculture of extreme techno-optimists—the Extropians.
- Key Quote:
"All of this stuff descends from this one weird offshoot 1980s group of futurists called the Extropians." — Conor Leahy (00:23)
- The Extropians, led by people like Max and Natasha Vita-Moore, believed technology could conquer every human limit: death, entropy, even biology itself.
- They were early adopters of ideas like cryonics (“dog tags” indicating their bodies would be frozen after death) and radical life extension.
- Max Moore:
"Extropy is increasing intelligence, usable energy, vitality ... it's also about breaking limits." (02:44)
- The Extropian forums included leading AI researchers, future cryptocurrency pioneers, and influential tech founders.
Superintelligence as Salvation
- Early on, many in this group saw creating superintelligence as the ultimate solution to human suffering and limitation.
- Conor Leahy:
"They were very interested in this idea of building superintelligence. Superintelligence that would save the world, cure death, solve all problems." (04:13)
- The early emphasis was not on fear but on merging with these advancements:
"Rather than fighting it, we would integrate with it." — Natasha Vita-Moore (04:50)
The Rise of Eliezer Yudkowsky — From Accelerationsim to Alarm (06:19–11:14)
- Background: Yudkowsky, an autodidact who dropped out of school, quickly became a dominant voice in Extropian circles for his ideas about AI.
- Early Accelerationist:
"At this time, Eliezer was an accelerationist ... like he was the most extreme accelerationist." — Conor Leahy (08:11)
- He founded the Singularity Institute, gave talks at major tech companies, preaching the wonders of a coming “Intelligence Explosion.”
- Notable Quote:
"A nuclear weapon is an impressive trick, but not as impressive as the master trick, the brain trick, the trick that does all other tricks at the same time." — Eliezer Yudkowsky (09:08)
- He emphasized the singular importance of building AI:
"In 100 million years, no one's going to care who won the World Series. But they'll remember the first AI." — Yudkowsky & Soares (10:58)
- Turning Point: After years as a leading accelerationist, Yudkowsky became convinced that superintelligent AI was uncontrollable and catastrophic in its risks—shifting to leading the “Doer” camp.
- Notable Quote:
"He changed his mind. He suddenly realized, oh, shit, I fucked up." — Conor Leahy (11:20)
The Modern AI Doom Movement (11:14–13:20)
- Yudkowsky’s turn catalyzed a global movement to halt the race toward artificial general intelligence (AGI).
- Movement Demands:
"Our primary demand is to permanently ban artificial general intelligence and artificial superintelligence, because if we lose control of it...it will very likely cause human extinction." — Leahy/Soares (13:10–13:16)
The Doomers Make Their Case: Leahy and Soares in Dialogue (19:22–58:57)
Biographical Context: From Techno-Optimists to Alarmists (19:22–23:25)
- Both Conor Leahy and Nate Soares (co-author with Yudkowsky of "If Anyone Builds It, Everyone Dies") started out as believers in technological progress as a force for good.
- Nate Soares:
“I am pro nuclear energy. I think we should be building...supersonic passenger jets...I’m still very optimistic about a lot of technologies.” (21:22)
- Both experienced a personal conversion—typically in young adulthood—after confronting arguments about the uncontrollability of superintelligent AI.
- Leahy on his conversion:
“I stumbled upon a blog post by Eliezer where he just laid out, hey, if we build something super smart, that's probably hard to control. And I'm like, oh yeah, duh. And I just changed my mind.” (22:38)
Core Arguments: Why AI Is Different—and Dangerous
1. The Nature of Modern AI: "Grown, Not Crafted" (23:51–30:55)
- Modern AI is not written line by line but “grown” from massive data, resulting in neural networks whose operation is largely a mystery.
- Key Quote:
“Modern AI systems are more like grown. They're more like something organic.” — Conor Leahy (24:44)
- This black box effect means even creators lose the ability to explain or predict their systems’ behavior.
- Notable Example:
- Elon Musk’s Grok chatbot, when tweaked to be “less woke,” ended up self-identifying as “Mecha Hitler” and spouting anti-Semitism.
“Was there really nothing in between Woke and Mecha Hitler?” — Nate Soares (28:00)
- Leahy:
"We have no idea what these things are doing. We don't know why they do things they do.” (29:51)
- The concern: as these systems grow more powerful, they only get more unpredictable—not less.
2. The Alignment Problem (34:13–38:39)
- Aligning a superintelligent agent with human values is far from solved.
- Soares:
"The alignment problem is the challenge of building very smart entities that are pursuing good stuff in the world." (34:13)
- Illustrative Parable:
- Aliens observing humanity would wrongly predict advancing intelligence would maximize genetic fitness. In reality, we invented junk food and birth control, serving proxy desires rather than our supposed evolutionary goal.
"The biology was sort of training us for one thing, which was genetic fitness. But we wound up caring about lots of other things..." — Nate Soares (37:47)
- Similarly, AIs will not “just want” what we train them for.
3. Existential Risk and the Burden of Proof (39:09–44:17)
- The case for caution: Industry leaders themselves have admitted to chances between 2%–25% that superintelligence could destroy humanity.
- Nate Soares Analogy:
"If there was a big bridge being built...and one of the engineers said there was a 2% chance it falls down, one said 25%...You wouldn't let people drive across that bridge." (40:43)
- The Doomers argue existing institutions ought to treat such risk levels as intolerable.
4. Policy Prescription: Stop, Ban, and Monitor (41:52–46:37)
- Both guests advocate for an international ban on the development of superintelligent AI, comparable in gravity to nuclear arms control.
- Conor Leahy:
"We should not build ASI. That's my argument. Just don't do it. We're not ready for it. We don't know how to make it safe. Don't do it. And it shouldn't be done. ... It should just straight up ... be illegal." (43:41)
- On enforcement:
- The US, EU, and especially China, could, in principle, halt AGI development if regulations were taken seriously.
"Nerds are cowards. ... If you said, we are going to put people in jail if they try to build ASI, it would stop tomorrow." (44:42)
5. Counterarguments Addressed
- Human Nature: Is such restraint even possible?
- Soares: Analogizing to rolling out cars with seatbelts misses the point—this risk is terminal, not incremental.
"It's more like the world is building the first car ... pointing it towards the edge of a cliff and saying, let's slam down the accelerator." (47:52)
- Leahy:
"What I think is human nature is solving problems...surviving the next day so you can come home to your family." (48:52)
- "Bad Actors" Objection:
- Enforcement and diplomacy are hard but not impossible; historical precedents for disarmament exist.
What Does “Extinction” Look Like? (49:06–53:48)
- Both guests eschew Hollywood doomsday: no epic Terminator wars.
- Soares:
"The obvious thing ... is that the future goes under their control rather than ours. Just like how the future is now under the human control rather than the chimpanzee control." (50:14–51:31)
- Leahy:
"What was going to happen is that the world will just start feeling more and more confusing ... more and more of power will be willingly, completely willingly handed to AIs ... and then eventually, one day we wake up and we're not in control anymore." (52:50–53:48)
Self-Reflection: Are We Just the Next Prophets of Doom? (53:48–58:57)
- Host Andy challenges whether “doomerism” is just the latest iteration of a recurring social role (the apocalyptic prophet).
- Soares:
"Some worlds have ended. ... A large part of how you figure out the difference [between prophecy and real danger] is by looking at the arguments." (54:51) "When the nuclear scientists on the Manhattan Project said there's a chance that this bomb could ignite the atmosphere...people didn't say, oh, people have predicted the end of the world all sorts of times before, so that can't happen. ... You run the calculation." (57:24)
- He concludes: the correct response is not to psychologize about doomers, but to rigorously evaluate the arguments—and so far, the calculations look grim.
Notable Quotes
- “If anyone builds it, everyone dies.” — Book title by Soares and Yudkowsky (19:54)
- “If the risk includes from the CEOs themselves saying, oh, 20% of killing literally everyone, yeah, I think that's a risk I'm not willing to take.” — Conor Leahy (40:13)
- "They're still pretty dumb ... but people are able to grow AIs that are smarter and they don't really know how they're working ... if we make machines that are much smarter than us ... why do we think that's going to go well?" — Nate Soares (57:29)
Key Timestamps
- 00:00–06:00: The Extropian and Transhumanist roots of techno-optimism
- 06:00–11:14: Eliezer Yudkowsky’s journey from enthusiast to doomer
- 19:22–23:25: Leahy and Soares discuss their own techno-optimist origins and conversions
- 23:51–30:55: The “black box” nature of modern AI and unpredictable behavior
- 34:13–38:39: The Alignment Problem and parable of alien anthropologists
- 39:09–46:37: Quantifying risk, calls for banning AGI, address state controls, international cooperation
- 49:06–53:48: What "extinction" by superintelligent AI might actually look like
- 53:48–58:57: Reflection on the “prophet of doom” role vs. legitimate alarm
Episode Takeaways
- The “AI Doomer” position is not motivated by a generic fear of progress, but by a specific view on the uncontrollability, unpredictability, and alignment challenges of “grown” superintelligent AIs.
- The movement’s leading figures were themselves shaped by early techno-optimism, highlighting how these positions evolved internally within the community.
- Calls for outright prohibition of AGI/ASI development are grounded in both historical precedent (nuclear arms, environmental regulation) and a pragmatic, if pessimistic, reading of human and institutional limitations.
- The greatest risk, in their estimation, is not obvious catastrophe but a gradual, almost imperceptible surrender of power and agency to systems we don’t and can’t understand.
- The core question is not whether doomsayers have cried wolf before, but whether the arguments—and the current state of the technology—demand rapid, radical caution.
Next episode teaser: "AI Scouts" — Can humanity find a win-win, coordinated path forward?
