Making Sense with Sam Harris
Episode #435 — The Last Invention
Date: October 2, 2025
Episode Overview
This episode previews the new podcast series "The Last Invention"—a limited-run, documentary-style exploration of artificial intelligence, produced by award-winning journalists Andy Mills and Gregory Warner (Longview). This installment investigates the controversy, optimism, and existential dread surrounding the rise of Artificial General Intelligence (AGI), featuring interviews with leading thinkers and insiders, including Sam Harris, Nick Bostrom, Geoffrey Hinton, Kevin Roose, William MacAskill, Connor Leahy, and others.
The episode introduces the cultural and ethical stakes of AI's rapid advancement, the conflicting factions within tech (accelerationists vs. “doomers” vs. “scouts”), and the fundamental question: Could AI be humanity’s last and most transformative invention—or its ultimate undoing?
Key Discussion Points & Insights
1. A Conspiracy and the Real Stakes of AI
[04:03 - 09:00]
- The episode opens with a real-world conspiracy theory: a whistleblower (Mike Brock) alleges a “faction” in Silicon Valley plotted to overtake the U.S. government by automating it with AI, starting with initiatives like the Department of Government Efficiency (DOGE) under Elon Musk.
- Andy Mills investigates the claim, discovering that while some elements are unsubstantiated, the broader reality may be more consequential: there are powerful figures openly aiming to replace not just bureaucrats, but jobs and institutions wholesale with artificial intelligence.
Quote [08:22] - Sam Harris:
“The world as you know it is over...I believe it’s going to change the world more than anything in the history of mankind, more than electricity.”
2. Accelerationists: AGI as Humanity’s Leap Forward
[09:00 - 12:50]
- Some technologists see AGI as an inevitable, positive revolution: a tool promising unprecedented abundance, solutions to disease, climate, poverty, and even immortality.
- This group—labeled Accelerationists—includes key industry figures (e.g., Bill Gates, Sam Altman, Mark Zuckerberg).
- The technology is viewed not as mere software, but as a new “intelligent species”—potentially capable of exceeding all human cognitive abilities.
Quote [11:28] - Kevin Roose:
“Many of these people believe that the human brain is just a kind of biological computer...if you could just build a computer that simulated that, you could essentially create a new kind of intelligent being.”
Quote [12:19] - Kevin Roose:
“It wouldn’t be a computer program...It wouldn’t be a human...It would be this sort of digital supermind that could do anything a human could and more.”
3. From Dreams to AGI: The Shifting Timelines
[13:07 - 15:12]
- AGI was once considered a distant, even laughable goal; now, frontline AI developers predict it may arrive within 3–5 years.
- Vast investment and a “race” for AI supremacy reflect how imminent the breakthrough appears to insiders.
Quote [14:19] - Kevin Roose:
“I think the overwhelming majority view...is that it would be surprising to them if it took more than about three years for AI systems to become better than humans at almost all cognitive tasks...certainly within the next five.”
4. The Doomers: Existential Risk and the Call to Halt Development
[15:16 - 27:16]
- A rival faction—dismissively dubbed "Doomers," a label many of them reject in favor of "Realists"—warns that AGI is uniquely dangerous.
- Figures like Eliezer Yudkowsky, Nick Bostrom, Geoffrey Hinton, and Connor Leahy explain that a self-improving AGI could quickly progress to ASI (artificial superintelligence), which could render humans obsolete or worse.
- Once an AGI starts designing successors, each iteration grows exponentially more intelligent, consolidating control.
- The risk is likened not to an evil AI, but to indifference; ASI may regard humans as inconveniences, comparable to the way we regard ants.
Quote [17:34] - Geoffrey Hinton:
“It really is an existential threat...I want to explain to people it’s not science fiction, it’s very real: the risk that we’ll develop an AI that’s much smarter than us and it will just take over.”
Quote [23:25] - William MacAskill:
“It’s not like we hate ants...but if ants get in the way of our interests, then we’ll fairly happily kind of destroy them.”
Quote [21:18] - Connor Leahy:
“By default, these systems will be more powerful than us...unless they have a very good reason for keeping humans around, I expect that by default, they will simply not do so, and the future will belong to the machines, not to us.”
- The “Doomer” perspective: once AGI/ASI is released, its goals may diverge from ours, with irreversible consequences.
- Some advocate for strict, even militarized prevention of AGI’s emergence—including, in principle, outlawing certain computing endeavors or physically disabling datacenters.
Quote [24:50] - Connor Leahy:
“We should not build ASI. Just don’t do it. We’re not ready for it and it shouldn’t be done...I think it should be legally illegal for people and private corporations to attempt even to build systems that could kill everybody.”
5. The Scouts: Preparation and Global Cooperation
[27:36 - 32:14]
- A third camp—the "Scouts"—sees stopping AGI as neither feasible nor necessarily desirable; its members instead advocate maximal preparation, prudent regulation, and global cooperation to align AI's evolution with human values and survival.
- Proponents, such as William MacAskill and Geoffrey Hinton, argue that regulation (including safety tests, whistleblower protections), governance frameworks, and international collaboration are urgent and necessary.
- The metaphor: We’re on a “tightrope walk” with no practice—a single misstep could be fatal, so every action must be as careful and coordinated as possible.
Quote [32:32] - Sam Harris:
“There’s every reason to think that we have something like a tightrope walk to perform successfully now...and we’re edging out onto the tightrope in a style of movement that is not careful...we’re like, racing out there in the most chaotic way, flailing our arms...off-balance already, looking over our shoulder, fighting with the last asshole we met online, and we’re leaping out there.”
- International cooperation is possible: Hinton suggests that even rival nations share an interest in preventing machines from taking power, which could enable shared research and standards.
Quote [30:25] - Geoffrey Hinton:
“No government wants [AI] to take over, so governments will be able to collaborate on how to deal with that...we have to make it not want to take over, and the techniques you need...are different from the techniques you need for making it more intelligent.”
Quote [28:39] - William MacAskill:
“We should be really focusing a lot right now on...what are all the obstacles we need to face along the way and what can we be doing now to ensure that that transition goes well.”
Notable Quotes & Moments
- [12:56] Gregory Warner: “I guess this is where people get worried about jobs getting replaced…”
- [15:50] Sam Harris: “I am worried about the AI that is smarter than us. I’m worried about the AI that builds the AI that is smarter than us and kills everyone.”
- [18:37] Geoffrey Hinton: “What I’m talking about is the existential threat of this kind of digital intelligence taking over from biological intelligence…for that threat, all of us are in the same boat.”
- [23:04] Andy Mills: “If you’re going to build a new house…you’re not going to be concerned about the ants that live on that land that you’ve purchased.”
- [33:35] Sam Harris: “Imagine we received a communication from elsewhere in the galaxy from an alien civilization…People of Earth, we will arrive on your lowly planet in 50 years. Get ready…That is what we’re building—that collision and that new relationship.”
Segment Timestamps
- (00:21–03:55): Sam Harris introduces the series and its creators
- (03:55–09:00): The Silicon Valley “conspiracy” and Mike Brock’s warnings
- (09:00–12:50): Accelerationists and their vision for AGI
- (12:50–15:12): How the timeline for AGI has rapidly shifted
- (15:16–27:16): The existential risk camp (“Doomers”), recursive self-improvement, and why some fear even continued AI research
- (27:36–32:14): The Scouts' approach—preparing for AGI through regulation and cooperation
- (32:14–33:35): The “tightrope” analogy and urgency of careful progress
- (33:35–End): Recap, teasers for future episodes, and conclusion
Tone & Language
- The tone is urgent, balanced, and exploratory; the hosts blend skepticism, empathy, and a willingness to engage with all sides.
- Notable is the repeated use of vivid analogies (ants, tightrope walk, alien visitors) to ground abstract risks in relatable terms.
Summary for New Listeners
This episode delivers an essential, nuanced map of the ideological battlegrounds shaping the AI revolution, spotlighting the hopes, anxieties, and philosophies driving key players. Whether you’re worried about job loss, societal upheaval, or the extraordinary promise (and peril) of a machine intelligence smarter than humankind, this episode provides a clear, engaging, and comprehensive entry point into the debate.
To follow the series, search for “The Last Invention” from Longview wherever you get your podcasts.
