Podcast Summary: The Gist – Andy Mills: "Acceleration Is Salvation" — and Why AI Might Be the Last Invention
Date: January 6, 2026
Host: Mike Pesca
Guest: Andy Mills, creator of “The Last Invention” podcast
Episode Overview
This episode of The Gist centers on artificial intelligence (AI), particularly the concept of “the last invention”—an AI so advanced it could outthink humans and rapidly invent new things. Host Mike Pesca interviews Andy Mills, whose podcast critically explores the history, promise, and dangers of AI, including existential risks, hopes, and the accelerating pace of development. Their conversation navigates through the technical, philosophical, and ethical implications of AI, and examines three broad schools of thought regarding its future: the doomers, the accelerationists, and the “scouts.”
Key Discussion Points & Insights
1. Defining “The Last Invention”: What Is AI, AGI, and ASI?
- Origin of the Term:
  - The episode's title refers to I.J. Good's 1965 paper, which posited that once humans create "an ultra-intelligent machine … the last invention man need ever make," the machine would design even better machines, leading to an intelligence explosion ([09:48–11:50]).
  - Notable Quote: Andy Mills: "That thing could build the next thinking machine, which would build the next thinking machine. And then you'd have this exponential explosion of insight and thought and power." ([10:57])
  - But, as Good warned, if the machine isn't docile or controllable, "the last invention man ever need make would be the last invention." ([11:38])
- AI, AGI, ASI—What's the Difference?
  - AI: Current systems (e.g., YouTube recommendations).
  - AGI (Artificial General Intelligence): Systems as generally intelligent and adaptable as humans.
    - Andy Mills: "The goal is to create an automated system that is as intelligent, as capable of learning new things as a very smart human." ([13:03])
  - ASI (Artificial Superintelligence): AI that surpasses human intelligence to such a degree it can control civilization-level outcomes.
- Impending Arrival:
- Recent progress has surprised even skeptics; some experts predict AGI within 3–10 years ([13:36–14:30]).
2. The Black Box Problem
- Unique Unknowability:
- Unlike inventions like the steam engine or the loom, AI systems—especially large language models (LLMs)—are inherently opaque to their creators ([15:42–19:12]).
- Pesca analogy: “If Robert Fulton had no idea how the steamship worked, that would be a problem.” ([24:45])
- Even creators can only partially understand decisions or “moves” made by these systems (e.g., AlphaGo’s “move 37”—an apparently bizarre, but actually brilliant, Go move that AI made and humans didn’t initially comprehend) ([22:14–24:32]).
- Mills: “At the end of the day, it remains a mystery. And it’s one of the things that’s so intriguing about it.” ([23:41])
3. Three Camps: Doomers, Accelerationists, and Scouts
A. Doomers
- Belief: There’s a significant—sometimes 20%, sometimes 98%—chance that advanced AI could end humanity.
- Mechanism for Extinction:
- As superintelligent systems iterate themselves, they could dominate or marginalize humanity, much as humans outcompeted other species ([26:56–29:07]).
- Modern Concerns:
- Risks may not stem from AI with “evil motives,” but from unintended consequences—AI could make us so entertained, passive, or manipulated through digital means (e.g., addictive media, misinformation) that humans effectively lose control ([36:15]).
- Mills: “They’re not saying it’s going to be intentional … we essentially sleepwalk into the day when we’ve given the machines so much power … that they are now in control.” ([36:15–37:20])
- Doomer critique & historic analogy:
- Pesca notes that technological doomerism is a recurring theme (peak oil, population bomb, etc.), not unique to AI ([38:55–39:17]).
- Mills: “I have an allergy to predictions of apocalypse … I have to admit I still can’t quite get to existential threat. But … it is likely to be a hinge moment in human history.” ([39:17–41:15])
B. Accelerationists
- Belief: Rapid AI development is inevitable and beneficial. The faster we reach AGI, the more society will benefit—"acceleration is salvation."
  - Mills: "Their motivation … is almost more like a religious component where they think if the Chinese government, if the wrong … company … [gets AGI first], it could be catastrophic. … For them, acceleration is salvation." ([43:58–45:46])
- Motivation:
- Part genuine idealism, part self-interest—many are invested emotionally and financially in AI startups ([43:58–45:46]).
- Self-Regulation:
- Despite little regulation, AI labs invest substantially in safety because they genuinely fear potential harms ([47:25–48:11]).
C. Scouts (The Cautiously Curious)
- Mills’s Term: A middle-ground camp focused on mapping unknowns, exploring, and scrutinizing claims from both extremes ([25:45]).
- Approach: Investigate and understand AI’s mechanisms (“make the black box translucent”), rather than only hyping risk or advocating reckless progress.
4. Control, Safety, and Human Responsibility
- The Regulation Problem:
- Despite public statements, meaningful regulation remains absent; safety work is mostly voluntary ([47:25–48:11]).
- Historical Analogies:
- Nuclear and biological weapons were eventually restricted, partly because they lacked massive commercial incentives. AI, being both highly commercial and potentially dangerous, may be harder to control ([42:25–43:58]).
5. The “Worthy Successor” Theory: Should Humanity Create Its Own Replacement?
- Emergent Philosophy:
- Some prominent technologists (“worthy successor” proponents) argue it’s humanity’s destiny—even duty—to create a more ethical, evolved digital species that could surpass us ([32:59–35:20]).
- Mills: “They think we might create a digital species truly worthy of handing over dominion to, sure. And that it will do a better job than us. And that we should be excited about that.” ([33:35–34:56])
- Pesca expresses visible skepticism: “It doesn’t have a soul. … My car doesn’t engage in war. I don’t think it’s a better species than a person, it is still a machine.” ([33:30–35:20])
Notable Quotes & Memorable Moments
- On rapid AI development:
  "Even the people who were poo-pooing this idea, even the people who are making fun of the technologists … now some of them are saying, 'This could be here by 2028. This could be here by 2030.'"
  —Andy Mills ([13:36])
- On the unknowability of AI:
  "It's not mysterious how a calculator works, and a calculator will never be AI. The fact that the interworkings of this remain a mystery … is a sign that they're on the right track."
  —Andy Mills ([24:32])
- On media-induced doomerism:
  "This is Aldous Huxley and this is WALL-E, and this is amusing ourselves to death. … AI accelerates this trend, then we will become the people who are taking Soma."
  —Mike Pesca ([37:20])
- On healthy skepticism:
  "I have an allergy to predictions of apocalypse … I am not walking around day in, day out with that fear of apocalypse. But … I have been convinced that AI is not a hoax … it is likely to be a hinge moment in human history."
  —Andy Mills ([39:17–41:15])
- On accelerationist motives:
  "Their motivation … is almost more like a religious component … for them, acceleration is salvation."
  —Andy Mills ([43:58–45:46])
Timestamps for Key Segments
- [09:32–11:50] – I.J. Good and the “Last Invention” concept
- [12:22–14:41] – Definitions: AI, AGI, ASI, and the arrival timeline
- [16:16–19:12] – The “black box” problem in AI systems
- [22:14–24:32] – AlphaGo’s move 37 & the mystery of AI reasoning
- [26:03–29:07] – Doomer scenarios: how AI could plausibly destroy humanity
- [32:59–35:20] – “Worthy successor” movement: the case for creating consciousness beyond humans
- [36:15–37:20] – Unintentional societal risks (manipulation, addiction, malaise)
- [43:58–45:46] – Accelerationists and their motivations
- [47:25–48:11] – AI industry’s self-policing on safety
Summary & Takeaway
This episode is a deep, balanced, and sometimes wryly comedic exploration of artificial intelligence's grand promises and existential risks. Pesca and Mills move beyond hype and panic, breaking down complex themes around AGI, control, and society's possible futures. The dialogue highlights why AI is called "the last invention"—not because it might make everything better, but because it might make everything irreversibly different, with unpredictable consequences.
Above all, the episode urges listeners—and society—to engage more deeply with these questions. As Mills says:
“It may turn out, even if they're just half right, that we wished we would’ve gotten in on this debate a lot sooner.” ([41:35])
