The Last Invention – EP 4: Speedrun
Podcast Host: Longview
Date: October 16, 2025
Episode Overview:
This episode explores how the race to build artificial general intelligence (AGI) accelerated due to a paradoxical dynamic: the very people most concerned about AI's risks became the ones competing to build it fastest. Through deep dives into pivotal moments—Elon Musk’s alarms, Google’s DeepMind, OpenAI’s origin, and the development of super-powered models like ChatGPT—the episode traces rivalries, transitions from nonprofit idealism to for-profit pragmatism, and the immense pressures fueling a breakneck "speedrun" in AI development.
Main Theme:
The AI Speedrun—How Warnings Became a Race
The episode unpacks the core paradox driving the current AI landscape: the people most anxious about superintelligent AI (Elon Musk, Sam Altman, Demis Hassabis, Dario Amodei, et al.) concluded the only way to prevent catastrophe was to build transformative AI themselves—faster than anyone else.
Key Discussion Points & Insights
1. The Musk Paradox: Sounding the Alarm, Starting the Race
- 2017 National Governors Association: Elon Musk warns governors that AI is an "existential risk" and says proactive regulation is necessary—otherwise robots will eventually "do everything better than us" ([02:43]).
- "AI is a fundamental risk to the existence of human civilization. ... we need to be proactive in regulation instead of reactive. Because by the time we are reactive ... it's too late." – Elon Musk [02:43]
- Backstory—The DeepMind Meeting:
- 2012 meeting (organized by Peter Thiel) between Musk and Demis Hassabis (DeepMind). Hassabis claims AI could follow humans to Mars, which unsettles Musk ([06:26]).
- "My AI will be able to follow you to Mars." – Attributed to Demis Hassabis (as recounted by interviewee) [05:06]
- Musk invests in DeepMind to "keep tabs" but fails to prevent Google’s acquisition.
- Musk ramps up public warnings about AI after losing out on DeepMind ([07:32]).
- “AI is far more dangerous than nukes. ... the single biggest existential crisis that we face.” – Elon Musk [07:32]
- Musk Tries to Rouse Action:
- Unsuccessful lobbying with President Obama and lawmakers; hosts "AI safety" dinners with tech leaders ([09:12]).
- At one such dinner, Sam Altman (then Y Combinator president) convinces Musk to create OpenAI as a nonprofit counterweight to Google ([12:15]).
2. OpenAI: The Nonprofit Ideals & Internal Fissures
- Founding OpenAI (2015):
- Born out of existential fear: "If you want to stop a dangerous AI ... we need to make a safe AI before they make a dangerous one." – Sam Altman [12:15]
- Nonprofit, open source, for humanity ([13:13])
- "OpenAI is structured as a 501c3 nonprofit to help spread out AI technology so it doesn't get concentrated in the hands of a few." – Elon Musk [13:13]
- Recruiting ‘Believers’:
- OpenAI’s early team: AGI doomers (AI could kill us) and accelerationists (AGI will bring utopia) ([15:02])
- "They were both there together in that one lab. ... The doomers and the accelerationists are just two sides of the same coin." – Sam Altman [15:44]
- First Setbacks & DeepMind’s Go Triumph:
- OpenAI starts with scattered research, no major gains.
- DeepMind’s AlphaGo defeats Go champion Lee Sedol in 2016—a landmark moment ([17:50]).
- Move 37’s impact: AlphaGo invents a creative move no human ever played ([24:54])
- "That move was what got everyone to say ... oh, the computer, like this machine can be creative, it can be intuitive." – Expert/Analyst [26:03]
- AlphaGo’s triumph sparks Musk’s frustration with OpenAI’s pace—he demands more focus and action.
- “There obviously needs to be immediate and dramatic action, or else everyone except for Google will be consigned to irrelevance.” – Musk, quoted from internal OpenAI email [28:25]
3. Nonprofit to For-profit: The Musk–Altman Rift
- Tensions Over OpenAI’s Direction:
- To compete with DeepMind/Google, OpenAI needs more computing power—proposes a for-profit arm ([29:25]).
- Musk wants to be CEO if OpenAI turns for-profit, threatening to leave if not given control ([30:48]).
- "If you're going to turn that nonprofit into a for profit, then I should be the head of that." – Interviewer/Commentator paraphrasing Musk [30:27]
- Others resist, citing original mission to "avoid an AGI dictatorship" ([31:16]).
- "The goal of OpenAI is to make the future good and avoid an AGI dictatorship... So it is a bad idea to create a structure where you could become a dictator if you choose to." – Ilya Sutskever, in email to Musk [31:16]
- Musk leaves; Altman & team proceed without him ([32:06]).
4. New Directions: Language Models & the Power of Scale
- OpenAI’s Breakthroughs Post-Musk:
- Shift from games to language models marks a turning point.
- "DeepMind had to copy paste OpenAI." – Andrej Karpathy, quoted by Keach Hagey [38:33]
- Team focuses on scaling (bigger models, more data), leading to the creation of GPT models ([39:12]).
- Dario Amodei’s Influence:
- Pioneer in "mechanistic interpretability" and evangelist for the ‘scaling’ approach: more data + bigger models → more intelligence ([42:26]).
- Major Microsoft partnership supplies OpenAI with unprecedented GPU resources ([44:52]).
- “Guys, what if next time ... we take this model and we crank it up to 10,000 GPUs?” – Recounted from Dario Amodei’s proposal [45:17]
- The Data Grab and Ethical Tensions:
- To match their compute, OpenAI ingests massive swaths of internet data—copyright and ethical debates arise ([49:01]).
- “They just started dumping big chunks of the Internet into their AI ... better to ask for forgiveness instead of permission.” – Interviewer/Commentator paraphrasing [49:33]
5. The Accelerationist Paradox & the Birth of Anthropic
- Dario Amodei’s Departure:
- As safety advocates become accelerationists (“to ensure safety, you must build first”), Dario Amodei and his safety team leave OpenAI, founding Anthropic due to both philosophical and practical differences ([54:20]).
- “If you want to do cutting edge AI safety research on very powerful systems, you need to actually build those powerful systems.” – Narrator/Host [51:47]
- The Race Intensifies:
- Now, multiple players (OpenAI and Anthropic) seek to bring safe superintelligence to market first.
- "To make AGI safe, you need to make it fast." – Interviewer/Commentator paraphrasing [53:29]
- Fear of missing “winner-take-most” rewards and being perceived as second movers compels OpenAI to release ChatGPT preemptively ([57:09]).
6. The Unlikely Launch and Mass Adoption of ChatGPT
- ChatGPT Releases Almost by Accident:
- Initially considered a “low key research preview,” ChatGPT is released to stake claim before Anthropic ([58:38]).
- “The highest bet was 100,000 [users]. So that’s how many users they provisioned their servers for.” – Sam Altman [58:49]
- ChatGPT’s viral explosion shocks even its creators ([58:49]–[59:42]).
7. A Sobering Reflection
- Celebration and Dread:
- AI scientists, in the wake of recognition and achievement, reflect on the anxiety that comes with rapid progress.
- "I thought, I've achieved the greatest prize ... and my career has been so rewarding ... But ... aren't we going to build machines that we don't control and could potentially destroy us?" – Narrator/Host, quoting a leading AI scientist [60:58]
Notable Quotes & Memorable Moments
- Elon Musk warning governors:
- “Until people see robots going down the street killing people, they don't know how to react.” – Elon Musk [02:20]
- On Move 37 (AlphaGo):
- "The computer can be creative, it can be intuitive, it can sort of like master this thing that I always thought was a human task." – Expert/Analyst [26:03]
- "Maybe he just can show humans something we never discovered. Maybe it's beautiful." – Interviewer/Commentator [26:16]
- On AI safety:
- “The only way to stop a bad guy with a powerful AI is a good guy with a powerful AI.” – Narrator/Host [52:41]
Timestamps for Important Segments
- Musk’s 2017 public warning: [02:20]–[03:34]
- Musk–Demis DeepMind meeting: [05:06]–[06:26]
- AlphaGo vs. Lee Sedol (the Go match and Move 37): [17:52]–[26:16]
- OpenAI’s internal emails: Musk’s frustration: [28:25]–[31:15]
- Musk leaving OpenAI: [31:47]–[32:06]
- Birth of language model focus at OpenAI: [39:12]–[41:00]
- Massive scaling hypothesis (10,000 GPUs): [45:17]
- OpenAI’s data collection tactics and legal/ethical debate: [49:01]–[50:07]
- Dario Amodei's departure and Anthropic’s founding: [54:20]–[56:04]
- Rumor of Anthropic’s chatbot, ChatGPT pre-emptive launch: [56:42]–[59:05]
- ChatGPT’s viral reception: [59:25]–[59:45]
- AI leaders reflect on progress and anxiety: [60:54]–[62:31]
Tone, Language, and Atmosphere
In the voices of the commentators, executives, and AI pioneers, this episode weaves together urgency, competition, and barely contained existential dread. The tone shifts from Musk’s prophetic warnings, through Silicon Valley’s mythmaking and rivalry, to a sobering, often self-aware anxiety as the pace of AI outstrips its creators’ capacity to ensure safe outcomes.
Conclusion
This episode of The Last Invention vividly maps the origins of the AI “speedrun”—the drive to build AGI faster and “safely,” led paradoxically by those who most feared its dangers. Rivalries, existential panic, and once-unthinkable technical achievements—all fueled by an arms-race mentality—have made AI’s present and future as chaotic as it is promising. The final reflection is both triumphant and uneasy, as the builders acknowledge how little time may be left to answer the existential questions they themselves set in motion.
