The Last Invention – EP 4: Speedrun
Podcast Host: Longview
Date: October 16, 2025
Episode Overview:
This episode explores how the race to build artificial general intelligence (AGI) accelerated because of a paradoxical dynamic: the very people most concerned about AI's risks became the ones competing to build it fastest. Through deep dives into pivotal moments—Elon Musk’s alarms, Google’s acquisition of DeepMind, OpenAI’s origin, and the scaling breakthroughs that led to ChatGPT—the episode traces the rivalries, the shift from nonprofit idealism to for-profit pragmatism, and the immense pressures fueling a breakneck "speedrun" in AI development.
Main Theme:
The AI Speedrun—How Warnings Became a Race
The episode unpacks the core paradox driving the current AI landscape: the people most anxious about superintelligent AI (Elon Musk, Sam Altman, Demis Hassabis, Dario Amodei, et al.) concluded the only way to prevent catastrophe was to build transformative AI themselves—faster than anyone else.
Key Discussion Points & Insights
1. The Musk Paradox: Sounding the Alarm, Starting the Race
- 2017 National Governors Association: Elon Musk warns governors that AI is an "existential risk" and says proactive regulation is necessary—otherwise robots will eventually "do everything better than us" ([02:43]).
- Quote:
- "AI is a fundamental risk to the existence of human civilization. ... we need to be proactive in regulation instead of reactive. Because by the time we are reactive ... it's too late." – Elon Musk [02:43]
- Backstory—The DeepMind Meeting:
- 2012 meeting (organized by Peter Thiel) between Musk and Demis Hassabis (DeepMind). Hassabis claims AI could follow humans to Mars, which unsettles Musk ([06:26]).
- "My AI will be able to follow you to Mars." – Attributed to Demis Hassabis (as recounted by interviewee) [05:06]
- Musk invests in DeepMind to "keep tabs" but fails to prevent Google’s acquisition.
- Musk ramps up public warnings about AI after losing out on DeepMind ([07:32]).
- “AI is far more dangerous than nukes. ... the single biggest existential crisis that we face.” – Elon Musk [07:32]
- Musk Tries to Rouse Action:
- Unsuccessful lobbying with President Obama and lawmakers; hosts "AI safety" dinners with tech leaders ([09:12]).
- At one such dinner, Sam Altman (then Y Combinator president) convinces Musk to create OpenAI as a nonprofit counterweight to Google ([12:15]).
2. OpenAI: The Nonprofit Ideals & Internal Fissures
3. Nonprofit to For-profit: The Musk–Altman Rift
- Tensions Over OpenAI’s Direction:
- To compete with DeepMind/Google, OpenAI needs more computing power—proposes a for-profit arm ([29:25]).
- Musk wants to be CEO if OpenAI turns for-profit, threatening to leave if not given control ([30:48]).
- "If you're going to turn that nonprofit into a for profit, then I should be the head of that." – Interviewer/Commentator paraphrasing Musk [30:27]
- Others resist, citing original mission to "avoid an AGI dictatorship" ([31:16]).
- "The goal of OpenAI is to make the future good and avoid an AGI dictatorship... So it is a bad idea to create a structure where you could become a dictator if you choose to." – Ilya Sutskever, in email to Musk [31:16]
- Musk leaves; Altman & team proceed without him ([32:06]).
4. New Directions: Language Models & the Power of Scale
- OpenAI’s Breakthroughs Post-Musk:
- Shift from games to language models marks a turning point.
- "DeepMind had to copy paste OpenAI." – Andrej Karpathy, quoted by Keach Hagey [38:33]
- Team focuses on scaling (bigger models, more data), leading to the creation of GPT models ([39:12]).
- Dario Amodei’s Influence:
- Pioneer in "mechanistic interpretability" and evangelist for the ‘scaling’ approach: more data + bigger models → more intelligence ([42:26]).
- Major Microsoft partnership supplies OpenAI with unprecedented GPU resources ([44:52]).
- “Guys, what if next time ... we take this model and we crank it up to 10,000 GPUs?” – Recounted from Dario Amodei’s proposal [45:17]
- The Data Grab and Ethical Tensions:
- To match their compute, OpenAI ingests massive swaths of internet data—copyright and ethical debates arise ([49:01]).
- “They just started dumping big chunks of the Internet into their AI ... better to ask for forgiveness instead of permission.” – Interviewer/Commentator paraphrasing [49:33]
5. The Accelerationist Paradox & the Birth of Anthropic
6. The Unlikely Launch and Mass Adoption of ChatGPT
- ChatGPT Releases Almost by Accident:
- Initially considered a “low-key research preview,” ChatGPT is released to stake a claim ahead of Anthropic ([58:38]).
- “The highest bet was 100,000 [users]. So that’s how many users they provisioned their servers for.” – Sam Altman [58:49]
- ChatGPT’s viral explosion shocks even its creators ([58:49]–[59:42]).
7. A Sobering Reflection
- Celebration and Dread:
- AI scientists, in the wake of recognition and achievement, reflect on the anxiety that comes with rapid progress.
- "I thought, I've achieved the greatest prize ... and my career has been so rewarding ... But ... aren't we going to build machines that we don't control and could potentially destroy us?" – Narrator/Host, quoting a leading AI scientist [60:58]
Notable Quotes & Timestamps for Important Segments
- Musk’s 2017 public warning: [02:20]–[03:34]
- Musk–Demis DeepMind meeting: [05:06]–[06:26]
- AlphaGo vs. Lee Sedol (the Go match and Move 37): [17:52]–[26:16]
- OpenAI’s internal emails: Musk’s frustration: [28:25]–[31:15]
- Musk leaving OpenAI: [31:47]–[32:06]
- Birth of language model focus at OpenAI: [39:12]–[41:00]
- Massive scaling hypothesis (10,000 GPUs): [45:17]
- OpenAI’s data collection tactics and legal/ethical debate: [49:01]–[50:07]
- Dario Amodei's departure and Anthropic’s founding: [54:20]–[56:04]
- Rumor of Anthropic’s chatbot, ChatGPT pre-emptive launch: [56:42]–[59:05]
- ChatGPT’s viral reception: [59:25]–[59:45]
- AI leaders reflect on progress and anxiety: [60:54]–[62:31]
Tone, Language, and Atmosphere
In the voices of the commentators, executives, and AI pioneers, this episode weaves urgency, competition, and barely-contained existential dread. The tone shifts from Musk’s prophetic urgency, through Silicon Valley’s mythmaking and rivalry, to a sobering, often self-aware anxiety as the pace of AI outstrips the capacity of its creators to ensure safe outcomes.
Conclusion
This episode of The Last Invention vividly maps the origins of the AI “speedrun”—the drive to build AGI faster and “safely,” led paradoxically by those who most feared its dangers. Rivalries, existential panic, and once-unthinkable technical achievements—all fueled by an arms-race mentality—have made AI’s present and future as chaotic as they are promising. The final reflection is both triumphant and uneasy, as the builders acknowledge how little time may be left to answer the existential questions they themselves set in motion.