"The Last Invention" – EP 8: The Accelerationists (November 20, 2025)
Overview
This episode of The Last Invention, hosted by Andy Mills and Gregory Warner, dives into the world of AI accelerationism at the AI for Good conference in Geneva. Exploring the ideas, hopes, and controversy surrounding accelerationism—the belief that advancing AI quickly is not only necessary but beneficial—the hosts speak with key figures including legendary investor Reid Hoffman and “effective accelerationist” Beff Jezos. The show examines who the accelerationists are, their arguments, the polarization within tech, and the tension between hopes for progress and the fears of AI’s risks.
Key Discussion Points and Insights
1. The AI for Good Conference: The Buzz of AI Optimism
- [02:53–10:20]
- The hosts attend the UN-backed conference in Geneva, bustling with world leaders, tech companies, and demos of AI applications.
- Optimism dominates: most attendees express excitement about AI's potential to tackle huge problems like climate change and poverty, and to unlock medical advances (e.g., age-reversing pills) and accessibility gains.
- Highlights include:
- [04:33–05:15] David Sinclair and a team applying AI to anti-aging medicine envision people routinely living to 120+.
- [06:11–07:32] French neuroscientist Olivier Oullier demonstrates a brainwave-reading headset that allows the paralyzed Rodrigo Mendes to drive a Formula 1 car using only his mind.
- [07:53–10:10] Dutch roboticist Guido de Croon showcases insect-sized drones for precision agriculture, suggesting a future where “farms will buzz with robotic insects helping end world hunger.”
Quote
"We are the generation that is determined, ladies and gentlemen, determined to shape AI for good."
—Andy Mills [03:34]
2. Who Are the Accelerationists?
- [16:00–21:00]
- Accelerationists are techno-optimists who argue that rapid development and deployment of AI will drive prosperity and solve pressing global issues.
- The movement spans a spectrum, from capitalist investors and founders (Sam Altman, Peter Thiel, Marc Andreessen) to socialist academics (Alex Williams), all united by the belief that AI development should be pushed forward but differing in their reasons and ultimate visions.
- Some, like AI lab leaders, accept AGI could be risky but see preparedness and speed as vital for positive outcomes—and for beating global competitors.
- Others downplay existential risks as overblown, regarding doomerism as a drag on progress and prosperity.
Quote
"Our entire civilization, our entire culture is predicated on accelerating technological change."
—Peter Thiel (quoted) [18:31]
3. Two Faces of Accelerationism: Hoffman and Jezos
- [21:01–31:14]
- Reid Hoffman: LinkedIn co-founder, legendary investor, former OpenAI board member, and early “PayPal mafia” member. He's enthusiastic about AI's promise but dislikes the accelerationist label. He sees the era as a “Cambrian explosion” level inflection point.
- Beff Jezos (Extropic co-founder, Effective Accelerationist leader): Advocates a proactive, almost ideological, campaign to “flood the culture” with techno-optimism to combat spreading pessimism and regulatory “safetyism.”
- Jezos’ “effective accelerationism” (e/acc) is a deliberate countermovement, driven by the idea that society’s best shot is to bet big on AI now.
Memorable Quotes
“Accelerate or die—and those are our choices.”
—Beff Jezos [22:02]
“This is perhaps the most important moment in human history—maybe past the invention of fire.”
—Reid Hoffman [21:01]
4. Why Speed? Promises of the Accelerated Future
- [23:28–25:35]
- Beff Jezos lays out the benefits: ultra-cheap, personalized education; universal telemedicine; AI-driven industrial booms. Accelerationists see widespread fears about job loss as misplaced; instead, they forecast robust growth and prosperity.
- Both Jezos and Hoffman see AI as supercharging human capabilities, especially in language-driven domains.
Key Segment
- [23:52] Reid Hoffman: “If everyone had the equivalent of a doctor in their pocket, that was free...”
5. Accelerationism as a Response to Pessimism and Over-Regulation
- [26:00–31:14]
- Jezos discusses the concept of “hyperstition”—the idea that beliefs and memes can shape reality. He argues pessimism, overregulation, and “safetyism” choke innovation, and that flooding the discourse with optimism is a form of societal self-defense.
- Excess safety culture, in his view, leads to stagnation by letting centralized institutions accumulate unchecked power.
Quote
“Anxiety is a state where you give up more agency, you give up your freedoms... In a way, the more you are scared, the more power you give up to centralized institutions.”
—Beff Jezos [28:10]
6. Balancing Risk and Reward: Hoffman's Portfolio View
- [35:39–39:39]
- Hoffman critiques viewing AI as a singular, standalone existential risk. Instead, he views it as part of humanity’s larger “risk portfolio” (nuclear war, pandemics, etc.). He argues AI on balance will decrease existential risk through its problem-solving capacity.
- On the feared mass job displacement, Hoffman says transitions into new eras are never easy, but the net upside will be enormous, likening AI to an “industrial revolution of the mind.”
Quote
“Existential risk for humanity is a portfolio... even as [AI] is going, it has a much higher likelihood or probability of decreasing the overall existential risk portfolio.”
—Reid Hoffman [37:22]
7. AI “Scouts” and the Limits of AI Safety
- [41:14–44:38]
- Jezos acknowledges some value in AI safety research but believes it is overemphasized, likening the “AI alignment” community to a cottage industry arguing for its own existence.
- He doubts humanity will ever have complete control over increasingly complex AI systems, comparing whatever control we do have to adjusting coarse system-level “knobs,” as in monetary policy, rather than total steering.
- Hoffman acknowledges that OpenAI's team once had, in his words, “moderately too much fear,” and highlights the futility of calls to halt progress: the people who care will slow down while adversaries (or profit-motivated competitors) won't, making such efforts self-defeating.
8. The End Game: Opportunity, Agency, and Societal Upheaval
- [49:05–52:14]
- The conversation takes a hopeful turn: Jezos says accelerationism can be a unifying, inspirational movement, promising that a “rising tide floats all boats” if optimism and agency are embraced.
- Both Jezos and Hoffman see the coming era as deeply transformative—potentially tumultuous, echoing the Industrial Revolution with all its chaos and opportunity, but not necessarily doomed to repeat history’s worst mistakes.
Quotes
“If you believe you can change things, you will actually change things.”
—Beff Jezos [51:04]
“You have to be iterating, you know, kind of iterate deployment and be thinking about it and navigating as you're kind of going.”
—Reid Hoffman [39:58]
9. Reflection: The Debate's Stakes and The Road Ahead
- [53:23–54:48]
- Hoffman tempers the optimism: transitions will be hard, but humanity has more tools (including AI itself) to smooth the ride.
- He predicts “[AI] will be more impactful” than the Industrial Revolution, inspiring near-religious fervor in some and deep skepticism in others.
Quote
“As we gather this afternoon, we're still in the earliest days of one of the most important technological revolutions in the history of the world.”
—Reid Hoffman [54:20]
Notable Quotes & Moments with Timestamps
- “Accelerate or die—and those are our choices.” (Beff Jezos, 22:02)
- “Our entire civilization, our entire culture is predicated on accelerating technological change.” (Peter Thiel, 18:31)
- “Existential risk for humanity is a portfolio... [AI] has a much higher likelihood or probability of decreasing the overall existential risk portfolio.” (Hoffman, 37:22)
- “If you believe you can change things, you will actually change things.” (Beff Jezos, 51:04)
- “Fear is always the easy one. Like in any new technology, fear will be more rhetorically compelling.” (Reid Hoffman, 48:26)
Timestamps for Key Segments
- [02:53–10:20] – AI for Good conference and optimism
- [16:00–22:00] – Introducing the accelerationists, their diversity and motives
- [21:01–31:14] – Hoffman & Jezos: motivations, backgrounds, different accelerationisms
- [23:28–25:35] – Visions of benefits: education, healthcare, jobs
- [26:00–31:14] – Hyperstition, safetyism, the campaign for optimism
- [35:39–39:39] – Hoffman's risk framework and historical analogies
- [41:14–44:38] – Skepticism on AI safety research, self-defeating “slowing down” arguments
- [49:05–52:14] – Accelerationism as hope and agency, societal transformation reflection
- [53:23–54:48] – How tough the transition might be, lessons of past revolutions
Conclusion
The Accelerationists episode offers a deep, candid, and often provocative look into one of the central ideological divides at the heart of the AI revolution. It introduces listeners to the main arguments for AI acceleration—more prosperity, liberation from misery, and potentially a less risky world—and juxtaposes these with enduring fears about control, safety, and the unpredictable nature of rapid technological change. Listeners are left with challenging and timely questions: Should society embrace rapid change and risk? Or try to rein in the pace at the expense of potential upsides? As the podcast hints, the argument has only just begun.
