Podcast Summary: The Last Invention – EP 7: The Scouts (Nov 13, 2025)
Overview
In this episode of The Last Invention, host Gregory Warner investigates the birth of the “AI Scouts”: a group united by the belief that humanity can, and perhaps should, pursue superintelligence, but only if it also comes together to avoid catastrophic risks. The discussion opens with the historic 2015 Puerto Rico meeting that brought AI optimists and skeptics together and led to foundational principles for the industry. Warner then explores the current landscape through in-depth conversations with two leading “Scouts,” poker champion and game theorist Liv Boeree and philosopher William MacAskill, who debate the peril and promise of AGI, the dangers of race dynamics, and the need for new institutions, wisdom, and long-term thinking.
Key Discussion Points & Insights
1. The 2015 Puerto Rico Summit: Birth of the AI Scouts
- Gathering the Camps
Max Tegmark (co-founder, Future of Life Institute) organized an unprecedented closed-door meeting of AI leaders and critics using a clever, lighthearted invitation: “The invitation I sent out… had a photo of a guy shoveling his car out from three feet of snow next to a photo of the beach by the hotel… where would you rather be on this date?” (Max Tegmark, [03:05])
- Disconnection and Dysfunction
The summit aimed to bridge bitter divides between engineers optimistic about AGI and critics worried about existential risks: “The conversation happening was completely dysfunctional… they both thought that the other ones were crazy or reckless or morally unscrupulous…” (Max Tegmark, [03:53])
- Finding Common Ground
Personal interaction—lunches, debates, drinks—helped diverse attendees find each other more reasonable than presumed: “…to see people who both thought the other one was crazy… over lunch and some wine… both updated to think, ‘oh wow, this other person is actually much more reasonable than I thought.’” (Tegmark, [07:06])
- Lasting Legacy
Elon Musk’s $10M commitment at the summit catalyzed the first major grant program for AI safety ([08:00]). Tegmark notes, “[It] went a very long way to mainstreaming AI safety in academia.” ([08:14])
Safety became a mainstream research topic, no longer taboo.
- Principles & “Race Avoidance”
Key outcome: signing the Asilomar AI Principles, including #5, “race avoidance,” and a call for superintelligence to be developed for the common good ([09:53], [10:31]).
Tegmark laments that many commitments have since “been compromised by the current race to be the company that makes it first” ([10:47]) and observes competitive pressures have overcome those early ideals.
2. Meet the Scouts: Liv Boeree & William MacAskill
a. Reasons for Optimism—and Catastrophic Risk
Liv Boeree (game theorist, astrophysicist, ex-poker champion):
Asserts the win-win future is possible but “I try not to be a naive optimist… I’m extremely concerned that the current trajectory we are on is actually on a lose-lose path.” ([17:20])
William MacAskill (philosopher, effective altruism leader):
“The attitude is one of taking really seriously the potential benefits of highly advanced AI… but also appreciating… enormous challenges and… we should be preparing now.” ([18:18])
- Risks Outlined:
- Catastrophe from rogue AI or global pandemics ([20:16])
- Institutional, environmental, and mental health collapse ([20:52])
- Intense concentration of power, potentially a “world government” ruled by the US or China ([30:22])
- Threats to personal freedom in a world where humans have no economic value ([31:13])
- Rewards Outlined:
- Potential to eradicate poverty, disease, and even war:
“One is just the ability to make better decisions, to think better, to have more knowledge… If we have superintelligence, then we can get superintelligent advice.” (MacAskill, [21:30])
- “Enormous abundance, such that everyone in the world… could be millionaires many times over.” ([23:02])
b. The AGI Race Trap
- The Geopolitical “Race”
Both Scouts express alarm over the intensifying US–China AI race: “It’s just a race to who can go off the cliff the fastest. No one wins such a thing…” (Boeree, [27:38])
- Fear of “value lock-in” if one country or company monopolizes AGI ([28:55])
- MacAskill warns: “leaders of AI companies could themselves stage a coup if they wanted to… that’s quite a lot more extreme than merely an erosion of democracy.” ([33:27])
- Historical Echoes: The Trap of Moloch
Boeree explains the “Moloch” metaphor for competition gone malignant: “It’s the personification of game theory gone wrong… sacrificing other important values to win a quick thing… when everybody does it… creates these race to the bottom dynamics.” ([38:11])
- Current Incentive Structures
Boeree describes how even well-intentioned lab leaders are pressured into risky, rushed releases: “People I know who work at some of these labs… would love to take more time before releasing their products… but if they don’t… they run the risk of losing their engineers… It’s an incredibly tricky situation.” ([44:09])
3. Solutions: Wisdom, Institutions, and Longtermism
a. Diplomacy Over Arms Race
- Boeree on Avoiding the AI Arms Race:
“My advice… is… for people to put much more energy into diplomacy… No one is trying to do the diplomacy route, it feels like, in the US and China.” ([35:22], [35:47])
- Suggests inspiration from nuclear arms reduction treaties:
“The nuclear arms race, at its peak, had over 60,000 warheads… through really clever incentives and diplomacy… they managed to break the game theoretic stalemate.” ([36:13])
b. Building Long-Term Perspectives
- MacAskill's “longtermism” ([45:31]):
"Longtermism is the view that we should be doing much more… to improve the lives of future generations."
- Emphasizes humanity is in its infancy; our actions now will affect “billions and billions” yet unborn ([47:01])
- “There’s only one moment at which [AGI] first gets developed… that is something that is unique about our current situation.” ([48:58])
c. Institutions, Wisdom & Reinventing Society
- Accelerate Wisdom, Not Just Tech
Boeree advocates for building “new social structures that manage these incredibly powerful technologies” ([56:09]): “If you let the technology drive the social structure… that's where you get Moloch. But if you flip that stack… come up with the good memes… the high philosophies… then you get the win-win outcomes.” ([57:43])
- Calls herself a “wisdom accelerator”:
“We need to build the wisdom alongside the power. So, I'm a wisdom accelerator… How do we accelerate the wisdom and social structures that support that wisdom?” ([59:12])
Notable Quotes & Memorable Moments
- On the Puerto Rico Summit’s Impact:
“Once people realized AI safety doesn’t just mean shouting from rooftops ‘Stop! Stop!’ but actually means… doing concrete hands-on work… much of the taboo kind of melted away.” (Max Tegmark, [08:12])
- On the Dangers of Incentive Traps:
“It’s this act of sacrificing other important values to win a quick thing… when everybody does it… creates these race to the bottom dynamics. Unfortunately, that’s what I see going on in the AI world now.” (Boeree, [38:11])
- On our Place in History:
“A typical member of human or human-originating civilization will be far in the distant future… they will think of us as people from the distant past… who had enormous responsibility.” (MacAskill, [47:01])
- On the Need for New Institutions:
“I 100% think we have to build new institutions. I think most of our institutions are grossly outdated. They’re crumbling… I want to accelerate the building of new social structures that manage these incredibly powerful technologies…” (Boeree, [56:09])
Timestamps for Important Segments
- [02:22] - Introduction to “The AI Scouts” and the origin of the camp
- [03:05] - Max Tegmark describes organizing the Puerto Rico AI summit
- [05:00] - Nick Bostrom on being an early “AI risk” advocate
- [08:00] - Elon Musk’s $10M pledge for AI safety
- [09:53] - Reading from the Asilomar AI Principles
- [11:33] - How competitive pressures eroded early ideals
- [17:20] - Introduction of Liv Boeree and William MacAskill (“the Scouts”)
- [20:16] - MacAskill on the spectrum of AI risks
- [23:02] - The promise of abundance from superintelligence
- [27:38] - Boeree on the catastrophic nature of the AI race
- [31:13] - MacAskill on AI’s threat to democracy and freedom
- [35:22] - Boeree on the need for diplomacy, not escalation
- [38:11] - “Moloch”—the metaphor for self-destructive competition
- [42:11] - The AI “Moloch trap” and high-profile AI mistakes (Sydney, image gen, etc.)
- [45:31] - MacAskill introduces longtermism
- [47:01] - We are living at the hinge of history
- [50:58] - MacAskill’s AGI timeline: “early 2030s”
- [52:25] - MacAskill on hopefulness due to AI safety becoming mainstream
- [56:09] - Boeree on accelerating new social institutions
- [57:43] - Building wisdom-based societies to drive win-win outcomes
Conclusion
The “AI Scouts” episode weaves together the history of AI safety, the high stakes of current geopolitical competition, and the urgent moral questions posed by AGI. It frames the challenge as not only technical but deeply social and philosophical, advocating for diplomacy, new institutions, and a commitment to future generations. The episode reminds listeners that the narrow path to a win-win future demands wisdom, collective action, and social structures and norms as powerful as the technology itself.
