Podcast Summary: "We’re Not Ready for AGI" (Future of Life Institute Podcast with Will MacAskill)
Date: November 14, 2025
Host: Gus Docker (Future of Life Institute)
Guest: Will MacAskill (Senior Research Fellow at Forethought)
Main Theme & Purpose
This episode explores the challenges humanity faces as we approach artificial general intelligence (AGI). Philosopher Will MacAskill argues that current efforts to shape AGI’s future are insufficient, that a narrow focus on existential catastrophe overlooks the equally pressing challenge of ensuring a flourishing future, and that path dependence and institutional lock-in could make early decisions (or mistakes) persist for millennia. The discussion is rooted in MacAskill’s “Better Futures” essay series, which urges both philosophical rigor and near-term action to address AGI’s risks and opportunities.
Key Discussion Points and Insights
1. Two Dimensions of Shaping the Future
Existential Catastrophe vs. Flourishing Futures
- Most longtermist discourse centers on preventing existential risks (human extinction or loss of potential).
- MacAskill advocates for equal (or greater) focus on making the post-risk future as good as possible.
“What I'm arguing...is that better futures, namely trying to make the future better conditional on there being no catastrophe, is in at least the same ballpark of priority as reducing existential catastrophe itself.” (Will, 02:41)
Scale, Neglectedness, Tractability Framework
- Even if the risk of existential catastrophe is as high as 20%, how good the future turns out to be, conditional on surviving, remains astronomically important (a standard formulation of the framework is sketched after the quote below).
- The neglectedness of issues like space governance and digital rights means little social capital is spent on them, despite their enormous potential impact.
“How much do people today care about how governance of outer space goes? Probably just not very much at all.” (Will, 04:56)
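For readers new to the framework, one standard way to write it, drawn from the broader effective-altruism literature rather than from the episode itself, is as a product of three ratios that telescope into marginal impact (a sketch for context only):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A sketch of the standard scale / tractability / neglectedness
% decomposition; the three ratios cancel so their product is the
% marginal impact of additional resources spent on a problem.
\[
  \underbrace{\frac{\text{good done}}{\text{extra resources}}}_{\text{marginal impact}}
  =
  \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{scale}}
  \times
  \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
  \times
  \underbrace{\frac{\text{\% increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}
\]
\end{document}
```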
2. Why Great Futures Are Hard
Moral Catastrophe & Utopian Traps
- Societies that seem utopian from one angle (e.g., abundance) might lock in serious moral failures (e.g., slavery or animal suffering).
- Abundance for humans has come with massive increases in animal suffering, hinting at how easy it is to overlook harms.
“You could have a society that's really quite utopian in general, but just makes even just one major moral mistake and thereby loses out on most of the value it could have had.” (Will, 10:13)
AI and Conscious Beings
- The overwhelming majority of future beings may be artificial rather than biological, because digital minds can be replicated far more easily than biological ones.
- Their moral status and well-being—and issues like population ethics—become crucial and contested.
“Almost all beings that exist will not be biological, they will be artificial. Because it's very easy to replicate artificial intelligences.” (Will, 00:35, 12:59)
3. The Limits of Existing Institutions
Liberal Democracy’s Limits
- While democracy and markets have self-corrected in the past, MacAskill is skeptical that they are robust enough to handle the radically changed society AGI would bring, especially as power concentrates.
“One of the big better futures challenges…is ensuring that we do get something more egalitarian than, than really intense concentration of power. Where AI...enables a single person...to just control the entire workforce, the entire military…” (Will, 17:17)
Risks of Lock-In and Path Dependence
- New institutions and technologies (constitutions, AI-enforced treaties, or a world government) can persist and ossify, entrenching today’s values or errors for centuries or longer.
“There could be decisions that happen that really do have kind of indefinitely long lasting effects.” (Will, 77:09)
4. AGI Alignment, Pluralism, and Trade
Beyond “One Right Way”
- Moral convergence (everyone agreeing on what’s right) is unlikely.
- Instead, the future may depend on compromise and “moral trades” between groups and AI agents with differing values, enabled by post-AGI abundance.
“I have like tentative optimism that if...well designed, most groups with different moral views could end up getting most of what they want because...abundance is just so great and people want somewhat different things.” (Will, 44:57)
Risk-Averse AI
- MacAskill proposes engineering AIs with strong risk aversion, so that cooperating with humans is more attractive to them than gambling on a risky power grab (an illustrative expected-utility sketch follows the quote below).
“You can really just have AIs...that prefer $4,400 over a 50% chance of takeover because they really care about that $4,400 and they don’t really care about much more.” (Will, 50:55)
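To make the decision-theoretic logic concrete, here is a minimal worked sketch, with utility values assumed purely for illustration (they are not from the episode): a bounded, risk-averse utility function makes the sure payoff beat the gamble.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Assumed, illustrative numbers: utility is bounded above by 1 and is
% nearly saturated by the modest sure payoff the AI is offered.
Let $u(\text{sure \$4,400}) = 0.9$, $u(\text{successful takeover}) \le 1$,
and $u(\text{failed takeover}) = 0$. With a 50\% chance the takeover succeeds,
\[
  \mathbb{E}\bigl[u(\text{attempt takeover})\bigr]
  \le 0.5 \cdot 1 + 0.5 \cdot 0 = 0.5
  \;<\; 0.9 = \mathbb{E}\bigl[u(\text{accept \$4,400})\bigr],
\]
so the risk-averse (bounded-utility) agent takes the guaranteed reward,
however large the prize from a successful takeover might be.
\end{document}
```

The point of the sketch is only that once utility is nearly capped at the sure payoff, no amount of upside can make the 50% gamble attractive.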
Moral Trades: Easier for AIs?
- Moral trades (e.g., you become vegetarian, I recycle) rarely happen among humans because of limited trust, weak impartial concern, and “sacred” non-negotiable values.
- AIs, with better knowledge and stronger impartial values, might be more likely to engage in such trades.
“I do expect...impartial moral preferences to become just a bigger and bigger feature of the world because...most...needs...will just get completely taken care of given post AGI abundance.” (Will, 59:48)
5. Reflections on Value Drift and Lock-In
Changing Values and the Problem of “Moral Progress”
- Preference change across generations (value drift) is hard to judge: much of what past generations regarded as “progress” we might now abhor, and continued moral change may lead somewhere we would find repugnant today.
- Post-AGI, it could become possible to “lock in” our current values permanently. MacAskill advocates against this, favoring mechanisms that preserve option value and intergenerational diversification.
“If we were the Iron Age people...we should want to lock in our values. And I am on the side that we shouldn’t be trying to do that...we should actually let the process of moral progress continue, even if that ends up in a world that I...would find repugnant.” (Will, 41:28)
Institutional Lock-In and World Government
- Persistent institutions may block future improvement and adaptation, especially if they are coupled with AGI and automation.
- Worries about authoritarian countries reaching AGI first: “If you end up with an authoritarian country getting to super intelligence, probably that means you get authoritarianism forever. And probably that means you lose out on almost everything of value.” (Will, 00:44, 87:52)
6. Near-Term Action and Research Agendas
Urgency of AGI Timelines
- Expected AGI within 5–10 years means limited time for robust philosophical or institutional groundwork.
- Immediate priorities: technical alignment, reducing the risk of coups and AI-enabled power grabs, governance mechanisms, and model “specs” (the documents defining how AI models should behave, including how they reason about ethics).
“...There is some urgent stuff we could be doing right now to reduce the risk of AI enabled power grabs. Yeah, there’s also urgent stuff...to help improve the model spec in ways that I think really might be quite path dependent.” (Will, 93:17)
Space Governance Is Coming Fast
- SpaceX and declining launch costs make extraterrestrial resource governance newly urgent; US/China stances are being set now, potentially shaping the next century.
“At the moment, lots of stuff is happening in...space law...mainly because of SpaceX just completely changing the game...very little in the way of people just...standing up for what’s right...” (Will, 96:31)
Model Spec for AI Moral Reflection
- Today’s AIs tend either to be naively subjectivist or to refuse outright to engage with controversial ethical questions.
- Better models would ask clarifying questions and facilitate ethical reflection, much like a good teacher or friend.
“...If the model did say...let’s walk through some of the, like, you know, arguments that people have made on either side...that’s PR fine...that’s clearly not the model, like, imposing its own values.” (Will, 71:23)
Institutional Barriers to Governmental AI Uptake
- Governments are slower than private firms to integrate AI because of bureaucracy, privacy constraints, and public perception.
“I am worried about a world where everything is moving 10, 100 times as fast. Private companies are extremely empowered and the government is just left behind.” (Will, 104:08)
Notable Quotes & Memorable Moments
On the Scope of the Future:
“Almost all beings that exist will not be biological, they will be artificial. Because it's very easy to replicate artificial intelligences.” (Will, 00:35, 12:59)
On Lock-In and Authoritarian Risk:
“There could be decisions that happen that really do have kind of indefinitely long lasting effects.” (Will, 77:09) “If you end up with an authoritarian country getting to super intelligence, probably that means you get authoritarianism forever.” (Will, 00:44, 87:52)
On Moral Uncertainty:
“Imagine you learn that in 40 years time you've become this person with very different political and moral views than you do now. Like, how do you feel about that? Should you think like, oh cool…I’ve changed? Or do you think, no, my future self, man, the classic thing is obviously becoming more conservative over time. Actually, maybe I got biased. Very hard to specify.” (Will, 40:16)
On the Limits of Convergence:
“[Moral convergence]—unfortunately I end up...feeling fairly pessimistic about the idea of sufficiently close moral convergence that I don't think that's something we can really bank on as a way of getting to a truly great future.” (Will, 36:55)
On AI Model Behavior:
“It is, in fact, the case that people are relying on the AIs as guides, as therapists, as advisors. That will naturally extend and happen for ethical reflection, too. And I am worried about people just getting stuck in whatever beliefs they started with.” (Will, 68:39)
On Institutional Path Dependence and Constitutions:
“The best example of this is the American Constitution...what it was locking in is this very general process that’s about the distribution of political power, about ensuring the best ideas winning out over time.” (Will, 90:20)
On the “Golden Age” of Philosophy:
“I feel like we're entering this golden age of philosophy where suddenly there’s all of these topics that are so important and, you know, with exceptions, academic philosophy is just sleeping on it for them.” (Will, 106:36)
Major Timestamps
- 00:00–05:00 — The problem of focusing only on existential risk; why “better futures” matter
- 10:05–15:14 — Moral catastrophes, history’s failed utopias, and the coming scale of digital consciousness
- 17:16–22:05 — Power, democracy, and the unique risks of AI-era manipulation and persuasion
- 24:55–27:40 — Market incentives vs. regulatory or design steering for AI “character” and user interaction
- 33:32–37:19 — Value drift, convergence, and preserving “option value” in social and ethical systems
- 50:55–54:49 — Risk-averse AIs, “moral deals,” and how to reduce power grab risks
- 76:06–84:09 — The new technology of institutional lock-in and world government
- 96:22–101:19 — Space governance and the urgency of setting norms for extraterrestrial resources
- 104:08–106:10 — Institutional challenges in government adoption of transformative AI
- 108:20–113:19 — Research priorities, creating institutions and model specs to manage AGI risks
- 118:44–123:04 — Reflections on technological progress, intelligence, and what may make humanity’s gamble “worth it”
Flow and Tone
- The conversation mixes deeply philosophical inquiry with urgent, practical recommendations. MacAskill and Docker both challenge “common sense” optimism and push for richer thinking—particularly about the unprecedented speed and scale of contemporary transitions.
- Tone is direct, sometimes wryly skeptical, rooted in longtermist moral philosophy but always referencing looming near-term decisions.
Takeaways for New Listeners
- Humanity is in a unique, risky, and unprepared place with AGI approaching; current governance, philosophy, and activism lag behind the enormity of choices ahead.
- Path dependence—of institutions, values, and technologies—means early choices could become “locked in” for millennia.
- Key levers: urgent technical work (AI alignment, prevention of coups, model specs), philosophical clarity on future societies, and the prevention of lock-in to any single set of values or systems.
- The future may be defined not by moral convergence but by unprecedented trades, compromises, and pluralisms—especially among new digital beings.
- The window to design and steer these outcomes is closing quickly.
For more, see Will MacAskill’s “Better Futures” essay series. This episode is a clarion call to both think bigger and act faster—and an invitation to participate in defining the future before AGI does it for us.
