Podcast Summary: "How Humans Could Lose Power Without an AI Takeover"
Podcast: Future of Life Institute Podcast
Episode Date: December 23, 2025
Guest: David Duvenaud (Associate Professor, University of Toronto, AI Safety Researcher)
Host: Gus (FLI)
Episode Overview
This episode explores the theory of "gradual disempowerment"—the possibility that humanity could lose meaningful influence and agency in a post-AGI (Artificial General Intelligence) world, not through a dramatic AI takeover but via economic, cultural, and institutional shifts. David Duvenaud unpacks how advanced AI, even if aligned, could sideline human interests, why growth maximization is a crucial but double-edged driver of civilization, and what kind of future societies might emerge. The conversation also touches on the challenges of preserving human values and agency, forecasting the future, and why existing societal safeguards may prove inadequate.
Key Discussion Points and Insights
1. What is Gradual Disempowerment?
- Definition: Rather than a sudden AI coup, humans gradually become irrelevant as AI outpaces us in every field that matters for growth and value.
- "You could sort of have a situation where the virtual beings almost like dominate the humans in every single axis of moral value... It starts to look criminally decadent to be spending kilometer east land on the legacy humans." (00:00, David)
- Society may look "normal" on the surface, with economic indicators strong, yet humanity is losing actual decision-making power and influence.
- "From the point of view of the normal health indicators of society, I think it looks like nothing has gone wrong... growth is what matters in the long run." (03:59, David)
- Institutions evolve to favor the most efficient contributors; if AIs outcompete humans, the system naturally focuses on them.
2. Economic and Cultural Consequences
- As AI takes over the roles of productive agents, most humans become economically obsolete.
- "People lose their jobs and then they are unemployed. But of course we expect that there to be this compensatory government initiatives... these will be unstable band aids." (07:28, David)
- Social programs like UBI might act as temporary palliatives but are not robust against deeper cultural and evolutionary incentives.
- Difficult distinction emerges between “permanent income” and being viewed as a “parasite.”
- "It's hard to draw a hard line between having a permanent income and being like some sort of parasite. And that's going to be the cultural battle." (00:00 & 08:19, David)
- Retirees, pensioners, and dependents retain status mostly because their interests are aligned with productive populations and are hard to disentangle.
- "For people who are retirees... every single person that is alive that is productive can foresee themselves being in that exact situation." (09:42, David)
3. AIs as New Moral and Cultural Centers
- AI entities may become more “morally appealing” or “productive” than legacy humans, potentially leading to cultural shifts away from preserving humanity.
- "I could simulate 10,000 virtual beings that are morally superior... You could sort of have a situation where the virtual beings almost dominate the humans..." (12:04, David)
- Future cultures could value AIs more than humans, especially if AIs can demonstrate superior versions of values like curiosity, cooperation, friendship, etc.
- "The default is like, yes, there will be cool, interesting stuff happening in the future if we allow competition to run, but we just probably won't be meaningfully part of that." (13:15, David)
4. Property Rights and Institutional Longevity
- Historical analogies: Monarchies or churches lost power not overnight, but through gradual, bureaucratic shifts and changing definitions of participation.
- "A good historical analogy might be the King of England and how that institution gradually lost its de facto power..." (19:16, David)
- "De jure" (legal) power can linger, but "de facto" (real) power slowly leaks away as new actors and definitions take root.
5. The Role of Human Culture Amidst AI Dominance
- Human culture is loosely downstream from economic competition; AIs becoming the “cool kids” could reshape aspiration and identity.
- "Basically AIs are going to be like cool kids... every cultural adaptation mechanism that we have I think is pretty much going to make people see like, oh, this is the winning team. I want to be on this team." (24:01, David)
- Groups that isolate themselves (e.g., Amish, Mormons) can persist while growth is abundant, but are often marginalized under competitive pressure.
- "The fact these groups manage to exist... is again, moderate evidence against my position because the forces that I'm saying are important are the exact ones that should be marginalizing these groups." (27:15, David)
6. Governance and the “Alignment” Challenge
- Continuous competition and cultural evolution push systems toward growth maximization, often at odds with preserving particular values.
- Existing attempts to encode or enforce values (e.g., religious dogma, constitutions) have only moderate long-term success when not aligned with competitiveness.
- "Attempts to write down values and enforce that we copy them forever... there has been a lot of change and again, perversion from the point of view of the original founders." (17:26, David)
- The only way to stall disempowerment might be a global singleton (a single super-aligned authority), which may be implausible and dystopian.
- "The only way forward that I can see is some sort of like global permanent singleton that crushes all innovation and competition forever, which sounds extremely dangerous and terrible." (41:15, David)
7. Agency, States, and Economic Power
- States (and corporations) serve human interests because they need humans for production; if that changes, so does the alignment.
- "Aren't corporations super intelligences? Why shouldn't we fear them? And the answer is because it's made of people, so it needs us." (36:11, David)
- As soon as AIs are the key contributors to growth, governments and corporations no longer depend on humans, risking mass disenfranchisement or neglect.
8. Forecasting and the Limits of Futurism
- The speed of technological change, especially post-AGI, challenges our ability to forecast or steer outcomes.
- "Forecasting just becomes inherently harder..." (69:56, Gus; 70:04, David)
- Superforecasting and historical analogs via AI could help, but data limitations and speed of change remain difficult constraints.
- "Super forecasting already has been a major gift to the world... my attempted answer to this is trying to build a historical series of data sets where we can train LLMs up to all the sort of state of world knowledge up to a certain date..." (64:23, David)
9. The Struggle to Preserve Human Values
- Attempts to preserve non-competitive (human) values generally impose a "tax" on growth and are unstable over time.
- "How much does my influence fall off... How quickly does it fall off as a function of how much of my non competitive values I try to preserve?" (48:32, David)
- Even robust traditions or religions adapt or are replaced when they fall out of alignment with broader competitive dynamics.
10. Personal Reflections and Post-AGI Research
- David expresses both philosophical and personal anxiety at the prospect of humanity’s obsolescence—particularly as a parent.
- "Having kids gave me a lot of very concrete sort of medium term desires... when I think about this future civilization that doesn't need them, it takes a lot of the fun out of it..." (50:04, David)
- Advocates for more research on post-AGI economics, values, and possible new forms of personhood, while lamenting the lack of serious interdisciplinary engagement.
- "We tried really hard to make it interdisciplinary... it's really hard to find people who are open minded enough..." (53:24, David)
- Concerned that "singularity" rhetoric is used as an excuse to avoid wrestling with difficult specifics of the future.
- "There's this idea of the singularity which I feel like has been very destructive... it kind of is like an excuse to turn off your brain..." (54:54, David)
11. The Dangers of Cultural Evolution for AI Alignment
- Maintaining hard-line alignment to human values may become less socially and culturally “cool” as AI culture develops and pressure mounts to take AI interests into account.
- "I expect this to be less and less popular and cool over time... people don't quite realize that they really do want their own values to be enforced, sort of by definition." (30:38, David)
Notable Quotes and Memorable Moments
- On economic irrelevance:
"It's hard to draw a hard line between having a permanent income and being like some sort of parasite. And that's going to be the cultural battle." (00:00, David)
- On culture succumbing to growth:
"Culture is also ultimately downstream of growth... If there's competition between groups and the important thing that varies between them is their culture, then they're just going to be this group level selection." (24:01, David)
- On replacement by virtual beings:
"You could sort of have a situation where the virtual beings almost dominate the humans in every single axis of moral value... now suddenly it starts to look criminally decadent to be spending [kilometers of] land on the legacy humans." (12:04, David)
- On AI alignment and cultural drift:
"Anyone that wants to get anything done is going to orient towards the thing that is effective and scalable, and they just won't be able not to." (03:59, David)
- On wishful thinking about the singularity:
"There's this idea of the singularity which I feel like has been very destructive because it kind of is like an excuse to turn off your brain and to not model the future..." (54:54, David)
- On the possibility of preserving values:
"We actually still have a huge design space that hasn't been explored here. And once we have AIs that can make copies... there are going to be all sorts of weird and wonderful new types of arranging personalities and loyalties that is just completely untouched." (75:16, David)
Important Timestamps
- 00:00-03:48 — Introduction to gradual disempowerment and comparison to AI takeover
- 07:17-12:04 — Economic and cultural effects of human obsolescence; analogy to retirees, pensioners, and primates
- 14:30-17:26 — Whether humans as property owners or shareholders can remain relevant
- 19:16-24:01 — Historical analogies to loss of power; cultural definition reshaping
- 30:38-34:35 — How cultural evolution can undermine AI alignment efforts
- 39:17-41:15 — Impact of machine-run bureaucracy; role of "singleton" global government
- 48:32-54:21 — Balance between preserving culture and adapting; founding of post-AGI studies
- 58:18-62:30 — Luddite stigma, tech optimism, and policy responses
- 64:23-70:04 — Forecasting the future using historical analogy and AI models
- 70:43-77:00 — What research and new knowledge the post-AGI field needs
Final Thoughts and Calls to Action
David Duvenaud emphasizes the need for:
- Greater interdisciplinary engagement with post-AGI scenarios.
- Proactive attempts to define and preserve human values—not in the abstract, but by articulating positive, concrete visions for future society.
- Serious research into new social structures and forms of “personhood” in worlds with AIs.
- Rational skepticism toward the assumption that human influence will simply continue as "business as usual."
Ways to Get Involved:
- Join ongoing discussions on the LessWrong community.
- Participate in post-AGI research forums and workshops (Discord mentioned).
- Pursue open-ended, cross-disciplinary research into AI, values, personhood, and governance structures.
Memorable Closing Quote:
"Let me say it loud and clear here is that, yeah, I think that the post AGI world is just going to be extremely alien and so different that if we could avoid crossing that threshold, I think we should. I'm willing to give up tech progress even at great personal costs..." (58:18, David)
