Podcast Summary
Podcast: Conversations With Coleman
Host: Coleman Hughes (presented by The Free Press)
Guest: Will MacAskill (Associate Professor of Philosophy, University of Oxford; co-founder of the effective altruism movement)
Episode: Humanity in a Thousand Years with Will MacAskill (S3 Ep.29)
Date: September 4, 2022
Main Theme
The episode explores humanity's responsibilities to future generations, as laid out in Will MacAskill's book What We Owe the Future. Drawing from philosophy, effective altruism, population ethics, and the practicalities of economic and technological progress, Coleman and Will dig into how our present choices will shape the flourishing—or the demise—of countless future humans, and consider concrete ways individuals can act with true long-term impact.
Key Discussion Points & Insights
1. Why Care About the Future? (03:48–08:27)
- Tenet Analogy: Coleman opens with a "Tenet" movie analogy—will future generations have cause to “hate” us for our failures? Will responds that while not inspired by the film, he is motivated by the question: "What will future generations think of us?" (03:48)
- Our Influence: Will reflects on past and present legacies—the gifts and harms left by earlier humans—and emphasizes our capacity to shape either a "wonderful" or "dystopian" future. (04:38)
- Moral Perspective: Both agree that how many people our decisions affect matters morally: our choices may influence trillions of future people, against today's 8 billion. (06:31)
Notable Quote:
"We are really at the beginning of history and the number of people in the future just are really vast compared to the number of people alive today. … When we take a moral perspective…, what's of greatest importance are those things that will impact the entire course of the future."
— Will MacAskill (07:22)
2. The Ethics of Space and Time (08:27–11:28)
- Space vs. Time: Coleman suggests we're more attuned to spatial injustice (stealing from another nation) than temporal injustice (stealing from the future). Will argues there is "no difference"—harm is harm, whenever it occurs. (09:18)
- The Voiceless Future: Will highlights that future people are "very literally voiceless in the world today," making altruistic advocacy crucial. (10:25)
- The Power of the Early Generations: Will points out that the earlier you are in history, the more you can shape the future—by setting customs, values, and institutions. (11:28)
3. The Repugnant Conclusion & Population Ethics (14:14–25:30)
- Intuitive Arithmetic: Both agree it's obviously worse to harm more people ("two lives are twice as important as one"), but scaling that intuition up runs into Derek Parfit's classic repugnant conclusion: Is a large population living so-so lives better than a small, blissful one? (A worked sketch of the arithmetic follows this list.) (14:58)
- Paradoxes and Trade-offs: Will explains key technical concepts (scope sensitivity, dominance addition, non-anti-egalitarianism, transitivity) and the fundamental paradoxes of population ethics. All possible positions lead to some unintuitive or "devastating" consequence. (18:02–21:38)
- Continuum Fallacy: Coleman raises the line-drawing problem (when does a good world become a repugnant one, or "not bald" become "bald"?), a version of the classic sorites paradox. Will shares that philosophers do grapple with this analogy and sometimes propose discontinuities in value. (22:02–25:30)
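A minimal sketch of the arithmetic behind the paradox, with illustrative numbers that are not from the episode: on a "total view" of population ethics, a world's value is the sum of its inhabitants' welfare, so a large enough population of barely-positive lives outweighs a small blissful one.

```latex
% Total view (illustrative assumption): a world's value is the sum of
% individual welfare levels, W = n * average welfare.
%   World A: n = 10^{10} people at welfare 100  =>  W_A = 10^{12}
%   World Z: n = 10^{15} people at welfare   1  =>  W_Z = 10^{15}
% Since W_Z > W_A, the total view ranks the enormous, barely-worth-living
% world Z above the small, blissful world A: Parfit's repugnant conclusion.
\[
  W = \sum_{i=1}^{n} u_i = n\,\bar{u}, \qquad
  W_A = 10^{10} \times 100 = 10^{12} \;<\; W_Z = 10^{15} \times 1 = 10^{15}.
\]
```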
Notable Quote:
"The issue of population ethics is one of the hardest areas of moral philosophy. You can show … that any view that you have has devastatingly unintuitive consequences. So … we just have to face a paradox."
— Will MacAskill (17:47)
4. Abortion, Personhood, and Future Value (25:30–33:58)
- Abortion and Future Persons: Coleman explores whether valuing future people is compatible with a pro-choice stance. Will affirms that he's pro-choice, distinguishing "preventing a life" from killing an existing person: declining to bring a possible person into existence is not comparable to taking a life. (26:52)
- Continuum Again: They consider, philosophically, how far back "preventing a life" might extend, down to contraception or even abstaining from intercourse. Both agree the boundary is blurry and that the abortion debate rests partly on psychological attachment, not just logic. (31:21)
Notable Quote:
"The feelings of this being a moral issue … tend to just come from a more psychological place of just like building up attachment to a being and a future being, that which is kind of independent of the morality of it per se."
— Will MacAskill (33:45)
5. Economic Growth, Redistribution & Stagnation (33:58–45:50)
- Economic Progress as a Gift: Coleman introduces climate change and Tyler Cowen's Stubborn Attachments thesis: because growth compounds, today's growth rates deeply affect future wealth. (35:25)
- Growth Must Plateau: Will responds that economic growth cannot compound forever without exceeding every physical limit ("trillion times as much economic output per atom" in 10,000 years); the focus should therefore be on avoiding stagnation rather than maximizing growth for its own sake (see the back-of-the-envelope sketch after this list). (36:55)
- Existential Risks: Growth-vs-redistribution debates are less relevant to the long term than risks that have permanent effects, such as extinction or value lock-in (e.g., through AI or entrenched dictatorships). (42:27)
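A back-of-the-envelope version of the plateau argument; the 2% growth rate and the atom count are commonly cited illustrative assumptions, not figures quoted in the episode.

```latex
% Assumptions (illustrative, not from the episode):
%   growth rate g = 2% per year, horizon T = 10,000 years,
%   about 10^{67} atoms reachable within 10,000 light years.
\[
  1.02^{10\,000} \;=\; e^{10\,000 \ln 1.02} \;\approx\; e^{198} \;\approx\; 10^{86}
\]
% Total output would then be ~10^{86} times today's. Spread over ~10^{67}
% atoms, every atom would have to support roughly 10^{19} times the entire
% current world economy; hence growth must eventually plateau.
```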
Notable Quotes:
"At some point in the future [growth] will plateau … if you advance economic growth, you're not making a long term difference… you're speeding up how quickly we get to the destination."
— Will MacAskill (37:55)
"The clearer [long-term] priority is reducing the risk of extinction … Humanity might live for billions or even trillions of years. If we were to go extinct in the next few centuries, then that entire future is cut off."
— Will MacAskill (44:03)
6. Moral Progress and its End (45:50–53:36)
- Shifting Values: Coleman and Will discuss how radically values have changed (e.g., slavery), and the dangers if those changes ever halt.
- Threats to Progress: Will warns of possible "value lock-in" via totalitarian control or advanced AI: technologies that could fix an ideology in place forever, halting further moral progress. Historical precedents include the Nazis' and Stalin's ambitions; future technologies, such as AI-driven armies or immortal leaders, might enable "moral ossification." (46:25)
- Homogeneity: Will notes the world is growing culturally homogeneous, which could itself stall moral progress; he cites the near-identical COVID-19 policies adopted across countries as a microcosm. (47:45)
Notable Quote:
"Imagine if moral progress had stagnated with the Roman Empire, where slavery was just utterly accepted … That would be an enormous loss of value."
— Will MacAskill (49:48)
7. China, Liberty, and Global Futures (50:33–52:20)
- Experimentation vs. Rights: Will advocates for cultural experimentation but only if paired with individual freedom—free migration is essential. (51:12)
- Authoritarian Lock-In: He's critical of models with limited individual liberty (e.g., China's), especially for the risk they pose to future progress if such regimes entrench themselves over the long term.
8. Artificial Intelligence and the “End of History” (53:14–61:35)
- AGI as End Point: AGI (artificial general intelligence) could “ossify” values by setting up immortal, unchanging rulers (be they institutions or machine intelligences).
- Good vs. Bad AGI Scenarios:
  - Best Case: AI accelerates progress, eliminates disease, vastly improves welfare.
  - Worst Case: AI disempowers humanity, perhaps suddenly wiping us out.
  - Misuse: One power leverages AGI for domination.
- Comparison to Fission: Will likens AI's dual potential to nuclear technology—capable of immense good (nuclear power) or bad (nuclear weapons). (60:10)
Notable Quote:
"The creation [of AGI] gives a kind of end point for history … The causes of moral change over time disappear, and we could have kind of moral ossification."
— Will MacAskill (55:23)
9. What Can Listeners Do? (61:35–63:13)
- Read the Book: Start with What We Owe the Future for a comprehensive, accessible introduction.
- Effective Giving: For those wanting a clear action, donate (e.g., via Giving What We Can or the Long-Term Future Fund).
- Career Choice: For those wanting to go further, consider career advice focused on maximizing positive impact (e.g., via 80,000 Hours).
Notable Quote:
"If there's a single action that you take to make the world better, that is making an enormous contribution [by giving]."
— Will MacAskill (62:22)
Memorable Moments & Quotes
- Coleman: "How is it any less evil to be selfish at the expense of people in 200 years than to be selfish at the expense of a nation across the globe?" (08:27)
- Will: "People in the future … are literally voiceless in the world today… disenfranchised. That means … altruistically minded people [must] stand up for them." (10:23)
- Will: "Growth can't continue forever…[it's] not an enormous priority" (38:30)
- Will: "If a dictatorship falls, it's because its leader dies. But if … future rulers are digital, they never have to die." (47:21)
- Will (on AI): "In the optimistic scenario, science is just sped up a hundredfold; in the worst, you get sick and die one day, and that's true for everyone in the world." (57:42–58:36)
Timestamps for Major Segments
- Intro & Tenet Analogy: 00:00–05:42
- Long-term scope & moral implications: 05:42–14:14
- Population ethics & Parfit: 14:14–25:30
- Abortion, personhood, and value of futures: 25:30–33:58
- Growth, climate, and future priorities: 33:58–45:50
- Moral progress & end-of-history scenarios: 45:50–53:36
- Cultural diversity, freedom, and China: 50:33–52:20
- AGI & lock-in threats: 53:14–61:35
- Actions for listeners: 61:35–63:13
Final Thoughts
The episode offers a rich, clear distillation of longtermist thought for a general audience, balancing intellectual rigor with accessible examples. Through both abstract puzzles (the repugnant conclusion) and practical risks (AI, climate), MacAskill and Hughes probe what it really means to take the future seriously, while giving listeners actionable steps toward making their own lives a lever for good in the eons to come.
