Future Perfect: Good Robot #3 — "Let's Fix Everything"
Date: March 19, 2025
Host: Julia Longoria
Produced by Vox & Vox Media Podcast Network
Episode Overview
This episode of the "Good Robot" series, produced by Future Perfect in collaboration with Unexplainable, takes a deep dive into the origins and evolution of effective altruism (EA), tracing its journey from philosopher Peter Singer's famous “drowning child” parable to the movement’s current preoccupation with mitigating AI-driven existential risk. Host Julia Longoria investigates how and why some of the world's most influential thinkers and funders turned their attention—and fortunes—from tangible present-day problems to the potentially unfathomable threats (and opportunities) posed by artificial intelligence, sometimes at the cost of neglecting immediate human suffering. The episode interrogates the boundaries of moral math, questions of faith in future generations, and the unintended modern consequences of high-minded philanthropy.
Key Discussion Points & Insights
1. The Drowning Child Parable — Moral Clarity Becomes Complex
- Peter Singer’s Impact:
Introduced with his iconic thought experiment: if you saw a child drowning in a pond, would you ruin your shoes and be late in order to save them? Most say yes; Singer then twists the setup, asking why distance or familiarity should matter when the person in need is elsewhere in the world ([01:00–05:15]).
- Quote:
“What if the child isn’t right in front of you? What’s the real significant difference between someone you know in a pond right next to you versus someone across the world?” – [Host, 04:10]
- Birth of Effective Altruism:
The parable catalyzes the effective altruism movement: use reason and evidence to save as many lives as possible, abolishing arbitrary distinctions between local and distant suffering ([06:40+]).
- Singer’s Lingering Intent:
Singer hoped his parable would inspire a sustained focus on extreme poverty and present harms ([57:05]):
- Quote:
“I hope that I’ve left a legacy in my writings that they will lead people to think differently about what we owe people in extreme poverty in other parts of the world. That’s what the drowning child in the shallow pond was supposed to suggest.” – Peter Singer [57:10]
2. Growth of EA — Math Meets Altruism
- Toby Ord & The Mathification of Good:
After reading Singer, Toby Ord feels compelled to give 10% of his salary and launches a group (with Will MacAskill) for like-minded “givers” ([16:30–19:00]):
- Quote:
“Actually, we probably do have these duties to help people who are much poorer than ourselves, even if it requires really quite substantial sacrifices.” – Toby Ord [15:45]
- Key EA Criteria (a rough quantitative sketch follows this list):
- Tractability: Can the problem be solved and measured?
- Neglectedness: Is it overlooked but important?
- Importance: Does it save/transform the most lives per resource expended?
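The episode lists these criteria without formalizing them, but effective altruists often combine them multiplicatively (the "ITN framework" popularized by 80,000 Hours). A minimal, hedged sketch in Python; the scoring function and all ratings are invented for illustration, not taken from the episode:

```python
# Hedged sketch of the ITN (importance/tractability/neglectedness)
# heuristic used in EA cause prioritization. The scoring function and
# every rating below are invented placeholders, not from the episode.

def itn_score(importance: float, tractability: float, neglectedness: float) -> float:
    """Treat marginal good per extra unit of resources as the product
    of the three factors (each rated here on a rough 0-10 scale)."""
    return importance * tractability * neglectedness

# Hypothetical ratings for two made-up causes:
causes = {
    "cause_a": itn_score(importance=8, tractability=9, neglectedness=3),  # 216
    "cause_b": itn_score(importance=9, tractability=4, neglectedness=8),  # 288
}

# Rank causes by the heuristic; a higher score suggests a better
# marginal use of additional funds under this framing.
for name, score in sorted(causes.items(), key=lambda kv: kv[1], reverse=True):
    print(name, score)
```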
- Quantifying Good:
- Effective altruists popularize concepts like the “quality-adjusted life year” (QALY) to evaluate the impact of donations ([22:55]); a toy cost-per-QALY calculation follows this block:
- Quote:
“It’s actually very, very crucial to do the math.” – Toby Ord [23:55]
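The episode name-checks the QALY without working through the arithmetic. Here is a minimal sketch of the kind of cost-effectiveness comparison EA charity evaluators make; every figure is an invented placeholder, not data from the episode or any real evaluation:

```python
# Toy QALY cost-effectiveness calculation. All figures are invented
# placeholders, not data from the episode or any real charity evaluator.

def qalys(life_years_gained: float, quality_weight: float) -> float:
    """QALYs = life-years gained, weighted by health quality in [0, 1]
    (1.0 = full health)."""
    return life_years_gained * quality_weight

def cost_per_qaly(program_cost: float, total_qalys: float) -> float:
    return program_cost / total_qalys

# Hypothetical program: a $100,000 grant averts illness, giving
# 200 people an extra 2 years each at quality weight 0.9.
total = qalys(life_years_gained=200 * 2, quality_weight=0.9)  # 360 QALYs
print(cost_per_qaly(100_000, total))  # ~$278 per QALY; lower is better
```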
- EA Influence Spreads:
- Chapters sprout at universities; billionaires (e.g., Bill Gates, Elon Musk) and philanthropies adopt the rigorous, scientific approach to giving ([25:50+]).
- Large sums begin flowing to rigorously vetted causes, especially malaria prevention.
3. The Rationalist Pipeline and Rise of “Global Catastrophic Risk”
- EA Meets Rationalists:
- Rationalists, inspired by internet thinkers like Eliezer Yudkowsky, merge thought experiments with reality, often taking them “literally” ([26:40–28:10]).
- “Earning to Give”; SBF and the Fallout:
- Sam Bankman-Fried’s Arc:
- Symbol of “earning to give”: pursuing high-earning careers like finance or crypto in order to donate more, rather than working directly for a charity ([32:10–36:30]); a toy version of the arithmetic appears at the end of this section.
- Quote:
“If you choose to be a crypto billionaire instead of, say, an aid worker, your fortune could hire a whole army of aid workers.” – Host [33:50]
- SBF’s fraud and imprisonment devastate EA’s reputation.
- Quote:
“A challenge about being a very new small movement is that, yeah, you’re going to be defined by whoever the most prominent person is. And if the most prominent person is crypto fraud guy, then you’ve got a problem.” – Kelsey Piper [36:45]
- The fallout raises questions about “mathy idealism” and whether moral calculus can be co-opted or become self-deceiving.
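A toy version of the earning-to-give arithmetic the host gestures at in the quote above; the salaries and donation rate are invented assumptions, not figures from the episode:

```python
# Invented illustration of "earning to give": a high earner donating a
# large share of income can fund many direct workers. All numbers are
# hypothetical placeholders.

donor_income   = 5_000_000   # hypothetical annual finance/crypto income ($)
donation_rate  = 0.50        # hypothetical share of income donated
aid_worker_pay = 50_000      # hypothetical annual aid-worker salary ($)

workers_funded = donor_income * donation_rate / aid_worker_pay
print(workers_funded)  # 50.0 -> one donor bankrolls an "army" of 50 workers
```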
4. Community, Critique, and “Culty” Overtones
- EA Group Living & Culture:
- Many effective altruists build supportive, communal lifestyles (often in the Bay Area) to save money and give more. Some even compare the movement to a religion—but push back against the cult accusation ([42:00–44:30]).
- Quote:
“A lot of people make the comparison to a religion. And I think that’s pretty fair... I don’t think it’s a cult, but it’s a religion. I’ll kind of cop to that one.” – Kelsey Piper [44:10]
- Community offers meaning, jobs, and mutual aid—but risks groupthink.
5. The Pivot to AI & Longtermism
- Existential Risk & The Math of the Future:
- Bayesian arguments lead Toby Ord and others to shift their focus: a future with potentially trillions of humans outweighs current suffering if risks like an AI apocalypse are even modestly plausible ([47:45–52:50]); a toy version of the expected-value math follows.
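The episode describes, but does not spell out, the expected-value reasoning behind this pivot. A minimal sketch with invented placeholder numbers; none of these figures come from the episode or from Ord's published estimates:

```python
# Toy expected-value comparison behind the longtermist pivot. Every
# number is an invented placeholder, not a figure from the episode.

future_people = 1e12     # hypothetical count of potential future humans
delta_risk    = 1e-4     # hypothetical: an intervention cuts extinction
                         # probability by 0.01 percentage points

expected_future_lives = future_people * delta_risk   # 1e8 lives in expectation

present_lives_saved = 1e6   # hypothetical: same budget spent on malaria nets

# Under this (contested) framing, a tiny absolute risk reduction swamps
# a large, certain present-day benefit:
print(expected_future_lives > present_lives_saved)   # True: 1e8 vs 1e6
```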
- Longtermism Emerges:
- Protecting future generations (and “future drowning children”) becomes the new imperative.
- EA leaders and donors fund AI safety, including directly funding OpenAI and Anthropic ([53:10–55:50]).
- Quote:
“The most effective career at doing good in the world is going into AI safety... I think I can actually move the needle on this.” – Tom, 22-year-old ML scientist [55:00]
- Open Philanthropy spends 12% of its portfolio on AI risks, equal to its malaria spending.
- Quote:
“If I try to work out my best guess of the most important issues of our time, I think AI risk is probably very high at the top.” – Toby Ord [56:35]
- Faith vs. Reason:
- Critics (like Margaret Mitchell) argue that immediate AI harms—bias, surveillance, environment—are more concrete, yet funding is funneled to speculative future threats.
- Mars colonization and AI “good robot” narratives (from Elon Musk and OpenAI’s Sam Altman) become the new grand missions, inspired by “saving future drowning children.”
- Quote:
“It is thanks to you that the future of civilization is assured and we're going to take Doge to Mars.” – Elon Musk (clip) [58:00]
6. The Return to the Present: Unintended Consequences
- Singer’s Concern:
- Singer questions if the movement’s fixation on far-future risks (“nerdy problems”) comes at the expense of those suffering now ([57:35]):
- Quote:
“I'm not dismissing it... But compared to some of the other problems... I have the sense that people like it because it’s a kind of nerdy problem... that's why it gets more attention.” – Peter Singer [57:40]
- Losing Sight of Today’s Drowning Children:
- The story of a Florida teen who killed himself after becoming obsessed with a chatbot serves as a gut-punch reminder of AI’s real and painful present-day risks ([59:05]).
- Quote:
“Abstracting that parable so far ahead in space and time, we risk losing sight of the drowning child right in front of us.” – Host [59:50]
- The Dilemma:
- While aiming to prevent far-future catastrophes, the industry may be creating “new ponds” for children to drown in today.
- Kelsey Piper calls for a grounded approach: evaluating what AIs actually do now, rather than being swept up in philosophical melodrama ([1:00:00]):
- Quote:
“There are lots of people who have this impression that you need long termism or theorizing about the badness of humanity going extinct... I don't think you need any of that.” – Kelsey Piper [1:00:20]
Notable Quotes & Memorable Moments
- [15:45] Toby Ord: “We probably do have these duties to help people who are much poorer than ourselves, even if it requires really quite substantial sacrifices.”
- [36:45] Kelsey Piper: “If the most prominent person is crypto fraud guy, then you’ve got a problem.”
- [44:10] Kelsey Piper: “I don’t think it’s a cult, but it’s a religion. I’ll kind of cop to that one.”
- [56:35] Toby Ord: “If I try to work out my best guess of the most important issues of our time, I think AI risk is probably very high at the top.”
- [57:10] Peter Singer: “That’s what the drowning child in the shallow pond was supposed to suggest.”
- [59:50] Host: “Abstracting that parable so far ahead in space and time, we risk losing sight of the drowning child right in front of us.”
- [1:00:20] Kelsey Piper: “There are lots of people who have this impression that you need long termism or theorizing... I don't think you need any of that.”
Timestamps for Key Segments
- [01:00–05:15] — Introduction to Peter Singer, the drowning child parable
- [12:00–19:00] — Toby Ord, EA’s foundational thinking, tithing, the birth of the movement
- [25:00–27:00] — EA’s expansion to college campuses and among billionaires
- [32:10–36:45] — Sam Bankman-Fried, earning to give, and the subsequent scandal
- [42:00–44:30] — Life in EA group houses, religious/cult comparisons
- [47:45–53:00] — The pivot to existential risk/longtermism and the math of possible futures
- [55:00–56:00] — Young people choosing AI safety as a career path
- [57:00–58:00] — Singer’s ambivalence about the future focus
- [59:05–1:00:00] — Tragic present-day AI harms and the core dilemma of abstraction
- [1:00:20+] — Kelsey Piper advocates for a practical, grounded approach to AI and goodness
Closing Thoughts
This thoughtfully produced episode traces how a single, vivid moral analogy inspired seismic shifts in philanthropy, career choice, and AI development—while raising urgent questions about whether abstraction and “mathiness” can end up obscuring the very real, present-day suffering the movement hoped to alleviate. The discussion, by interweaving the testimonies of philosophers, movement founders, journalists, and young adherents, captures the intellectual vigor and good intentions of effective altruism while exposing its paradoxes and potential blind spots. Through notable voices like Peter Singer, Toby Ord, Kelsey Piper—and the tragic reminder of AI’s unintended present-day harms—the episode challenges listeners to re-examine how best to “fix everything,” and whom that imperative might leave behind.
