Making Sense with Sam Harris
Episode #467 — EA, AI, and the End of Work
Date: March 30, 2026
Host: Sam Harris | Guest: Will MacAskill (Philosopher, Effective Altruism Advocate)
Episode Overview
In this episode, Sam Harris is joined for the fourth time on the main podcast by philosopher Will MacAskill to reflect on a decade of Effective Altruism (EA), explore recent turbulence in the movement, and critically examine the boundaries and future directions for charitable impact. The conversation addresses the shifting landscape of philanthropy, the implications of global political retrenchment, the enduring power and potential pitfalls of focusing on quantifiable suffering, and how advances in artificial intelligence could radically alter humanity’s prospects and our ethical landscape.
Key Discussion Points and Insights
1. State of the Effective Altruism Movement
Timestamps: 00:36–04:24
- Will MacAskill reflects on 10 years since his book Doing Good Better, detailing updates in the new edition, including revised statistics and a new foreword addressing objections and lessons from the past decade.
- Sam brings up the blow to EA’s public image from scandals such as Sam Bankman-Fried/FTX, asking about lasting impacts.
- MacAskill: Despite the setbacks, EA’s growth—measured by money moved (approaching $2 billion annually), Pledge participation, and conference engagement—continues robustly.
"If you look at how much money is being moved to effective nonprofits... it actually grew just kind of steadily even through these periods of drama and cryptocurrency implosions." (03:00, MacAskill)
- Sam highlights the psychological and practical benefits of pre-committing to the 10% pledge, while raising questions about the conceptual boundaries of “effectiveness” in EA.
2. The Canonical Causes of Effective Altruism
Timestamps: 04:24–15:53
A. Global Health and Development
- MacAskill underscores that global health and development remains EA’s main philanthropic focus due to its proven, scalable impact.
- Sam articulates the common critique: why prioritize distant suffering over that of fellow Americans, especially as cynicism toward traditional philanthropy mounts in tech and political circles.
- Quote:
"Philanthropy doesn’t really work. Sending money to Africa is just kind of foolish... These are just all criminals who are wasting our money over there." (06:24, Harris)
- MacAskill counters that, though some aid is wasteful, the most effective interventions are backed by robust randomized-controlled-trial evidence (a quick check of the cited arithmetic follows this list):
"The donations that have gone via GiveWell have saved... over 340,000 lives. Now this is at a cost of about $5,000 per life. Whereas in the United States... it's about $50,000." (08:30, MacAskill)
- Sam points to a Lancet study suggesting that millions may die from reduced US aid, a stark illustration of the stakes; even on conservative estimates, the humanitarian losses are staggering.
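A quick back-of-the-envelope check of the figures MacAskill cites (illustrative arithmetic only; these totals are not computed on air): 340,000 lives × $5,000 per life ≈ $1.7 billion in cumulative GiveWell-directed donations. At the roughly $50,000 per life typical of US interventions, the same sum would save about $1.7 billion ÷ $50,000 ≈ 34,000 lives, an order of magnitude fewer.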
B. Animal Welfare
- MacAskill: Factory-farmed animals (especially chickens) number in the tens of billions and endure tremendous suffering.
- Small investments have generated huge change; e.g., corporate cage-free campaigns have meaningfully improved the lives of billions.
“Factory farming is one of the worst atrocities that humanity is committing today.” (10:20, MacAskill)
- Sam raises EA’s “weirdness frontier”: a focus on shrimp welfare, digital-mind suffering, and the like could alienate many would-be sympathizers and strike them as evidence of moral bankruptcy.
- MacAskill defends rigorous moral exploration—even when it appears eccentric—drawing parallels with past social advances once seen as absurd, like abolitionism and women’s rights.
“Those people who pushed early on, looking like weirdos... being what I call a moral weirdo... at least some groups need to be in the business of really trying to figure this out.” (15:30, MacAskill)
C. Pandemic Preparedness
- Once considered speculative, pandemic risk is now central post-COVID-19.
“It's just not that unlikely, to me maybe one in three, that we will just see waves and waves of new pandemics as a result of people tinkering with viruses... and it leaking out.” (17:38, MacAskill)
- MacAskill details straightforward interventions: maintaining stockpiles, sterilizing indoor air, and monitoring wastewater. Lab leaks, which have already occurred repeatedly, pose growing risks as biotech tools get cheaper and knowledge spreads.
- He cites both accidental leaks and deliberate threats (state-level or terrorist) as plausible, pressing dangers.
D. AI Risk and Progress
- Both agree: what seemed speculative now feels near-term and urgent.
“Now we can do experiments on AI systems to get a sense of how they act, what the risks are, what the potential benefits are, and we can have a lot more confidence.” (21:38, MacAskill)
- Exponential growth in AI capacity is highlighted, especially the point at which AIs begin automating AI research itself, possibly triggering rapid, disruptive leaps in capability (a toy model of this feedback loop appears after this list).
- Sam notes rapid advances:
"As of 2022, AI researchers forecast that it wouldn't be until... 2025 that AI would win the Math Olympiad... but that's exactly what happened." (20:17, Harris)
3. The Bias Toward "Negatively Valenced" Ethical Targets
Timestamps: 22:58–26:10
- Sam critiques the EA tendency to focus mainly on preventing suffering and existential risk rather than on actively promoting human flourishing.
- They discuss how medicine, analogously, aims to return people to “normal” rather than to cultivate superlative well-being.
- MacAskill suggests that EA’s suffering-prevention focus is contingent rather than necessary; future generations will likely look back on present-day life as impoverished in ways we cannot yet imagine.
- Quote:
“I do think that future generations will look back at our lives today and think, ‘Oh my God, they missed out, they didn’t have X, Y, Z…’” (25:05, MacAskill)
4. Hard-to-Quantify but Pivotal Social Problems
Timestamps: 26:10–29:14
- Sam drills down on EA’s bias toward quantifiable interventions (cost per life saved, DALYs, RCTs), worrying that catastrophe-scale opportunity costs arise from intractable, culture-wide problems:
- e.g., the social-media-fueled “manosphere” pivot to Trump, the podcast-driven normalization of anti-progressive politics, and the lack of skepticism in influential forums.
- The speculative but plausible claim: influencing the influencers could have higher expected value than classic EA priorities, given consequences like a U.S. global retreat that affects many cause areas (pandemics, climate, nuclear proliferation); a rough expected-value illustration follows this list.
- Quote:
“If we could have done something in advance to have inoculated the tech bro manosphere podcasters against the charms of Trump and Trumpism... that was one of those things... arguably more important than anything on GiveWell's website right now.” (28:09, Harris)
- Sam offers a challenge: can EA thinking and philanthropic strategy be expanded to target diffuse, "wicked" societal problems that resist clean measurement?
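To see why this claim is at least arithmetically coherent, consider a rough expected-value illustration (the numbers are invented for illustration, not taken from the episode): an intervention with even a 1% chance of averting a policy shift that would otherwise cost 10 million lives carries an expected value of 0.01 × 10,000,000 = 100,000 lives, within range of the ~340,000 lives credited to all GiveWell-directed giving to date. The catch, as both speakers acknowledge, is that neither the probability nor the counterfactual toll can be estimated with anything like RCT-level rigor.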
Notable Quotes & Memorable Moments
- MacAskill on the resilience of the EA movement: “That was a huge hit, but the underlying ideas are very good... people are still being convinced by the importance of giving more and giving more effectively or using their career to do good.” (02:42)
- Harris on the psychological impact of the 10% pledge: “Just deciding in advance to give a certain percentage of money... knowing that on some level that money isn’t even mine... That’s just an enormous psychological change.” (03:15)
- MacAskill on factory farming: “The conditions they live in are truly atrocious... when those animals die, that's the best thing that happened to them because their lives are full of such suffering.” (10:13)
- Harris on the strangeness of the moral frontier: “If we're going to push the conversation to a place where we're asking people to care about how Nvidia's latest chips feel in some configuration... even the Dalai Lama is not going to be able to shed a tear about digital minds.” (12:25)
- MacAskill on moral progress: “Looking back at ideas that we now think of as utterly morally common sense... these are things you would have been mocked for... I think at least some groups need to be in the business of really trying to figure this out.” (15:05)
- Harris on the quantifiability challenge: “I worry that we're sort of blind to obvious problems... [where] intervention would be hard to quantify, but they're blocking everything.” (26:35)
Important Timestamps
- 00:36 – Will MacAskill joins the discussion, reflecting on a decade of EA and recent controversies.
- 03:03 – The impact and momentum of the Giving What We Can pledge, and $30 million moved by listeners.
- 06:24 – Critique of global health philanthropy from modern cynics and response with data.
- 10:13 – The case for farm animal welfare as “one of the worst atrocities” humanity is committing, and its massive return per dollar.
- 12:25 – Debates about the plausibility and public receptivity of atypical suffering: shrimp, digital minds, and animal consciousness.
- 15:05 – The importance of “moral weirdos” and the historical arc of once-radical ideas.
- 17:38 – Pandemic preparedness and the specter of bioterror or unregulated synthetic biology.
- 20:17 – How fast AI has outpaced expert timelines.
- 21:38 – The stability and implications of exponential progress in AI, and the looming feedback loop.
- 25:05 – Discussing the pursuit of positive goods for future generations, not just the absence of suffering.
- 28:09 – Harris’s call to expand philanthropic imagination beyond current EA quantifiability, to more diffuse but consequential risks.
Overall Tone
The episode is contemplative and probing, blending Sam’s philosophical skepticism and MacAskill’s analytical optimism. Both wrestle—intellectually and practically—with the limits, promise, and future direction of EA in a world of accelerating technological and political flux.
