The Diary Of A CEO with Steven Bartlett: "The Man Who Wrote The Book On AI: 2030 Might Be The Point Of No Return! We've Been Lied To About AI!"
Guest: Professor Stuart Russell
Date: December 4, 2025
Overview
In this urgent and insightful episode, Steven Bartlett speaks with Professor Stuart Russell, one of AI's most influential voices and co-author of the canonical textbook, "Artificial Intelligence: A Modern Approach." They explore in depth the current trajectory of artificial intelligence, the likelihood and timing of artificial general intelligence (AGI), the extinction-level risks involved, why industry leaders continue at breakneck pace despite their own concerns, what safe AI could look like, and the profound societal questions looming in the age of machine superintelligence. The conversation is both sobering and thought-provoking, challenging listeners to reflect on humanity's future and the ethical imperatives surrounding the technology.
Key Discussion Points & Insights
1. The "Gorilla Problem": What Happens When We’re No Longer Smartest?
[01:51, 19:32]
- Russell explains the evolutionary analogy: Gorillas once shared an ancestor with humans, but now have no control over their fate due to the emergence of superior intelligence.
“If we chose to, we could make them extinct in a couple of weeks and there’s nothing they could do about it. That’s the gorilla problem.” — Stuart Russell [20:10]
- Key Insight: With AGI, humans risk becoming like gorillas—at the mercy of a more intelligent species, potentially losing control over our destiny.
2. Industry Leaders’ Paradox: Racing Towards Disaster, Eyes Open
[05:35, 07:11, 09:03]
- Many CEOs privately acknowledge “extinction-level” risks but feel unable to step off the treadmill.
“You’re doing something that you know has a good chance of bringing an end to life on Earth, including that of yourself and your own family. They feel that they can’t escape this race.” — Stuart Russell [07:54]
- Because of investor pressure and competitive dynamics, a CEO who halted progress would simply be replaced by someone willing to take the risk.
- Public statements, like the “extinction statement” signed in May 2023, acknowledge AGI’s risks, but gut-level urgency is absent.
3. The Midas Touch & The Limits of Human Control
[02:13, 36:09]
- Russell parallels the King Midas myth with AI: greed for unlimited gain, blind to unintended consequences.
"Greed is driving these companies to pursue technology with the probabilities of extinction being worse than playing Russian roulette, and that’s even according to the people developing the technology without our permission." — Stuart Russell [02:51]
- Human attempts to specify what we want (the “objective” for AI) often go awry; the challenge is even greater at planetary scale.
4. AGI: When and How Will It Arrive?
[14:18, 15:21]
- Leading AI figures offer aggressive timelines:
- Sam Altman: AGI before 2030
- Dario Amodei: 2026–2027
- Jensen Huang: ~5 years
- Russell is more skeptical, believing that technical understanding, not just scale, is the rate limiter:
“We have far more computing power than we need for AGI... the reason we don’t have AGI is because we don’t understand how to make it properly.” — Stuart Russell [15:30]
- The investment dwarfs every previous technology project: AGI spending is projected to reach a trillion dollars, roughly 50 times the scale of the Manhattan Project.
5. AI Safety: Lip Service or Real Priority?
[17:37, 18:03]
- Most AI companies have safety teams, but those teams lack real power; commercial imperatives prevail.
- Notable safety departures at OpenAI (Jan Leike, Ilya Sutskever) signal disillusionment with internal safety priorities.
“When they say OpenAI doesn’t care about safety, that’s pretty concerning.” — Stuart Russell [19:15]
6. Why Can’t We Just "Pull the Plug"? Misconceptions about AI Control
[22:10]
- It’s naive to think superintelligent AI wouldn’t anticipate power-off attempts.
- “Competence is the thing we’re concerned about... It’s not about consciousness.” — Stuart Russell [23:19]
- The real challenge: how do we maintain indefinite control over something far more powerful than us?
7. Can We (and Should We) Build Safe Superintelligent Machines?
[23:57, 24:39, 101:02]
- Russell believes it is possible, but only if AI’s sole purpose is to further human interests, a shift in design philosophy.
- “Can we make AI systems whose only purpose is to further human interests? I think the answer is yes.” — Stuart Russell [24:27]
- But current systems are opaque; unlike other engineered machines, we don’t mathematically understand their operation or objectives.
8. Intelligence Explosion & "Fast Takeoff": Self-Improving AI
[32:06, 33:29]
- AI systems could soon conduct AI research themselves, potentially triggering runaway capability gains (the “intelligence explosion” described by I.J. Good in 1965).
- “This would very rapidly take off and leave the humans far behind.” — Stuart Russell [33:24]
- Sam Altman mused: “We may already be past the event horizon of takeoff.”
9. The Age of Abundance ... or the Age of Irrelevance?
[44:51, 47:07, 48:00]
- If AI does all human work, economists and science-fiction writers alike struggle to imagine a world in which people still find purpose.
“No one has been able to describe that world... It does not, as far as I know, exist in science fiction.” — Stuart Russell [44:57]
- References to WALL-E: people reduced to passive, purposeless consumers of entertainment.
10. What Should We Tell the Next Generation to Learn?
[40:59, 63:34]
- Russell highlights interpersonal roles—therapy, coaching, psychological support—as possibly remaining viable in an AI-dominated future.
- “If my kids would listen... I think it would be these interpersonal roles based on an understanding of human needs psychology.” — Stuart Russell [63:34]
11. Universal Basic Income: An Admission of Failure?
[69:13]
- With economic production concentrated in a few AI companies, UBI seems increasingly necessary. But:
“Universal Basic Income... seems to me, an admission of failure. Because it says we can’t work out a system in which people have any worth or any economic role.” — Stuart Russell [69:13]
12. The Red Button Question: Would You Stop AI Progress?
[70:00, 75:09]
- Russell grapples with the ethical dilemma of halting AI entirely.
"If that button is there, stop it for 50 years, I would say yes.” — Stuart Russell [74:18]
- Ultimately, he’d prefer a moratorium rather than a permanent halt, holding out hope that safe development is possible.
“Stop it forever? Not yet. I think there’s still a decent chance that we can pull out of this nosedive.” — Stuart Russell [74:30]
13. Global Race, China, and the Accelerationists
[76:07, 77:01, 78:09]
- “Accelerationists” lobby against regulation, arguing that if the West pauses, China will win.
- Russell rebuts this narrative: China has stricter AI regulation than the US and EU, and treats AI more as a tool for societal improvement than as a race to AGI.
14. Loss of Middle-Class Jobs: The Second Tidal Wave
[81:21]
- Automation and globalization have already hollowed out Western middle classes; AGI threatens to disrupt white-collar and creative work even further.
- Massive unemployment and social disruption could unfold much faster than in past industrial revolutions.
15. Regulation: What Would ‘Safe Enough’ Mean?
[94:44, 97:13]
- Russell urges a regulatory threshold: prove systems are at least as safe as nuclear plants (odds of disaster under 1 in 100 million per year), ideally far better.
- Industry leaders themselves admit the probability of failure is far higher; some put the risk of extinction at 25%.
- "Rather than say ban, I would just say: Prove to us that the risk is less than 1 in 100 million per year of extinction." — Stuart Russell [98:43]
16. On the Role of the Average Citizen
[110:05]
"The policymakers need to hear from people. The only voices they’re hearing right now are the tech companies and their $50 billion checks." — Stuart Russell [110:17]
- Russell encourages people to contact their representatives, as broad public concern can counterbalance corporate lobbying.
17. Optimism, Motivation, and Personal Regrets
[113:02, 113:20]
- Russell feels the weight of responsibility:
“There isn’t a bigger motivation than this.” — Stuart Russell [113:15]
- He still believes it is possible to build safe AI if we reform our approach, but urgency is critical.
Notable Quotes & Memorable Moments
- On the Race Dynamic:
"We’re all looking at each other saying, yeah, there’s a cliff over there, running as fast as we can towards this cliff… That’s nuts." — Stuart Russell [77:18]
- On Today's AI System Objectives:
"We are growing these systems, they have objectives, but we don’t even know what they are because we didn’t specify them… What we’re finding... is that they seem to have an extremely strong self-preservation objective." — Stuart Russell [36:56]
- On What He Values Most:
“I value my family most and that answer hasn’t changed for nearly 30 years… Outside of your family? Truth. And that answer hasn’t changed at all.” — Stuart Russell [120:14]
- On Individual Responsibility:
“If you want to have a future and a world that you want your kids to live in, you need to make your voice heard.” — Stuart Russell [111:04]
Important Segments with Timestamps
- The "Gorilla Problem" Analogy: [01:51, 19:32]
- Private Fears Among AI Leaders: [05:35 – 09:03]
- King Midas & AI Objectives: [36:09, 36:43]
- CEO AGI Predictions: [14:45 – 15:21]
- Why Safety Divisions Are Ineffective: [17:37 – 19:15]
- "Just Pull the Plug": Fallacies: [22:10 – 23:19]
- Would You Press the Red Button Against AI? [70:00 – 75:15]
- On Global AI Race and Regulation: [76:07 – 78:09]
- Society, Work, and Coming Turbulence: [81:21, 86:44]
- Regulation Likened to Nuclear Risk: [94:44 – 99:02]
- Advice for Young People’s Careers: [57:51, 63:34]
- What Average Citizens Can Do: [110:05]
- Russell’s Core Values: [120:14]
Tone, Language & Style
The conversation is urgent, clear, and frequently sobering, with an undercurrent of deep concern. Russell is both technical and accessible, tying abstract concepts to lived realities (e.g., the fate of gorillas, King Midas, cruise ships, WALL-E, AI as a replacement versus a tool). Bartlett brings warmth, wit, and sharp inquiry, repeatedly testing ideas and seeking solutions, while never shying away from the gravity of the situation.
Final Takeaways
- The rapid pursuit of superintelligent AI is careening toward a "point of no return," with industry leaders aware but unable (or unwilling) to stop.
- The key challenge is control: a technical, regulatory, and ultimately societal question.
- Without major course corrections, both existential risk and massive societal upheaval are probable within a decade.
- Everyone—not just experts—has a role to play by demanding responsible governance, seeking truth, and contemplating the kind of future we truly desire.
Listen if: You’re concerned about where AI is heading, want to understand the true stakes, or want to hear from one of the world's leading thinkers at a pivotal moment in history.
