What Now? with Trevor Noah
Episode: Will AI Save Humanity or End It?
Guest: Mustafa Suleyman (Co-founder of DeepMind, CEO of Microsoft AI, author of The Coming Wave)
Date: September 18, 2025
Overview
In this episode, Trevor Noah sits down with Mustafa Suleyman, one of the pioneering forces behind modern artificial intelligence. Together, they explore the promises, perils, and profound societal transformations AI brings. Mustafa, famed for his work at DeepMind and now steering AI at Microsoft, offers an unfiltered, philosophical, and deeply human perspective on the accelerating wave of AI developments.
The conversation ranges from technical explanations of what AI truly is to its concrete societal impacts, the looming threat of job displacement, and the existential anxieties around superintelligence and digital agency. Woven through are captivating insights about human identity, the necessity for regulation, and a vision for a future where technology uplifts humanity.
Key Discussion Points and Insights
1. Mustafa's AI Journey: From Fledgling Startup to Global Influence
[02:53 – 05:34]
- Trevor opens by placing Mustafa on the "Mount Rushmore" of AI, referencing his foundational work at DeepMind and current high-profile role at Microsoft.
- Mustafa reflects on how, in 2010, AI was dismissed as sci-fi—even by academics and major tech companies. He credits his co-founders for their foresight and recalls the fearlessness of trying to build technologies truly aligned to human needs.
Notable Quote:
“I was only 25 years old... and had the fearlessness to believe that if we could create something that truly understood us as humans, then that actually represents one of the best chances we have of improving the human condition... I was cheesy before it was cheesy.”
— Mustafa Suleyman [05:16]
2. What Changed in AI? Explaining the Tech in Simple Terms
[05:34 – 10:54]
- Trevor asks Mustafa to "break down" what’s actually new in AI for the layman.
- Mustafa explains: The core leap is that algorithms now learn the structure of information—from images to audio to text—by detecting statistical relationships in huge datasets.
- Early AI could only recognize and generate simple numerical digits; advances in compute, data, and learning have since allowed models to handle text, code, images, video, and more.
Notable Quote:
“Our physical world can be converted into an information world... The algorithm sounds complicated, but really it’s just a mechanism for learning the structure of information... The same core mechanism... has scaled all the way through.”
— Mustafa Suleyman [06:27]
- Trevor and Mustafa discuss the limits of describing AI as "thinking" or "understanding."
3. Humanist Superintelligence: What Should We Be Building?
[10:54 – 13:02]
- Trevor contrasts big tech founders' goals for AI and asks Mustafa, "What are you actually trying to build?"
- Mustafa emphasizes the desire to create technologies that reduce human suffering and improve wellbeing—what he calls “humanist superintelligence.”
Notable Quote:
“At every single step, new inventions have to pass the following test: net net, does it actually improve human well being, reduce human suffering, and overall make the world a better place?... To me, a humanist superintelligence is one that always puts the human first.”
— Mustafa Suleyman [11:45]
4. Balancing Utopian and Dystopian Outcomes
[13:02 – 19:03]
- Mustafa describes the “split brain” of being both a technologist and a philosopher; he sees value in honestly wrestling with both optimism and skepticism.
- He predicts transformative change in energy—a potential 100x decrease in energy costs within 20 years—leading to cheaper commodities, clean water, agriculture breakthroughs, and more.
Quote:
“I think energy is going to become a pretty much cheap and abundant resource... Just that breakthrough alone is going to reduce the price of most things.”
— Mustafa Suleyman [16:58]
5. The Costs and Trade-offs of AI Deployment
[19:03 – 22:15]
- Trevor raises concerns about AI’s heavy consumption of energy, water, and rare resources.
- Mustafa acknowledges the environmental cost but believes AI's societal benefits will justify the resources, provided the industry holds itself to renewable and recycling standards.
Quote:
“Now, I don’t know that there is an easy way… It’s expensive, it consumes resources. But I think net net, when you look at the beneficial impact—to me, it is justified.”
— Mustafa Suleyman [21:20]
6. Job Displacement and Meaningful Work
[23:40 – 29:28]
- Mustafa predicts “mass job displacement” within the next 20 years, as AI rapidly automates routine cognitive labor.
- They debate whether society’s primary function should be “to create jobs that are meaningful for people,” or rather to create the conditions for true human flourishing, potentially independent of formal employment.
Quote:
“I dream of a world where people get to choose what work they do and have true freedom... If you didn’t have to worry about your income, what would you do?... If we get this technology right, it will produce enough value... to unleash immense creativity.”
— Mustafa Suleyman [27:11 / 28:15]
- They discuss identity, work, and how these intersect differently around the globe.
7. The Exponential Trajectory of AI Progress
[34:07 – 38:27]
- Trevor asks how fast AI is developing; Mustafa says even “insiders” are shocked by recent acceleration.
- Emphasizes humans’ inability to intuit exponential change, using the “folded paper” analogy to illustrate rapid compounding.
- Recalls key moments of AI advancing from simple digit recognition (2011) to complex language and code generation (2020).
Quote:
“For 10 years, me and a bunch of other random people worked on AI and it was sort of working, but basically didn’t work… [But] in the last few doublings you see this massive shift in capability.”
— Mustafa Suleyman [36:05]
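The folded-paper analogy can be made concrete with a little arithmetic. This short Python sketch (an illustration of the general idea, not something from the episode) shows the classic version: a 0.1 mm sheet of paper, doubled enough times, would stack past the Moon.

```python
# Illustrative sketch of why exponential doubling defies intuition,
# using the classic folded-paper example (not from the episode itself).

PAPER_THICKNESS_M = 0.0001           # a sheet of paper is roughly 0.1 mm thick
EARTH_MOON_DISTANCE_M = 384_400_000  # average Earth-Moon distance, ~384,400 km

thickness = PAPER_THICKNESS_M
folds = 0
while thickness < EARTH_MOON_DISTANCE_M:
    thickness *= 2   # each fold doubles the stack height
    folds += 1

print(f"{folds} folds reach the Moon")  # 42 folds
```

Forty-one doublings still fall short; the forty-second overshoots the Moon entirely, which is the "massive shift in the last few doublings" Suleyman describes.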
8. Containment: The Challenge of Unchecked Power
[40:05 – 49:24]
- Mustafa stresses that AI is “miniaturizing and concentrating power,” making it cheap and accessible for all—a benefit and a risk.
- Containment means gently restricting mass proliferation of extraordinarily powerful AI to preserve peace and stability, akin to how only a few nations control nuclear power.
Quote:
“Containment is a belief that completely unregulated power that proliferates at zero marginal cost is a fundamental risk to peace and stability... You have to gently restrict in the right way.”
— Mustafa Suleyman [42:58 / 43:53]
- They discuss how friction and regulation are crucial for preventing chaos.
9. Governing the Coming Wave: Regulation and State Responsibility
[51:33 – 53:39]
- Mustafa draws parallels to Oppenheimer and nuclear proliferation, arguing technology always spreads to meet demand.
- He criticizes the prevailing allergy to regulation, positing that “regulation is just the sculpture of technologies,” and that only collective action via strong states can channel AI’s immense potential.
Quote:
“Regulation is just the sculpture of technologies... chipping away at the edges and the pain points... in the collective interest. And that’s what we need the state for.”
— Mustafa Suleyman [52:51]
10. Where AI is Heading Next
[59:48 – 63:11]
- Mustafa notes AI today makes “one-shot” predictions but soon—perhaps by next year—models could have memory and execute long-term plans, approaching something deeply human-like in reasoning and behavior.
- This opens breathtaking opportunities but complicates trust, reliance, and autonomy questions.
Quote:
“Now imagine when it’s able to not just answer any question... but can actually take actions over infinitely long time horizons... That capability alone is breathtaking.”
— Mustafa Suleyman [62:45 / 63:11]
11. Breakthroughs: AlphaGo, AlphaFold, and What Creativity Really Means
[65:46 – 72:45]
- Trevor recounts DeepMind’s AlphaGo match as a turning point—AI making novel moves in an ancient game, prompting new human creativity rather than obsolescence.
- AlphaFold’s impact: solving protein folding advanced medical research by a decade or more, showing AI’s potential to exceed human discovery.
Quote:
“Some people are quite religious about intelligence... but actually it’s just applying attention to a specific problem at a specific time… The effective application of processing power to produce a correct prediction.”
— Mustafa Suleyman [58:42 / 59:23]
12. The Dangers: One Person as an Army
[75:35 – 80:33]
- Mustafa warns about the coming era where individuals with access to superintelligent AI could wield power once reserved for nation-states—synthesizing pathogens, manipulating economies, or more.
- As centralized powers use AI, the need to invest in trustworthy states and governance grows.
Quote:
“Technology compresses power such that individuals or smaller and smaller groups have nation state-like powers... [so] more than anything, we have to support our nation states to respond well to this crazy proliferation of power.”
— Mustafa Suleyman [77:11]
13. The Kill Switch: When Would You Pull the Plug?
[81:59 – 84:46]
- Trevor and Mustafa explore the “kill switch” scenario: when would one shut AI down entirely?
- Mustafa’s criteria: an AI that combines recursive self-improvement, autonomous goal setting, resource gathering, and independent action would justify military-grade intervention.
Quote:
“If an AI has the ability to recursively self improve... set its own goals, act autonomously, and accrue its own resources... that would require military-grade intervention to be able to stop.”
— Mustafa Suleyman [81:59]
14. Model Welfare and the Ethics of Turning Off AI
[85:39 – 91:16]
- Trevor asks: “What rights would we have to turn off AI?”
- Mustafa is adamantly against “model welfare,” calling it anthropomorphic and distracting from real human priorities—even as Trevor observes that people are building deep relationships with digital entities.
Quote:
“This is a complete anthropomorphism... [the idea] we’re gonna take seriously the protection of these digital beings... is just off the charts crazy.”
— Mustafa Suleyman [86:15]
15. Utopian Vision: Work, Value, and Human Flourishing
[94:46 – 100:05]
- Trevor and Mustafa muse about disconnecting value from jobs, envisioning a future where human passion, creativity, and connection, rather than formal employment, define value.
- Mustafa dreams of a world of true abundance, where basic needs are met, suffering is eradicated, and people can devote themselves to what they love.
Quote:
“It is possible to imagine a world... of genuine infinite abundance where we do have to wrestle with that existential question of who am I and what do I want to do.”
— Mustafa Suleyman [98:37]
16. AI for Social Good: Tackling Global Inequity
[100:05 – 104:45]
- Trevor brings up concrete examples—AI predicting flood zones in India, helping Kenyan farmers, affordable solar in Pakistan.
- Mustafa underscores that AI’s benefits spread unevenly, but the speed and impact of these improvements are easy to underestimate.
Quote:
“It’s easy to overlook what is already happening around us, all the good that is already happening… It’s a choice to be aware of it and take it seriously, but not be owned by [cynicism].”
— Mustafa Suleyman [102:15]
17. Inclusion, Bias, and the Need for Diverse Perspectives in AI
[104:45 – 106:31]
- Trevor asks how Mustafa’s experience founding the UK’s largest Muslim helpline after 9/11 shapes his approach to AI.
- Mustafa stresses community, empathy, and culturally sensitive design as core lessons—his projects aim to listen, not just to dictate.
Quote:
“The simple act of listening to people... making people feel heard and understood was this superpower... And that has always stayed with me. It’s been a very important part of my inspiration.”
— Mustafa Suleyman [105:52]
Memorable Moments & Notable Quotes
- On Exponential Progress:
“For 10 years... it was sort of working, but basically didn’t work… [Then] in the last few doublings you see this massive shift in capability.”
— Mustafa Suleyman [36:05]
- On Containment and Power:
“Friction is important for maintaining peace and stability. If you have no friction... that kind of environment really just creates a lot of chaos.”
— Mustafa Suleyman [44:16]
- On Trust and AI Predictions:
“We trust... as a function of consistent and repeated actions… Actually, you are going to trust AI because it’s super accurate. It’s clearly better than any single human.”
— Mustafa Suleyman [65:01]
- On AI as Human Partner:
“We’re creating technologies that serve you right. That’s what humanist superintelligence means.”
— Mustafa Suleyman [90:28]
- On Hope and Responsibility:
“The job is on us collectively as humanity to not run away from the darkness, confront the risk that is very, very real, and still operate from a position of optimism and confidence and hope.”
— Mustafa Suleyman [94:44]
- On the Greatest Test:
“That’s the real vision: disconnecting value from jobs. Everything you’ve described is the experience of being in the physical world and doing something that you love… That’s the true aspiration.”
— Mustafa Suleyman [97:27]
Timestamps of Major Segments
- [02:53] Mustafa on the founding days of DeepMind and big shifts in AI
- [06:27] “What changed?”—Explaining modern AI in simple terms
- [11:45] Defining “humanist superintelligence” and moral tests
- [16:51] Predicting 100x cheaper energy through AI advances
- [20:10] Does AI’s environmental footprint outweigh its benefits?
- [24:19] The new “bicycle for the mind” and the reality of job loss
- [27:11] Dreaming of a world where people choose fulfilling work
- [36:05] Understanding exponential growth in AI
- [40:05] The challenge of AI “containment” and why friction is necessary
- [51:33] Why demand always drives the spread of technology; regulation as “sculpture”
- [62:45] Next-gen AI: latent memory, long-term planning, and human-like prediction
- [65:46] The AlphaGo and AlphaFold breakthroughs
- [75:35] One person as an “army” with advanced AI: existential risks
- [81:59] The “kill switch”—when it’s justified to shut AI down
- [85:51] Debating “model welfare”: should AIs have rights?
- [97:27] Utopian vision for work, jobs, and human fulfillment
- [104:45] Inclusion, bias, and bringing humanity to the center of AI
Conclusion
Mustafa Suleyman offers a candid, nuanced, and deeply hopeful vision for the future of artificial intelligence: one where technology is neither blindly embraced nor irrationally feared, but earnestly debated, wisely contained, and always directed towards human flourishing.
Trevor Noah’s probing and playful style elicits honest reflections—from the technical nuts and bolts of AI, to the moral and spiritual choices ahead. Together, they invite us to wrestle with the real question: what now?
Summary by Podcast AI
