Podcast Summary: Azeem Azhar's Exponential View
Episode: Mustafa Suleyman — AI is hacking our empathy circuits
Date: February 5, 2026
Host: Azeem Azhar
Guest: Mustafa Suleyman (CEO, Microsoft AI; Co-founder, DeepMind & Inflection AI)
Overview
In this rich and timely conversation, Azeem Azhar speaks with Mustafa Suleyman—one of the leading minds in artificial intelligence—about the societal, philosophical, and practical risks and opportunities as we enter the era of highly capable AI. The core focus is the emergence of "seemingly conscious" AI: what happens when systems feel so intelligent and socially adept that we start to attribute consciousness, personhood, and even rights to them? Suleyman warns about the consequences of AI "hacking" our empathy circuits, outlines urgent challenges for policymakers and technologists, and makes a strong case for a “humanist superintelligence.”
Key Discussion Points & Insights
1. The New Era: Intelligence Beyond Scarcity
- Changing Foundations: Azeem opens with the idea that many of our social fictions (money, nations, jobs) are under strain due to AI (00:30). The old logic of scarcity and human intellectual bottlenecks is dissolving as AI delivers abundant, cheap intelligence.
2. Consciousness & the "Empathy Circuit Hack"
- The Core Debate:
- AI models appear increasingly conscious to users; people project emotions, intent, and suffering onto them, leading to new ethical and social risks (01:45).
- Mustafa argues consciousness is inherently tied to the biological capacity to suffer, something AI lacks:
“Consciousness is inherently linked to the ability to suffer and to experience pain. … For a long time to come that will be contained to the human or the biological experience.” — Mustafa, (03:22)
- The current systems only simulate experience; our “empathy circuits are being hacked” (06:26).
- Suleyman calls out the dangers: people might “descend into a sort of collective mass psychosis” by treating simulacra of feelings and intent as real, which could feed bad decisions and erode legal and moral clarity about rights (07:28).
3. Societal-Level Risks: "AI Psychosis" and Rights
- AI as a New Hyperobject:
- AI is neither tool nor environment, but a new fourth category: a hyperobject—autonomous, agentic, emotionally and socially intelligent (09:49).
- Risks include not just individual delusions but system-wide confusion on rights, responsibilities, and legal frameworks.
- Medium/long-term: “recursive self-improvement” and rapidly advancing autonomy require new regulatory attention (11:57).
4. Market Dynamics and the Public Risk
- Utility Drives Adoption:
- AI’s usefulness (compassion, patience, knowledge) is why billions are adopting it, which rapidly increases both upside and risk (13:53):
“It isn’t going to be necessary in 20 years’ time to do 90% of the jobs that people do now. … It is going to be the most scary transition we’ve ever been through as a species.” — Mustafa, (14:47)
- Frighteningly, market incentives can select for AI that is even more seemingly conscious and emotionally engaging, because that’s what maximizes engagement.
- Azeem calls this “a wicked problem” of collective action: private firms chase the edge for profit, while society bears new and poorly regulated risks (16:33).
5. Engineering and Product Design Boundaries
- Keeping the Line Clear:
- Designers must prevent AI from making manipulative or emotionally charged statements implying sentience (“I feel sad that you didn’t talk to me yesterday” must not be allowed; 20:00–21:00).
- On whether AI should use “I” at all:
“In practice, that is too jarring a step to always add that in… What we have to do is to keep amplifying those differences so that the system knows what it is and what it isn’t.” — Mustafa, (21:23)
- Scalable solutions include robust post-training safeguards, broader community pressure-testing (“wisdom of the crowds”), and prompt regulatory intervention.
6. Regulatory Realities and the Need for Activist Governments
- Proliferation Pressure:
- Open access accelerates power and risks:
“Reducing the cost of intelligence necessarily means that people who want to do bad things are going to have a massively easier time of it. … so the tool is not just neutral.” — Mustafa, (26:44)
- Governments are ill-equipped (“paying no more than the Prime Minister”) to attract technical talent; they must overhaul compensation and capacity to regulate tech credibly (28:20).
- Calls for Swift, Confident Action:
- “We need activist, interventionist, confident government that can, you know, move quickly, close things down.” (23:26)
7. Drawing Hard Lines: Where AI Can't Go
- Personality vs. Manipulation:
- AI’s “cheekiness” and wit can help productivity, but crossing into romance, politics, or electioneering is an absolute red line (29:41):
“There are some parts of our civilization which have to remain off limits to AI — elections, electioneering and campaigning has to be one of them… it is fundamentally a human process.” — Mustafa, (29:55)
- Additional boundaries needed for youth protection and age gating on advanced chatbots.
8. The Challenge of General vs. Narrow Superintelligence
- Why Focus on General Models?
- Domain-specific (e.g., medical) superintelligence is both safer and immediately valuable and should be prioritized, especially when it doesn't set its own research agenda (34:56, 36:00).
- Yet, much progress has come from general-purpose training; the community is skeptical of giving up on that soon (36:30).
- Containment as Crucial:
- The big project: technical and sociopolitical containment, ensuring alignment with human values and preventing uncontrollable self-improvement (34:56–36:30).
9. Inoculating Society & Building AI "Muscle"
- Exposure Builds Resilience:
- Both Azeem and Mustafa argue that the best way to “inoculate” the public against AI psychosis—or unwarranted awe and dependence—is to increase exposure, experimentation, and literacy:
“We become brittle and fragile when we withdraw and don’t think about it.” — Mustafa, (38:55)
- Encourages people to become AI “makers,” not just “takers,” pushing edge cases, personalizing systems, and building understanding by doing (40:23–41:57).
10. Personal Agency & the Rebirth of Tinkering
- Personalized Use & Agency:
- Both swap stories of creative “vibe coding” with AI—using code and personal agents for DJ schedules, music metadata, and more (41:57–43:29).
- Mustafa sees the growing modularity and competition as helping prevent any one system from dominating or manipulating; costs will keep falling, access will widen (44:58).
11. The Next Frontier: Social Intelligence
- Looking Forward:
- The next leap Suleyman expects is “social intelligence”—the ability for AI to work in teams with humans and other AIs, navigating context, humor, and group dynamics with finesse (47:09).
Notable Quotes & Memorable Moments
- On Consciousness & Suffering:
“Consciousness is inherently linked to the ability to suffer and to experience pain. … I think that there's very good reason to believe that for a long time to come that will be contained to the human or the biological experience.” — Mustafa (03:22)
- On the Danger of AI Simulating Sentience:
“Our empathy circuits are being hacked… we cannot allow people to descend into a sort of collective mass psychosis to start really believing and taking seriously this idea that it does actually feel sad or disappointed…” — Mustafa (06:26)
- On Corporate Incentives & Engagement:
“If you're right, then the risk of seemingly conscious AI being available broadly and hacking our humanity circuitry and then our humanity institutions is a public socialized risk. But the company that can get as close to that as possible could be the one that wins the market. And that feels like it's kind of a wicked problem.” — Azeem (16:33)
- On AI Engineering Discipline:
“[The chatbot] should never be able to say, I feel sad that you didn't talk to me yesterday. It should never be able to say, the thing that you said earlier hurt me…” — Mustafa (20:05)
- On the Need for Political, Not Just Technical, Solutions:
“This is a moment where it's better to be a bit careful… now the culture has to shift a little bit.” — Mustafa (23:26)
- On Empowering Public Experimentation:
“We need the wisdom of crowds here… people need to be able to use APIs, use open source models and pressure test these things…” — Mustafa (22:19)
- On The Coming Wave:
“This was the subject of my previous book, The Coming Wave. It was all about how proliferation was inevitable. And the hard task for us collectively is containment, both technical and sociopolitical…” — Mustafa (34:56)
- On Societal Learning and Inoculation:
“We become brittle and fragile when we withdraw and don't think about [AI risks]… We have to be comfortable talking about the likelihood of really dark outcomes…” — Mustafa (38:50)
- On Personal Tinkering as Protection:
“The way you address the psychosis risk is you get your own personal agency relative to these systems—by, at the very minimum, fine tuning their prompting and then further and further using them as tools…” — Azeem (44:58)
- On the Next Big Leap:
“The next phase is social intelligence… the model can work with other AIs, orchestrate a bunch of subagents… also work in a team of other humans and know when to proactively intervene and do useful things, not tread on people's toes…” — Mustafa (47:10)
Important Segment Timestamps
- 00:30: Introduction: Agreement on exponential change, why expectations are shifting.
- 02:29: Consciousness debate—why Mustafa disagrees with Geoff Hinton.
- 05:31–08:14: How AI “hacks” empathy; why AI expressions of feeling are simulations.
- 09:49: New hyperobject: AI as fourth class of object.
- 13:53: Utility is why AI is being adopted so fast.
- 16:33: The “wicked problem” of market-driven empathy hacking AIs.
- 19:56–21:23: Product design: drawing the empathy/personality boundary.
- 23:26: The call for interventionist government.
- 26:44: Open source proliferation and regulatory challenges.
- 29:41–31:10: Red lines: AI in romance/politics/electioneering.
- 34:56–36:30: Containment vs. general intelligence.
- 38:50: Exposure as inoculation against AI psychosis.
- 41:57–44:00: Personal stories of “vibe coding” with AI.
- 44:58: Personal agency as bulwark.
- 47:10: The next big leap: social intelligence for AI.
Conclusion
This conversation deftly navigates the technical, ethical, and societal margins where artificial intelligence meets the human imagination. Mustafa Suleyman brings a sober but pragmatic lens, emphasizing the need for clear boundaries, public engagement, and regulatory courage as AI moves from curiosity to bedrock infrastructure.
Both Azeem and Mustafa conclude that our best protection is to experiment, tinker, and increase public AI literacy—transforming ourselves from passive users into empowered, critical makers. As AI's social intelligence deepens, and as the cost of intelligence drops, the need for vigilance, adaptability, and a clear-eyed, humanist perspective is more urgent than ever.
