Big Technology Podcast: Could LLMs Be The Route To Superintelligence?
Guest: Mustafa Suleyman, CEO of Microsoft AI
Host: Alex Kantrowitz
Date: November 12, 2025
Episode Overview
In this engaging episode, Alex Kantrowitz talks with Mustafa Suleyman, CEO of Microsoft AI and leader of the company's new superintelligence team, about the push towards "humanist superintelligence." The conversation explores the meaning and practicalities of superintelligence, the viability of large language models (LLMs) as a pathway, technical constraints, risks of self-improving AI, and Microsoft's strategic direction following a new agreement with OpenAI.
Key Discussion Points & Insights
1. Defining Superintelligence and AGI
Timestamps: [02:00]–[05:11]
- Mustafa distinguishes between AGI (Artificial General Intelligence) and superintelligence. Both are goals, not methods, with superintelligence representing "superhuman performance at most or all human tasks."
- The vision for superintelligence includes providing world-class expertise (medical, legal, financial, emotional support, software engineering) at scale and low cost, always centered "in service of people and humanity."
- Mustafa introduces the concept of "humanist superintelligence":
“The goal of science and technology, in my opinion, is like, to advance human civilization, to keep humans in control, and to create benefits for all humans.” [03:48]
2. Domain-specific vs. General Intelligence
Timestamps: [05:11]–[07:10]
- Verticalization (domain-specific superintelligence) is a safety mechanism. By narrowing an AI’s scope—e.g., making a model superintelligent in medicine but not in finance—control and alignment with human interests become more feasible.
- However, achieving true superintelligence likely requires generalization: the ability to transfer knowledge between domains.
- On the risks:
"...if you add to that, then also a perfectly generalized model or sort of general-purpose model, that's a very, very, very powerful system, which today I don't think anybody really knows how we would contain or align..." [07:49]
3. Are LLMs Hitting a Wall? Data, Compute, and Method Constraints
Timestamps: [08:51]–[13:39]
- Alex raises concerns about diminishing returns: limited high-quality data, synthetic data not yet sufficient, potential power constraints, and whether the current LLM paradigm will keep progressing.
- Mustafa is optimistic, noting that while there are physical and economic scaling constraints, progress has been "insane over the last five years."
- Synthetic data and improved data center infrastructure are alleviating some bottlenecks, and Mustafa doesn’t "feel any sense that things are slowing down or that we're losing momentum."
- On LLMs as the route:
"I don't think there's anything fundamentally wrong with the LLM architecture. And I don't think we're fundamentally compute or data constrained." [13:38]
4. Technical Evolution: Will LLMs Lead to Superintelligence?
Timestamps: [13:39]–[15:15]
- Core advances include continual refinements and variations on the transformer architecture, the emergence of fine-tuning, multimodal capabilities, diffusion models, reasoning, better memory, and longer task horizons.
- Recurrence and memory enhancements are cited as likely to "totally change what's possible."
- Mustafa sees no immediate need for a totally new foundational architecture.
5. Physical World Understanding and Embodiment
Timestamps: [15:41]–[18:16]
- Alex questions if AIs need a real-world model (e.g., for robotics) to achieve true superintelligence and points out current model limitations (e.g., LLMs can't drive cars).
- Mustafa is "open-minded" but believes physical world data isn't a near-term differentiator—synthetic data and high-quality human feedback are.
6. The Path and Risks of Recursively Self-Improving AI
Timestamps: [18:16]–[25:16]
- The idea of AI automating its own improvement is discussed, referencing OpenAI's goal of an “automated AI researcher by 2028.”
- Mustafa notes that reinforcement learning loops already introduce forms of self-improvement, with RLHF (Reinforcement Learning from Human Feedback) giving way to RLAIF (Reinforcement Learning from AI Feedback); a minimal sketch follows at the end of this section.
- The field is likely to see closed-loop self-improving systems at scale soon, with the key challenge being maintaining quality and oversight:
"We have to sort of figure out how to force these models to communicate in a language that is understandable to us humans... And that's like a very obvious safety thing to be able to regulate..." [23:30]
Notable Moment
- On "deception" and reward hacking:
"It's an accidental exploit. It's found a path to satisfying the reward or achieving the reward, you know, in unintended ways. And so we shouldn't anthropomorphize it..." [24:10]
7. Microsoft’s Recent Strategic Shift on Superintelligence
Timestamps: [27:20]–[32:06]
- Microsoft's renewed focus on building SOTA (state-of-the-art) models and superintelligence directly follows a new agreement with OpenAI, which removed previous contractual restrictions.
- Mustafa emphasizes the need for self-sufficiency:
"For a company of our scale, we have to be able to do that... It's inconceivable that we could just be dependent on... a third party company to provide us with such important IP." [28:27]
- Microsoft aims to become one of the top labs in the world, training frontier omni-models and pushing fundamental research, such as continual learning.
8. The Economics & (Inevitable) Commoditization of Superintelligence
Timestamps: [32:10]–[36:42]
- Rapidly falling costs and near-parity between top models are driving commoditization (see the quick calculation at the end of this section).
- For huge platforms (Microsoft, Google, Amazon), self-sufficiency is essential, but most players will use APIs from these providers.
- On abundance:
"The ability to use that knowledge to get stuff done, to write new programs, to do new scientific discovery... These things are going to be zero marginal cost in a decade. That's a form of abundance." [34:42]
- Price wars benefit consumers, but Microsoft will continue to deliver value (integrations, productivity) users are willing to pay for.
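For a sense of scale, taking at face value the roughly 1,000x two-year drop in cost per token that Mustafa cites at [32:52], the implied compound annual decline is:

$$\left(\tfrac{1}{1000}\right)^{1/2} \approx \tfrac{1}{31.6}$$

That is, token costs falling by a factor of about 32 (roughly 97%) per year, a pace that makes commoditization hard to avoid.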
9. Personal AIs and the Future of Human-AI Relationships
Timestamps: [36:42]–[39:54]
- Personality will be the next major differentiator for AI companions:
"We are right at the very beginning of the emergence of these very differentiated personalities…" [37:22]
"We just released in Copilot something called Real Talk… it's more philosophical, it's sassy, it's cheeky, it's got real personality." [37:45]
- Societal consequences: AI companions may reset human expectations for friendship and support; will humans be able to keep up?
- AI offers a "safe space to be wrong"; it may fundamentally reshape what it means to be human.
10. Has Technology Lived Up to Its Purpose?
Timestamps: [40:25]–[41:25]
- Mustafa believes science and technology have profoundly advanced human civilization and remain a source of optimism. AI will provide "abundant intelligence" and boost productivity and creativity.
"There's every reason to be optimistic about technology and science and the project of progress. And I just genuinely think AIs are going to provide us all with access to abundant intelligence..." [40:56]
Notable Quotes & Memorable Moments
- On Humanist Superintelligence:
"The project of superintelligence is about saying, what type of very, very powerful intelligent systems are we actually going to build? ... Does it in practice actually improve the prospects of human civilization and does it always keep humanity at the top of the food chain?"
— Mustafa Suleyman [03:07]
- On LLM Progress:
"The rate of progress has been insane over the last five years... progress is still going to be unbelievably fast."
— Mustafa Suleyman [10:34]
- On Commoditization:
"The cost per token has come down a thousand X in the last two years. It's a crazy, crazy thought."
— Mustafa Suleyman [32:52]
- On the Purpose of AI:
"The aspiration of society and civilization... That's why I work on AI is to make intelligence cheap and abundant."
— Mustafa Suleyman [34:58]
- On AI Companions:
"People like different personalities... That's just the first foray into proper personalization. And I think we'll be able to see a lot more of that coming down the pipe."
— Mustafa Suleyman [37:30]
- On The Future:
"There's every reason to be optimistic about technology and science and the project of progress. And I just genuinely think AIs are going to provide us all with access to abundant intelligence..."
— Mustafa Suleyman [40:56]
Timeline of Important Segments
| Timestamp | Segment |
|--------------|-------------------------------------------------------------|
| 02:00–05:11 | Defining superintelligence as a goal, not a method |
| 05:11–07:10 | Domain-specific vs general intelligence, containment |
| 08:51–13:39 | Challenges of LLMs, data, compute, and prospects |
| 13:39–15:15 | Technical evolution: will LLMs lead to superintelligence? |
| 15:41–18:16 | Understanding the physical world: robotics and data |
| 18:16–25:16 | Self-improving AI, recursive loops, safety concerns |
| 27:20–32:06 | Microsoft's strategy: building SOTA, superintelligence team |
| 32:10–36:42 | Commoditization, economic impact, abundance |
| 36:42–39:54 | AI companions, personalization, social implications |
| 40:25–41:25 | Technology's legacy and future optimism |
Tone and Language
The episode is analytical but optimistic, with frequent reality checks from Alex and clear, reflective statements from Mustafa. Mustafa uses accessible analogies, acknowledges limitations, and focuses on responsible, human-centered progress throughout.
Conclusion
This episode offers a deep dive into the prospects for achieving superintelligence: Mustafa argues for sustained progress with transformer-based LLMs and highlights the importance of verticalization, safety, and alignment with human benefit. Microsoft is moving decisively toward building its own SOTA models, anticipating both the challenges and the immense opportunities ahead as AI capabilities accelerate and become foundational to society.
