Podcast Summary: Thoughts on the Market — "AI as New Global Power?"
Date: February 27, 2026
Hosts: Michael Zezas (Deputy Head of Global Research, Morgan Stanley), Steven Berg (Global Head of Thematic and Sustainability Research, Morgan Stanley)
Main Theme:
Exploration of how artificial intelligence (AI) is rapidly evolving into a core pillar of geopolitical power, drawing parallels to traditional military and economic influence. The hosts analyze how US strategic initiatives around global AI adoption, particularly following the recent India AI Impact Summit, are reshaping alliances, economic stratification, and the future architecture of global technology and sovereignty.
1. Main Episode Overview
The discussion centers on:
- The US’s push for global AI adoption, promoting “quantum real AI sovereignty” via integration with the American AI stack.
- Skepticism and concerns from Global South nations and Europe regarding reliance on US proprietary AI models.
- Core trade-offs between open vs. proprietary models in terms of sovereignty, capability, and societal benefit.
- The broader geopolitical ramifications of AI as a strategic anchor of national and global power.
2. Key Discussion Points and Insights
A. US Vision for Global AI and International Reception
[00:09–01:17]
- At the India AI Impact Summit, the US showcased its strategy for global AI adoption, emphasizing “quantum real AI sovereignty,” which means strategic autonomy via US AI frameworks.
- Several Global South nations and some European states are wary of sole dependence on proprietary (mainly US-centric) AI systems, concerned about issues of sovereignty, control, explainability, and data ownership.
- “What’s at stake isn’t just technology policy, it’s the future structure of global power, economic stratification and whether sovereign nations can realistically build competitive alternatives outside the US and China.” (Zezas, 00:26)
B. Areas of Agreement and Tension at the Summit
[01:17–02:44]
- US and India reached important agreements, such as the “Pax Silica” deal, securing access to supply chains and AI tech.
- India’s focus: open access, explainability, and societal benefit, especially for underserved populations.
- Notable example: AI enabling healthcare access in remote areas by diagnosing conditions via smartphone photos.
- “I was really struck by Prime Minister Modi’s focus on ensuring that all Indians have access to AI tools that can help them in their everyday life... That’s very powerful.” (Berg, 01:40)
- US hyperscalers are making efforts to align with Indian requirements (e.g., open access models for public health).
- The core tension: wanting the benefits of advanced AI without losing national autonomy or being beholden to proprietary systems.
C. Proprietary-Model Captivity vs. Open-Source Trade-offs
[02:44–05:03]
- Concerns about being “captive” to proprietary LLMs (large language models)—including potential costs and loss of control over data.
- Quality gap: US proprietary models are advancing faster due to massive investments in compute—“the big five American firms have assembled about 10 times the compute to train their current LLM compared to their prior LLMs... If the scaling laws hold, then a 10x increase... should result in models that are about twice as capable. Now just let that sink in for a minute.” (Berg, 03:38)
- Open models may struggle to keep pace due to limited compute and data access.
- Policy implication: India appears committed to exploring both approaches, notably prioritizing broad societal impact.
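Berg’s arithmetic (a 10x jump in training compute yielding models “about twice as capable”) is consistent with a power-law scaling relation of the form capability ∝ compute^α with α = log10(2) ≈ 0.30. A minimal sketch of that implied relation follows; the power-law form and the exponent are illustrative assumptions inferred from the quote, not a stated Morgan Stanley model:

```python
import math

def capability_multiplier(compute_multiplier: float) -> float:
    """Capability gain implied by a power law calibrated so that
    10x compute yields 2x capability (Berg's figures, 03:38).

    Assumes capability ~ compute ** alpha with alpha = log10(2) ~= 0.301.
    This functional form is an assumption for illustration only.
    """
    alpha = math.log10(2)  # chosen so capability_multiplier(10) == 2
    return compute_multiplier ** alpha

# The "big five" reportedly assembled ~10x the compute of the prior generation:
print(round(capability_multiplier(10), 2))  # 2.0, the ~2x figure quoted
# A lab with only 2x more compute would, under the same law, expect far less:
print(round(capability_multiplier(2), 2))   # ~1.23
```

The sketch also illustrates the open-model dilemma raised in this segment: under a sublinear power law, labs with a fraction of frontier compute see disproportionately smaller capability gains.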
D. AI as a Geopolitical Asset
[05:03–07:50]
- The US is leveraging advanced AI to attract and bind partners, much like its historical use of military superiority—AI becomes a strategic asset in alliance building and global negotiation.
- The “Pax Silica” agreement is part of the US approach to solidify AI-centric alliances while reducing dependency on China.
- AI strategy is increasingly fused with national security, trade policy, and foreign relations.
- “...the US is talking about AI and developing AI as an anchor asset to its power, in a way that military power has been that anchor asset for much of the post World War II period.” (Zezas, 07:25)
E. Durability of the US Strategy and Emerging Trade-offs
[07:50–08:40]
- The durability of the “AI umbrella” hinges on whether nations accept dependence on advanced but proprietary US AI models versus opting for open-source, but possibly less advanced, local models.
- No clear commitments yet from India or others regarding how far they’ll align with the US proprietary model approach.
F. Standards, Governance, and US Leverage
[08:40–10:06]
- The US explicitly rejects centralized global AI governance, advocating for national control in line with “domestic values.”
- Potential for the US to use its legal and economic leverage over AI in a similar manner to how it uses the US dollar for sanctions and influence.
- “...if there's a use case that comes out of it that they find is against US values... similar in some way to how the US dollar, being the predominant currency... gives the US degrees of freedom to impose sanctions...” (Zezas, 09:19)
G. AI as Strategic Infrastructure—What Investors Should Watch
[10:06–12:37]
- AI is poised to become a core component of global strategic infrastructure, influencing geopolitical competition and economic differentials.
- Key investor signal: The pace of progress among leading LLMs, especially potential breakthroughs by the “big five” US firms anticipated in the coming months.
- “...what really got my attention was about a week ago, one of the LLMs broke that trend in a big way to the upside... the best American model that was recently introduced was more like 15 [hours of independent operation]... that's a big deal.” (Berg, 11:37)
- The race between open-source and proprietary models—and their socioeconomic impacts—will become especially pertinent through spring and summer 2026.
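Berg’s trend claim (quoted in full in the next section: task complexity “approximately doubles” every seven months) can be sketched as a simple exponential projection. The baseline numbers below are illustrative assumptions, not figures from the episode:

```python
def trend_horizon(baseline_hours: float, months_elapsed: float,
                  doubling_period_months: float = 7.0) -> float:
    """Task horizon implied by 'complexity doubles every ~7 months'.

    Exponential extrapolation of the trend Berg describes; baseline and
    elapsed-time inputs are hypothetical, for illustration only.
    """
    return baseline_hours * 2.0 ** (months_elapsed / doubling_period_months)

# Assuming (hypothetically) a 1-hour horizon at t=0, the trend alone
# predicts a doubling by month 7 and a quadrupling by month 14:
print(trend_horizon(1.0, 7))   # 2.0
print(trend_horizon(1.0, 14))  # 4.0
```

Against this baseline, Berg’s point is that the latest model’s ~15-hour horizon landed well above the trend line rather than on it, which is why he flags it as the key signal for investors to watch.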
3. Notable Quotes & Memorable Moments
- On the strategic shift in power:
  “The US is talking about AI and developing AI as an anchor asset to its power, in a way that military power has been that anchor asset for much of the post World War II period.”
  (Michael Zezas, 07:25)
- On India’s societal focus:
  “Prime Minister Modi’s focus on ensuring that all Indians have access to AI tools that can help them in their everyday life... That’s very powerful.”
  (Steven Berg, 01:40)
- On exponential model improvements:
  “Every seven months the complexity of what these models are able to do approximately doubles. ...one of the LLMs broke that trend... The best American model that was recently introduced was more like 15 [hours of agentic operation]. That’s a big deal...”
  (Steven Berg, 11:21–11:37)
- On the open vs. proprietary divide:
  “The pure technologist would say that these proprietary models are going to increase in capability much faster than the open source models.”
  (Steven Berg, 04:31)
- On American strategy and legal power:
  “If there's a use case... that they find is against US values... the US dollar, being the predominant currency... gives the US degrees of freedom to impose sanctions...”
  (Michael Zezas, 09:19)
4. Timestamps for Key Segments
- US AI vision and global skepticism: 00:09–01:17
- Summit agreements and India’s priorities: 01:17–02:44
- Open vs. proprietary model trade-offs: 02:44–05:03
- AI as asset in geopolitical strategy: 05:03–07:50
- Durability of US strategy: 07:50–08:40
- AI governance and standards debate: 08:40–10:06
- AI as strategic infrastructure/investor guidance: 10:06–12:37
5. Tone and Style
- Analytical and forthright, with both hosts consistently focusing on the tangible policy, technological, and investment implications of the AI power shift.
- The discussion is grounded, deeply informed by recent summit developments, and mindful of both technical progress and real-world political trade-offs.
End Summary
