Transcript
A (0:00)
Welcome to Thoughts on the Market. I'm Michael Zezas, Morgan Stanley's deputy head of Global Research.
B (0:05)
And I'm Stephen Berg, Global Head of Thematic and Sustainability Research.
A (0:09)
Today: is AI becoming the new anchor of geopolitical power? It's Friday, February 27th, at noon in New York. So, Stephen, at the recent India AI Impact Summit, the US laid out a vision to promote global AI adoption built around what it calls AI sovereignty, that is, strategic autonomy through integration with the American AI stack. But several nations from the Global South, and possibly parts of Europe, appear skeptical of dependence on proprietary systems, citing concerns about control, explainability, and data ownership. And it appears what's at stake isn't just technology policy; it's the future structure of global power, economic stratification, and whether sovereign nations can realistically build competitive alternatives outside the US and China. Now, you were there, and you've been describing a growing chasm in the AI world, in terms of access and strategy, between the US and much of the Global South, and possibly Europe. So from what you heard at the summit, what are the core points of disagreement driving that divide?
B (1:17)
There definitely are areas of agreement, and we've seen a couple of high-profile agreements reached between the US government and the Indian government just in the last several days. So there certainly is a lot of overlap. I'd point to the Pax Silica agreement, which is so important for securing supply chains and securing access to AI technology. I think the focus for India is, as you said, explainability and open access. I was really struck by Prime Minister Modi's focus on ensuring that all Indians have access to AI tools that can help them in their everyday lives. A really tangible example that stuck with me is someone in a remote village in India who has a medical condition, and there's no doctor or nurse nearby. Using AI, they can take a photo of the condition, receive a diagnosis, receive support, and figure out what the next step should be. That's very powerful. So I'd say open access and explainability are very important. Now, the American hyperscalers are very much trying to serve the Indian market, and really the objectives of the Indian government. And so there are versions of their models with open weights that are being made freely available, to health agencies in India and to the Indian government, as examples. So there is an attempt to serve a number of objectives. But it's around this key issue of open access and explainability that I do see a tension.
A (2:44)
So let's talk about that a little more, because one of the concerns raised is this idea of being captive within proprietary large language models. Maybe that includes the risk of having to pay more over time, or of losing control of citizen data. But at the same time, you've described some real benefits to AI that these countries want to capture. So what, effectively, is the tension between being captive to a proprietary model and the trade-off of pursuing open and free models instead? Is there a major quality difference, and is that trade-off acceptable?
