Welcome to Thoughts on the Market. I'm Sean Kim, head of Morgan Stanley's Europe and Asia Technology team. Today, a foundational shift in the development of AI and its broad market implications. It's Tuesday, May 5th at 3pm in London.

Think about the last time you asked a chatbot to write a summary or a draft, or maybe answer a query. It was probably useful, but you were also still driving the interaction: asking, refining, copying, checking and moving the work forward. Now imagine a system that does not just respond, but acts. It remembers what you asked last week, understands your preferences, works across digital tools, plans a workflow and adapts as circumstances change. That is a shift from gen AI to agentic AI, from AI that helps with thinking to AI that helps with doing.

GenAI is mostly passive. It takes a prompt and produces an answer. Agentic AI is active: less a copilot for one task than an autopilot for multi-step workflows. The distinction is key because computing requirements are changing. In gen AI, large language models and GPUs handle much of the thinking. GPUs, or graphics processing units, process many calculations in parallel, making them central to modern AI models. In agentic AI, the CPU becomes more important. CPUs, or central processing units, coordinate tasks and connect systems to the broader digital infrastructure.

Agentic AI also depends on three layers: the brain, or the large language model; orchestration, where the CPU manages the doing; and knowledge, which is memory. Memory may be the most important layer. An agent that knows your preferences, documents, tone and task history becomes more useful over time. That creates a context flywheel. The more context it collects, the more personalized it becomes and the harder it is to leave. Typically in computing we think of memory as storage. We need to rethink this. Memory is also continuity.
When an AI system can use past experiences, memory becomes a long-term state, shared knowledge and behavioral grounding. And that matters because LLMs have fixed context windows. Once a conversation exceeds that window, all that content falls off. For simple questions that may be fine, but for a coding agent working across a large code base over days or weeks, it is a major limitation. Serious work requires persistent memory, short-term orientation and active retrieval: remembering prior decisions, understanding changed files and finding relevant code without the user pointing to every dependency.

For investors, the implication is clear. Agentic AI changes the bottlenecks. We see CPUs as a new bottleneck, with memory seeing the highest content increase. We estimate as much as 60%, or $60 billion, of incremental CPU total addressable market by 2030 within a total CPU market of more than $100 billion. We also estimate up to 70% of incremental DRAM bit shipment tied to this theme. That makes us more positive on the supply chain, including memory, foundry, substrate, CPU and memory interface, capacitors and CPU sockets. These areas benefit from content growth, pricing, power and capacity constraints into 2027 as AI moves from answering questions to taking actions. Investors should watch the infrastructure behind the shift, because in the agentic era, the next big AI leap may be less about the prompt and more about the processor.

Thanks for listening. If you enjoy the show, please leave us a review wherever you listen and share Thoughts on the Market with a friend or a colleague today.
The preceding content is informational only and based on information available when created. It is not an offer or solicitation, nor is it tax or legal advice. It does not consider your financial circumstances and objectives and may not be suitable for you.
Host: Sean Kim, Head of Morgan Stanley’s Europe and Asia Technology Team
Date: May 5, 2026
In this episode, Sean Kim discusses a foundational shift underway in artificial intelligence: the movement from generative AI—AI that answers questions when prompted—to “agentic AI,” which not only responds but also takes proactive, multi-step actions on behalf of users. Kim explores the technical distinctions, market implications, and investment opportunities arising from AI systems that are becoming not just thinking assistants, but doers—actively managing workflows, retaining contextual memory, and adapting to user preferences over time.
Generative AI (Gen AI)/Current Paradigm:
“GenAI is mostly passive. It takes a prompt and produces an answer.” (Sean Kim, 01:00)
Agentic AI/New Paradigm:
“Now imagine a system that does not just respond, but acts. It remembers what you asked last week, understands your preferences, works across digital tools, plans a workflow, and adapts as circumstances change. That is a shift from gen AI to agentic AI, from AI that helps with thinking to AI that helps with doing.” (Sean Kim, 00:33)
GenAI’s reliance on GPUs:
“GPUs or graphics processing units process many calculations in parallel, making them central to modern AI models.” (Sean Kim, 01:28)
Agentic AI’s increased dependence on CPUs:
“In agentic AI, CPU becomes more important… CPUs or central processing units coordinate tasks and connect systems to the broader digital infrastructure.” (Sean Kim, 01:37)
Memory as a Strategic Asset:
“Memory may be the most important layer... Memory is also continuity. When an AI system can use past experiences, memory becomes a long-term state, shared knowledge, and behavioral grounding.” (Sean Kim, 02:02, 02:32)
Challenges with LLMs:
“Once a conversation exceeds that window, all that content falls off. For simple questions that may be fine, but for a coding agent working across a large code base over days or weeks, it is a major limitation.” (Sean Kim, 02:36)
Active, Persistent Memory:
“Serious work requires persistent memory, short-term orientation and active retrieval: remembering prior decisions, understanding changed files and finding relevant code without the user pointing to every dependency.” (Sean Kim, 02:50)
Changing Bottlenecks:
“Agentic AI changes the bottlenecks. We see CPUs as a new bottleneck, with memory seeing the highest content increase.” (Sean Kim)
Financial Projections & Opportunities:
“We estimate as much as 60% or $60 billion of incremental CPU total addressable market by 2030 within a total CPU market of more than $100 billion.” (Sean Kim, 03:20)
“We also estimate up to 70% of incremental DRAM bit shipment tied to this theme.” (Sean Kim, 03:32)
“These areas benefit from content growth, pricing, power and capacity constraints into 2027 as AI moves from answering questions to taking actions.” (Sean Kim, 03:38)
Investor Takeaway:
“Investors should watch the infrastructure behind the shift, because in the agentic era, the next big AI leap may be less about the prompt and more about the processor.” (Sean Kim, 03:54)
On Agentic AI’s Potential:
“That creates a context flywheel. The more context it collects, the more personalized it becomes and the harder it is to leave.” (02:13)
On Memory’s Evolving Role:
“In computing we think of memory as storage... Memory is also continuity... shared knowledge and behavioral grounding.” (02:19–02:32)
On Market Impact:
“As AI moves from answering questions to taking actions... the next big AI leap may be less about the prompt and more about the processor.” (03:38–03:54)
Sean Kim details how AI’s evolution from reactive assistants to active agents changes both technological requirements and market opportunities. As agentic AI systems rise, CPUs and memory—especially persistent, retrievable memory—become critical. This transition is set to generate major growth in those sectors, presenting attractive prospects for investors in the AI supply chain. Ultimately, in Kim’s analysis, “the next big AI leap may be less about the prompt and more about the processor.”