Episode Overview
Title: AMD CEO Lisa Su & OpenAI President Greg Brockman Talk New Partnership
Date: October 7, 2025
Podcast: Bloomberg Talks
Main Theme:
This episode features a deep-dive conversation with AMD CEO Lisa Su and OpenAI President Greg Brockman about their landmark partnership to deploy 6 gigawatts of AMD GPUs for AI compute. They discuss the size and impact of the deal, how it will fuel the next phase of AI development (with an early focus on inference), the evolving landscape of compute infrastructure, implications for the AI industry, and collaboration strategies involving major cloud providers and the broader supply chain.
Key Discussion Points & Insights
1. The Scale and Significance of the Partnership
- Deal Overview:
- AMD and OpenAI have entered a definitive agreement for OpenAI to deploy 6 gigawatts of AMD GPUs.
- OpenAI will receive up to 160 million AMD shares in tranches, contingent on operational and financial milestones.
- The agreement is expected to result in "tens of billions of dollars in revenue" for AMD ([00:30]).
- Milestones:
- Initial target is 1 gigawatt, scaling up to 6 gigawatts over several years.
- Focus is initially on inference, as opposed to training, for OpenAI’s massive user base.
- Industry Impact:
- Lisa Su called the deal a "huge milestone for AMD" and for the broader AI ecosystem, reflecting the massive and growing demand for compute power:
“When you get right down to it, you need more AI compute. Compute is the foundation for all of the intelligence we can get from AI.” — Lisa Su ([01:10])
2. AI Compute Demand and Bottlenecks
- Exponential Demand:
- OpenAI faces real limits in launching new features/products due to a shortage of computation.
- ChatGPT now serves “800 million weekly active users”—a meteoric rise in less than three years ([02:08]).
“We are in a position where we cannot launch features, we cannot launch new products simply because of lack of computational power.” — Greg Brockman ([02:20])
- Economic & Strategic Context:
- Brockman foresees a looming "compute desert":
“...we're very much heading to a world by default that I think looks like a compute desert. Right. That there's just not enough compute to go around. And so we're trying to build as much as possible as quickly as possible.” — Greg Brockman ([02:36])
3. Technical & Operational Aspects of the Rollout
- Product Focus:
- Deployment will begin with AMD’s next-generation MI450 chip.
- OpenAI is committing to AMD Instinct for large-scale inference—as the biggest public customer to date ([03:08]).
“This is certainly the largest deployment that we have announced by far... These types of partnerships take years to really get comfortable with the idea that we're going to go all in together.” — Lisa Su ([03:08])
- Implementation Footprint:
- Compute will be distributed across multiple data centers, cloud providers (including a deal with Oracle), and locations globally.
- Lisa Su: “For this amount of compute, it's going to have to be in a lot of different places... multiple locations, I would imagine, multiple providers to really get this online as fast as possible.” ([05:16])
4. Funding and Financing Structures
- Financing Approach:
- Rapid growth in AI revenue justifies aggressive investment.
- OpenAI is considering all financing options: equity, debt, creative structures.
- The deal is structured so that AMD issues OpenAI stock only as OpenAI delivers gigawatt capacity—a risk-mitigation mechanism.
“...as a company that is trying to move as fast as we can, we look at everything, right? ... equity, debt, creative ways of financing.” — Greg Brockman ([06:16])
- Mutual Confidence:
- Lisa Su assured shareholders:
“This deal is a win for AMD, it's a win for OpenAI, and it's a win for our shareholders... It's about who has the most compute and how fast can we get it online.” ([07:22])
5. Supply Chain, Manufacturing, and Energy
- Supply Chain Security:
- Building as much capacity as possible in the U.S. is a major priority, but international sites are possible too.
- AMD maintains deep partnerships with TSMC for chip manufacturing and closely manages its supply chain ([09:24], [08:32]).
“This is the U.S. stack. We want to have as much of it in the US as possible.” — Lisa Su ([09:24])
“...computational power is going to become this national security strategic resource.” — Greg Brockman ([08:32])
- Power & Sustainability:
- Brockman emphasizes the need for far more energy—including the role of nuclear—to keep the AI boom powered.
- The entire compute supply chain, from energy production to chip fabrication, must scale up rapidly ([04:17]).
6. Competition & Strategic Positioning
- Nvidia vs. AMD:
- OpenAI’s deal with AMD is incremental, not a replacement for Nvidia partnerships.
- For AI inference, AMD’s new chips offer “specific benefits” and diversify capabilities.
“Getting AI training to work is a huge, huge amount of lift; that's something we've really only done the work for Nvidia, but for inference, that's something that's much more... easier barrier to entry there.” — Greg Brockman ([10:22])
- Market Opportunity:
- Lisa Su projects the total addressable market for AI accelerators will exceed $500 billion.
- The partnership signals validation and competitive positioning for AMD.
“There's so much need for compute... This is a huge pie and you're going to see the need for more plus players coming into it...as much as we love the work with OpenAI, we're working with a lot of other customers as well.” — Lisa Su ([14:15])
7. The Future: Productivity and Economic Transformation
- AI as a Productivity Engine:
- Greg Brockman describes an emerging world where access to compute equals productivity:
“We're heading to a world where if you can have ten times as much AI power behind you, you will probably be ten times more productive.” ([12:51])
- Close financial and strategic ties between AI companies and hardware vendors (like AMD and OpenAI) will become increasingly common.
Catalyst for Industry:
- Both guests frame this as a catalyst for the entire ecosystem, accelerating cloud and AI infrastructure investment.
“We're already working with a number of cloud service providers who are also very active on our technology and I think this is a great catalyst to get the industry to build faster.” — Lisa Su ([11:23])
Notable Quotes & Memorable Moments
- Lisa Su on the magnitude of the deal:
“6 gigawatts of AI compute... a big deal for us, for our shareholders, for our teams, and... for the partnership and the overall ecosystem.” ([01:10])
- Greg Brockman on demand:
“The world continues to underestimate the amount of demand for AI compute... We’re trying to build as much as possible as quickly as possible.” ([02:08])
- Greg Brockman on compute as economic fuel:
“We think this is important for the economy, we think this is important for the nation, we think this is important for humanity.” ([04:17])
- Lisa Su on partnership depth:
“This is an all in partnership in terms of building out the AI compute that OpenAI needs for everything that they're offering to the world.” ([03:08])
- Greg Brockman on the productivity leap:
“If you can have ten times as much AI power behind you, you will probably be ten times more productive.” ([12:51])
- Lisa Su’s market analysis:
“Just the accelerator TAM being over $500 billion in TAM over the next few years. Some might say maybe I was a little conservative...” ([14:15])
Important Timestamps
- 00:30 — Host sets up the context, announces the deal and focus on inference.
- 01:10 — Lisa Su: "This is a huge milestone for AMD..."
- 02:08 — Greg Brockman: "We are in a position where we cannot launch features... because of lack of computational power."
- 03:08 — Lisa Su: "This is certainly the largest deployment that we have announced by far..."
- 04:17 — Greg Brockman on industry collaboration, power, and cloud partners.
- 05:16 — Lisa Su: On deployment locations and multi-provider strategy.
- 06:16 — Greg Brockman: Describes creative financing approaches.
- 07:22 — Lisa Su: "This deal is a win for AMD...and our shareholders."
- 08:32 — Greg Brockman: On US and global supply chain, and compute as a strategic resource.
- 09:24 — Lisa Su: “We're absolutely prioritizing building in the United States...”
- 10:22 — Greg Brockman: On AMD vs. Nvidia for inference.
- 11:23 — Lisa Su: On the rollout timeline and importance of first-gigawatt deployment.
- 12:51 — Greg Brockman: On AI productivity and compute scarcity.
- 14:15 — Lisa Su: On market size and industry outlook.
Summary
In this high-impact conversation, AMD and OpenAI leaders reveal the scale, urgency, and strategic significance of their compute partnership, providing rare insight into the global race to build AI capacity. With tens of billions of dollars at stake, AI progress now hinges on compute supply chains, energy resources, and deep industry alliances, a shift poised to drive another leap in economic productivity and reshape the technology landscape.
