OpenAI Podcast - Episode 8: OpenAI x Broadcom and the Future of Compute
Date: October 13, 2025
Host: Andrew Mayne
Guests:
- Sam Altman (OpenAI)
- Greg Brockman (OpenAI)
- Hock Tan (Broadcom)
- Charlie Kawwas (Broadcom)
Overview
This episode unveils a major partnership between OpenAI and Broadcom, centered on designing and deploying custom AI chips and integrated systems on a historic scale. The discussion dives into why this collaboration matters for the development of advanced AI—in terms of both technical breakthroughs and global impact—as well as how the partnership seeks to build the foundation for future AI models, including steps toward AGI (Artificial General Intelligence). The conversation is forward-looking, practical, and packed with insights on the scale, challenges, and philosophical implications of building out world-class AI infrastructure.
Key Discussion Points and Insights
1. Announcement and Scope of the Partnership
- OpenAI and Broadcom have been collaborating for 18 months on designing a custom AI chip and a full system optimized for OpenAI's workloads, with deployment beginning late next year.
- Scale of Ambition: The plan is to roll out 10 gigawatts of new compute—comparable to the largest industrial projects in history.
"You would say it's the biggest joint industrial project in human history."
—Sam Altman [00:15]
2. Vertical Integration: From Transistor to Output
- The collaboration isn’t limited to chip design; it spans vertically integrating from transistor-level design to full data center systems.
- Optimization Across the Stack: This approach enables significant efficiency gains, leading to faster, smarter, and more affordable models.
"We are able to think from etching the transistors all the way up to the token that comes out when you ask ChatGPT a question and design the whole system."
—Sam Altman [03:28]
- Demand Always Outpaces Supply: Even with a 10x efficiency gain, demand grows faster still.
"You optimize by 10x and there's 20x more demand."
—Sam Altman [03:28]
3. Why Custom Chips Now?
- Specific Workload Optimization: The unprecedented scale and specialized needs for both training and inference pushed OpenAI to collaborate directly on hardware.
- AI-Aided Chip Design: OpenAI deploys its own AI to optimize chip components, achieving significant area and efficiency improvements.
"We've been able to apply our own models to designing this chip, which has been really cool... the model comes up with its own optimizations."
—Greg Brockman [05:34]
- Historical Shift: OpenAI initially thought success was about algorithms, not compute. But scaling experiments proved otherwise (e.g., Dota 2 RL agents, scaling laws).
"The path to AGI is really about ideas... but the thing that we found was that we were getting the best results out of scale."
—Greg Brockman [15:23]
4. Infrastructure at Global Scale
- Comparisons to Railways and the Internet: Building this infrastructure is a global, decades-long effort. It's more than just hardware; it's enabling a new kind of utility for billions.
"We're defining civilization's next generation operating system."
—Charlie Kawwas [11:02]
"This is like railroad Internet... critical utility over time for 8 billion people globally."
—Hock Tan [11:41]
- Global Collaboration: The project requires coordination across many countries, companies, and sectors.
5. Technical Innovations & Roadmap
- XPU Development: Broadcom and OpenAI progressed from customizing accelerators for workloads to tackling network, scaling, and new forms of standardization.
- Multi-Dimensional Scaling: They’re pushing towards stacking chips in 3D and integrating optics for advanced intra-system communication.
"We're actually working together to ship multiple of these in a two-dimensional space ... The last step we're actually also talking about is now we're going to bring optics into this...100 terabytes of switching with optics integrated all into the same chip."
—Charlie Kawwas [26:09]
- Continuous Advancements: Regular software-hardware cycles are expected, with new chips powering each generation of models (GPT-5, 6, 7, etc.).
6. The Endgame: Intelligence per Watt
- Maximizing Intelligence Output: The ultimate goal is to wring as much intelligence as possible from each unit of energy.
"What we want is the most intelligence we can get out of each unit of energy, because that will become the gate at some point."
—Sam Altman [17:26]
- Compute Abundance vs. Scarcity: There’s a strong emphasis on democratizing access to compute so every individual and organization can benefit.
"What we really want is to be a world where just if you have an idea you want to create, you want to go build something that you have the compute power behind you to make it happen."
—Greg Brockman [28:06]
Memorable Quotes & Notable Moments
- "We're defining civilization's next generation operating system."
—Charlie Kawwas [11:02]
- "Compute is the gating factor on this journey towards superintelligence."
—Hock Tan [14:04]
- "If you simplify what we do to... melt sand, run energy through it, and get intelligence out the other end..."
—Sam Altman [17:08]
- "Intelligence is the fundamental driver of economic growth, of increasing the standard of living for everyone."
—Greg Brockman [23:03]
- "If you do your own chips, you control your destiny."
—Hock Tan [18:02]
Important Timestamps
- 00:37: Official announcement of the OpenAI-Broadcom partnership and project scale.
- 03:28: The importance of vertical integration and its efficiency benefits.
- 05:34: How collaborating with Broadcom and using OpenAI models enabled novel chip design optimizations.
- 09:46: Historical context—comparison to large industrial projects.
- 11:41: Discussion of AI infrastructure as a new kind of global, critical utility.
- 14:04: Technical specifics of workload-optimized chips for training vs. inference.
- 15:23: OpenAI’s historical shift from algorithms to scale and compute.
- 17:08: “Melting sand to get intelligence”—the ultimate goal of maximizing intelligence per unit energy.
- 26:09: Technical roadmap, including chip stacking and integrated optics.
- 27:08: Timeline—first deployment by end of next year, with rapid scaling over following three years.
- 28:06: The vision for compute abundance and creative empowerment.
Conclusion
OpenAI and Broadcom’s partnership marks a new era in AI infrastructure, culminating in custom, vertically integrated compute systems—pushing the limits of both scale and efficiency. With ambitions to serve billions and underpin the next generation of AI models, both parties see this endeavor as essential, not just for their organizations, but as foundational for society’s future productivity and opportunity. The mood is both urgent and optimistic; advancing compute is seen as key to making AI’s benefits accessible to all.
For anyone interested in the intersection of advanced AI, hardware innovation, and the massive societal shifts being set in motion, this is a milestone episode, rich in technical depth and vision for the future.
