TBPN: FULL INTERVIEW: Why I Think Nvidia Is Perfectly Positioned In The AI Race
Date: March 30, 2026
Hosts: John Coogan & Jordi Hays
Guest: Tae Kim
Episode Overview
In this engaging and highly technical episode, John Coogan and Jordi Hays sit down with technology analyst Tae Kim to dissect the volatility in Nvidia’s stock and why the company remains at the epicenter of the AI hardware boom. The discussion focuses on Nvidia’s latest moves in chips, supply chain strategy, competition, global geopolitics, and surging AI demand. Listeners get deep, firsthand insights drawn from conversations with top engineers and tech executives, plus a behind-the-scenes view of how Nvidia is evolving to capture the next wave of AI innovation.
Key Discussion Points & Insights
1. Nvidia’s Current Market Position & “Doom and Gloom” Narrative
- Stock Volatility Context: The hosts discuss Nvidia’s recent 21% drop from its 52-week high and the media’s “peak AI” concerns.
- Tae Kim dismisses the negative sentiment as cyclical, comparing it to past market panics and emphasizing business fundamentals:
“It reminds me a lot of about a year ago... People think it might be the peak. And then we have the Iraq [sic, Iran] war and oil up here. Feels like the same thing over.” — Tae Kim (01:28)
- Market Overreaction: External factors like oil prices and political news (Trump, Iran conflict) are cited as driving investor anxiety rather than any real threat to Nvidia’s fundamentals.
“Chip sector's flat. Flat on the year. Nvidia is down 10%. The actual fundamentals of the business [are] flying.” — Tae Kim (02:25)
2. Explosive Demand for AI Compute & Nvidia’s Strategic Advantage
- Post-GTC Innovations:
- Tae Kim reports “crazy inference demand and AI compute shortages” across all of big tech, with engineers describing GPU scarcity so severe that buyers resort to sneaker-bot-style scripts.
“I met with Ian Buck, dozens of engineers at Meta, Google, Nvidia. All of them are seeing crazy inference demand and AI compute shortages.” — Tae Kim (03:14)
- Developers are scrambling to obtain GPUs, and even “bots to pick up any B200 GPU” are being used (03:58).
- Jensen Huang’s Foresight:
- Nvidia locked up memory and component supplies ahead of time, foreseeing AI agent demand.
“Jensen, you know, he's very prescient. He probably saw this demand months away. He locked up all the supply agreements... ahead of time.” — Tae Kim (04:14)
- Groq Acquisition & Tech Integration:
- Nvidia’s acquisition of Groq, paired with its Vera Rubin chips, positions Groq hardware to handle roughly 25% of new inference workloads (05:52).
- “Nvidia is positioned perfectly to thrive on this coding agent wave that we're seeing right now.” — Tae Kim (05:09)
3. Shift to ASICs and Evolving AI Model Architectures
- Strategic Evolution:
- Nvidia’s openness to ASICs, previously seen as a threat, shows flexibility:
“I think what Jensen does, he sees where the market is shifting and where the economic value is... them working together where 75% of the inference is Vera Rubin, 25% is a Groq... the perfect combination.” — Tae Kim (05:52)
- Upcoming AI Innovations:
- Advances discussed at GTC: context window innovations, memory stacking on GPUs/TPUs, and synthetic data growth suggest the AI boom is far from tapped out.
- “So you have all these vectors where AI models are going to just keep getting better and better.” — Tae Kim (07:54)
4. Nvidia’s Open Source AI Initiatives & Global Market Strategy
- Frontier Lab Investment:
- Nvidia’s $25B investment in open-source AI labs is described as additive, not threatening to partners:
“It's like 25 billion over the next few years, which doesn't really compete with what OpenAI and Anthropic are doing.” — Tae Kim (08:32)
- China Market Access & Regulatory Navigation:
- Despite US–China tensions, Nvidia has secured dual license approvals for billions in future orders:
“Jensen literally said at GTC they got license approvals on both the US and China side.” — Tae Kim (10:07)
5. Supply Chain, Chip Fabs, and Geopolitical Risks
- TSMC Relationship:
- Jensen Huang’s strong ties to TSMC ensure Nvidia’s superior wafer allocation:
“Nvidia is in the driver's seat... Jensen goes there five, six times a year... they are getting a higher allocation.” — Tae Kim (10:53)
- Bottlenecks & Industry-Wide Constraints:
- The wider industry is chip-constrained; only Samsung and Intel could meaningfully augment supply, and their ramp-up timelines are long.
- “There is going to be an AI compute shortage in the years to come... Nvidia benefits because they're the biggest dog in the house.” — Tae Kim (11:17)
- ARM, CPU Shortages, and Future Demand:
- ARM’s push into CPUs is acknowledged, but focus remains on a coming “massive CPU shortage” to serve AI agent orchestration needs.
- “We're going to see this massive demand for CPUs that people aren't really understanding yet... AI agents, the whole thing requires orchestration... all handled by the CPU.” — Tae Kim (13:42)
- Terrafab and Chip Manufacturing Challenges:
- Skepticism about new entrants like Terrafab:
“Chip fabs is almost like cooking... it takes a lot of trial and error accumulated over decades. TSMC and even Intel. Not something you could just jump right in and do.” — Tae Kim (15:31)
6. Elon Musk, SpaceX, and the New 'Railroads' of Compute
- Compute in Space:
- Speculation on SpaceX/Starlink deploying GPUs on satellites to serve future compute demand, framed as a “railroad”-style infrastructure play rather than direct competition with Nvidia.
- “So there's going to be so much demand over the next five, ten years that you're going to have to use these SpaceX satellites that have GPUs in them to serve that.” — Tae Kim (18:59)
7. Material Shortages: Helium & Other Inputs
- Helium as a Bottleneck:
- Temporary risks exist due to geopolitical tension, but current inventories are sufficient. If helium truly ran out, “we're going to have bigger problems... There would be world starvation.” — Tae Kim (20:45)
8. Depreciation of GPUs (“Depreciation-gate”)
- Asset Value Holds:
- Despite fears, demand keeps used GPU prices high and rental rates robust; no risk of a glut yet.
- “All the GPU rental prices, even for stuff that's six years old, is still being sold out and the compute demand outpacing supply is so large.” — Tae Kim (21:09)
9. Next Step-Change in Token Demand & Vertical AI Agents
- CodeGen Still Early:
- Enormous untapped demand in code generation; broader verticals like customer service, research, chip design, and drug discovery will drive growth.
- “We’re just getting started... you're going to see vertical AI agents on every single category.” — Tae Kim (22:41)
- AI in Knowledge Work:
- AI is likened to the spreadsheet revolution for knowledge jobs—not job replacement, but 10x productivity.
- “Thirty, forty years ago... the spreadsheet didn't get rid of all knowledge work, just enabled people to think at a higher level and get more done.” — Tae Kim (23:40)
- Automation Example:
- Routine data collection, e.g., same-store sales analysis, now automated by chatbots, freeing analysts to focus on insights.
- “So all the tedious labor, all the manual labor, all the data entry... is going away and we could think [at a] higher level.” — Tae Kim (26:03)
10. Outlook for Meta and Big Tech AI Bets
- Meta’s Endurance:
- Bears doubt Meta’s AI investments, but fundamentals in digital ads and social engagement are strong.
- “No one's going to replace Meta’s digital ad position. In the AI world they’re even better positioned because Google might lose digital ad share [to] chatbots, their search position going forward.” — Tae Kim (27:00)
- Capex 'Side Quest':
- Meta’s massive Capex on AI is seen like the Reality Labs bet—a long-term play, but existing business generates plenty of cash:
“…even if like the AI spending is like a side quest, it's like really, they just pulled forward like three or four years of Capex and they will use that for their other products.” — John Coogan (27:48)
Notable Quotes & Moments
- On AI agent demand and “GPU sneaker bots”:
“You see tweets, like people are building bots to pick up any kind of B200 GPU... It’s basically like sneaker bots, but for neoclouds.” — Tae Kim (03:58)
- On Nvidia’s open sourcing and China licenses:
“Jensen literally said at GTC they got license approvals on both the US and China side. So we're going to see billions of dollars of H200 orders.” — Tae Kim (10:07)
- On the cultural barriers in fab engineering:
“Everything that I've heard about the culture at TSMC is that the folks who work there are extremely dedicated beyond the economics. They are true missionaries, not necessarily mercenaries.” — John Coogan (17:02)
- On AI as the new knowledge work multiplier:
“Calculator, spreadsheet... didn’t get rid of all of knowledge work, it just enabled people to think at a higher level and get more done. And I'm very optimistic about that.” — Tae Kim (23:40)
- On Meta’s durability:
“No one's going to replace Instagram, no one's going to replace Facebook. Billions of people are still going to use those social media apps... Their pure competitive position really hasn't changed.” — Tae Kim (27:00)
Timestamps for Key Segments
- 00:28 — Context setting: Is Nvidia “over”? Economic & market cycle background.
- 03:14 — Explosion in AI inference demand; inside reports from engineers.
- 04:14 — Jensen Huang’s supply chain strategy and early demand prediction.
- 05:09 — The strategic logic behind the Groq acquisition and chip diversification.
- 07:54 — Next-gen AI model trends: context window, memory stacking, synthetic data (“AI is just getting started”).
- 10:07 — Nvidia’s US/China licensing breakthrough.
- 10:53 — How Nvidia’s close ties to TSMC secure its leading edge.
- 13:42 — ARM CPUs, hyperscaler semiconductor strategy, and rising demand for CPUs in AI.
- 15:31 — Challenges of new fab entrants (e.g. Terrafab); why manufacturing prowess takes decades to build.
- 18:59 — How Elon Musk/SpaceX might make space-based compute the new infrastructure “railroad”.
- 20:45 — Helium supply fears: Real risk or media hype?
- 21:09 — “Depreciation-gate” and why used GPUs still hold value.
- 22:41 — Vertical AI agents and coming token demand surges.
- 23:40 — AI as an enabler for higher-level knowledge work.
- 26:03 — Personal productivity gains as AI agents automate drudge work.
- 27:00 — Meta’s market position and AI Capex “side quest”.
Conclusion
The conversation delivers a nuanced, well-reasoned optimism on Nvidia’s centrality in the AI hardware ecosystem. Despite market volatility, supply chain stress, and intense innovation cycles, Nvidia’s prescient management and global strategy position it strongly for the ongoing AI revolution. Tae Kim, drawing on deep technical sources and industry context, lays out why this cycle is the beginning—not the end—of a massive multi-year AI and compute boom.
