Podcast Summary: AWS Custom Silicon Surges Toward Multibillion Revenue
The Last Invention is AI
Date: December 6, 2025
Episode Overview
This episode breaks down Amazon Web Services’ (AWS) expanding role in the AI chip industry and investigates whether AWS custom silicon can challenge Nvidia’s dominant grip on the AI hardware market. Host B discusses recent announcements from AWS, especially the growth of its Trainium silicon business, and analyzes what these developments mean for the future of AI infrastructure, cloud competition, and enterprise adoption.
Key Discussion Points & Insights
1. Nvidia’s Current Dominance in AI Chips
- Nvidia leads the AI chip market and is often considered “unstoppable” due to its integrated hardware and software ecosystem.
- Its financial success and soaring valuation reinforce the narrative of a near-monopoly, especially for training state-of-the-art AI models.
2. Amazon’s Strategic Advantage with AWS
- Amazon’s cloud dominance gives it unique leverage in distributing its own chips.
- AWS routinely buys Nvidia chips to rent to customers for model training, creating both dependency and opportunity:
"Most would say no because [Nvidia is] so ingrained...But I think AWS and Amazon definitely have a big competitive advantage here, which is beyond the chips. Because of AWS, Amazon Web servers, a lot of people are using their cloud for...AI model training." (B, 01:10)
- By embedding custom silicon in its infrastructure, AWS can undercut Nvidia on pricing and incentivize customers to use its chips.
3. The Growth of AWS Custom Silicon (Trainium)
- Trainium 3 chips are reported to be "about four times faster and use less power than the current Trainium 2".
- Amazon CEO Andy Jassy revealed:
- Over 1 million Trainium chips are in production.
- More than 100,000 companies use them, and Trainium powers a majority of usage on Amazon Bedrock, AWS’s AI development service.
- The business is already running at a multibillion-dollar revenue rate:
"He said it is a multibillion dollar revenue run rate business." (B, 03:00)
- The core selling point is price-performance: AWS offers lower prices to attract customers who don’t need Nvidia’s absolute cutting-edge performance but want affordable, scalable AI compute.
"...the main reason why people are picking it is because it 'has price performance advantages over the GPU options that are compelling.'" (B, 05:00)
4. The Amazon Basics Strategy Applied to AI Chips
- Amazon’s tactic is reminiscent of its approach to consumer goods: undercut with in-house “basics” alternatives.
- This manifests as slightly cheaper silicon offerings that may not be top-tier but are good enough for many AI workloads.
5. The AWS-Anthropic Relationship
- A significant portion of AWS’ AI chip revenue is attributed to Anthropic, an AI model developer heavily invested in by Amazon:
"Now, this is not a big shocker. Anthropic is a company that has been heavily invested in by AWS. So Amazon has put, I think, like about over $4 billion into Anthropic in different investments." (B, 09:14)
- Anthropic’s Project Rainier relies on over 500,000 Trainium 2 chips to build next-gen models.
“We've seen more enormous traction from Trainium 2, particularly from our partners at Anthropic, who've announced Project Rainier, where there's over 500,000 Trainium 2 chips helping them build the next generation of models for Claude.” (Matt Garman, AWS CEO, quoted by B, 11:48)
- These investments often come with contractual obligations for Anthropic to use AWS chips and infrastructure, likened to a “forced contract”.
6. The Challenge for True Nvidia Alternatives
- Only a handful of tech giants (Google, Microsoft, Amazon, Meta) possess the engineering expertise to design homegrown AI silicon at this scale.
- Nvidia’s competitive edge also comes from proprietary software ("CUDA"), which makes switching away from their chips difficult for most AI developers:
"It's not a small thing to rewrite an AI app for non-CUDA chips...There's a lot that goes into it. So typically these AI models are just going to keep using Nvidia." (B, 16:00)
7. Looking Ahead: Interoperability and the Future
- Amazon’s Trainium 4 (in the pipeline) is expected to support interoperability with Nvidia GPUs, potentially making AWS attractive to even more customers and hedging against Nvidia lock-in.
"The next generation of its AI chips which is going to be the Trainium 4 is going to be built to...work with both Nvidia's GPUs and have that in the same system as...Trainium 4 chips." (B, 19:02)
- The host speculates that regardless of whether AWS chips surpass Nvidia’s in performance, the cloud integration and cost play may make AWS a major winner in the AI compute space.
Notable Quotes & Memorable Moments
- On Amazon’s emerging strategy:
"They're kind of applying the same strategy where they're like, look, tons of people are using Nvidia's chips to train models on AWS. What if we just made...compute that was a little bit cheaper? People could still get their models trained. They'll just use our chips because they're a little bit cheaper." (B, 07:10)
- On forced adoption via investment:
"It's almost like a forced contract, like people didn't have a choice other than to use them. And maybe it was a better option because it was cheaper, but it is an interesting place to be, for sure." (B, 10:56)
- On the significance of CUDA:
"It's not a small thing to rewrite an AI app for non-CUDA chips...Typically these AI models are just going to keep using Nvidia, but there are a couple players that are trying to do this." (B, 16:00)
- On what the future may hold:
"I think the next generation is going to be a lot better and that alone just might be enough to make AWS the winner." (B, 20:10)
Timestamps for Key Segments
- [01:10] — AWS’s competitive advantage versus Nvidia
- [03:00] — AWS chip business reaches multibillion dollar run rate
- [05:00] — Why customers choose AWS chips: price/performance
- [07:10] — Amazon Basics strategy applied to chips
- [09:14] — Anthropic as major AWS AI chip customer
- [10:56] — The reality of "forced" Amazon AI infrastructure deals
- [11:48] — Matt Garman on Anthropic’s Project Rainier and Trainium deployment
- [16:00] — The barrier of CUDA and why most stick with Nvidia
- [19:02] — Future plans for Trainium 4 and Nvidia interoperability
- [20:10] — Outlook: AWS becoming a long-term winner in AI compute
Conclusion
This episode provides a sharp analysis of the rapidly shifting landscape beneath the AI infrastructure market’s surface. While Nvidia's technical and ecosystem lead is substantial, AWS's scale, pricing, and strategic partnerships, especially its Amazon-style commoditization model, place its silicon business on a strong growth trajectory. Multibillion-dollar revenues and high-profile AI partners like Anthropic signal that AWS is not just a cloud provider but a genuine contender in the AI chip arms race.
For anyone wanting to dig into the details or see how these strategies may shape the future of AI, this episode delivers a concise and insightful primer.
