
What can 160 years of experience teach you about the future? When it comes to protecting what matters, Pacific Life provides life insurance, retirement income and employee benefits for people and businesses building a more confident tomorrow. Strategies rooted in strength and backed by experience. Ask a financial professional how Pacific Life can help you today. Pacific Life Insurance Co., Omaha, Nebraska, and in New York, Pacific Life and Annuity, Phoenix, Arizona.
Google has been crushing it lately in a lot of departments, and one of those areas is its TPUs, an alternative to Nvidia's GPUs for training AI models, which they actually used to train their latest Gemini 3 model. With all of this happening, shares of Nvidia just fell 3% because a report came out saying that Meta, one of Nvidia's huge customers, is possibly going to strike a deal with Google to use its TPUs, its tensor processing units, for data centers. Due to this, Nvidia's stock fell 3%, and Nvidia came back with their own response. Today on the podcast I want to talk about the state of the AI chips market. Nvidia has a massive lead, but can they maintain that lead? And is Google the sleeping giant that's going to not just come out of nowhere and ring up tons of users in the AI space for their chatbot Gemini, but maybe also challenge Nvidia in a major way on the chips? We're going to get into all of that.

Before we do, I wanted to say that if you want to try any of the AI models I mention on the show today, I would love for you to try out my startup, which is AI Box AI. You get access to over 40 of the top models, everything from Gemini to Claude to OpenAI to ElevenLabs for audio, plus tons of cool image models, all on one platform for $20 a month. You can chat with all of the models in the same chat thread and compare the responses side by side. If you want to go check it out, it is AI Box AI. I'll leave a link in the description.

All right, let's get into what Nvidia said. With all of the news, Nvidia's stock falling after this rumored report of Meta and Google striking up a big TPU deal, Nvidia has come back and said that its GPUs are, quote unquote, a generation ahead of Google's AI chips. Nvidia put out the statement on Thursday, and this comes as Nvidia has more than 90% of the market for AI chips with its graphics processing units; that's essentially the figure analysts are pointing to. But Google's in-house chips have gotten increased attention because of how viable an alternative they are. They're powering Gemini 3, the second biggest model in the entire world right now. So due to all of this, Nvidia is trying to maintain that it has a superior chip, and I'm not sure everyone will go along with that. Google says that their TPUs have a better architecture, and a lot of the analysts I've seen who are specifically in the chip space admit that the TPU is a better architecture, or at least some of them will call it that. Famously, Chamath Palihapitiya, who was an early investor in Groq, another alternative to Nvidia in AI chips, says that TPUs and Groq's chips are a better form of chip than what Nvidia has. But Nvidia has such a massive market, and I think what a lot of people are not really talking about is that Nvidia isn't just a chip company. They have a really great way of pulling multiple chips together, a really great infrastructure, and a really great software platform that, these days, a lot of AI developers rely on for training models. In response to this rumored deal with Google, Nvidia said, quote, "We are delighted by Google's success. They've made great advances in AI and we continue to supply to Google. Nvidia is a generation ahead of the industry. It's the only platform that runs every AI model and does it everywhere computing is done."
This post came as their shares, like I said, fell 3% on Thursday after the report of Meta possibly striking up this deal with Google. In the same post, Nvidia said that its chips are more flexible and powerful compared to ASIC chips like Google's TPU, which are essentially designed for a single company or function. Nvidia's latest generation of chips, known as Blackwell, are more general purpose and useful for a lot of different things. But they're not built exclusively for AI, because if you remember Nvidia's history, they originally came out creating GPUs for gamers. That's how they got a lot of their notoriety, making regular consumers' computers great. They then really scaled during the crypto boom, when people were using them for mining crypto, and as the crypto market was crashing, the AI market started really taking off and they kind of switched into that industry. Because of this approach, they've hit three different industries. Their chips are quite good for many things, but they're not optimized exclusively for training AI models, and that is what ASIC chips like Google's TPU are specifically designed for. With 90% of the market, it feels like Nvidia only has room to kind of decrease in market share.

Google's in-house chips have gotten a lot of attention as an alternative to Nvidia's chips, which are really expensive but also really powerful. Blackwell chips are kind of best in class, but they're good for a lot of different things, like I mentioned, whereas Google's chips are designed for one thing in particular, which is training AI models. They do it very well and they're very optimized, so you can make them a lot cheaper. Unlike Nvidia, Google doesn't sell its TPU chips to companies yet; it uses them for a lot of internal tasks and lets companies rent them on Google Cloud. Nvidia, on the other hand, will just sell their GPUs to anyone; people buy tons of GPUs, build data centers, and then rent them out. So it's kind of an interesting business model where Google is keeping all of their TPUs in house. When Google released Gemini 3 earlier this month, it was obviously state of the art. It hit the top of a lot of different benchmarks, and it was trained exclusively on TPUs, not Nvidia's GPUs, which got a lot of headlines. On all of this, a Google spokesperson said, "We are experiencing accelerated demand for both our custom TPUs and Nvidia GPUs. We are committed to supporting both, as we have for years." Google doesn't want to bite the hand that feeds them, right, and come out and say, hey, we have this big competitor, we're going to take on Nvidia, because they are a huge customer of Nvidia. Currently, as they're scaling up their TPU chips, they need to buy a ton of Nvidia GPUs to put in their data centers, especially for Google Cloud, where people are renting them to train AI models and other things. Nvidia CEO Jensen Huang also addressed the rising TPU competition in an earnings call earlier this month. He said that Google was a customer for his company's GPU chips and that Gemini can run on Nvidia's technology, which is a good point: Gemini was trained on TPUs and can run on TPUs, but it can also run on Nvidia's technology.
He also mentioned that he was in touch with Demis Hassabis, the CEO of Google DeepMind, and that Hassabis apparently texted him to say that the tech industry theory that using more chips and data will create more powerful AI models, often called scaling laws by AI developers, is still intact. Meaning, if you buy more chips, your model gets better. This is something we saw with earlier models like GPT-4o, where Sam Altman came out with a bunch of benchmarks and said, basically, if you give the model $10,000 of compute to answer a question, the answer gets insanely better. Of course, that's not very cost-efficient, so we don't do that today. But if you wanted your model to get better, you basically would just give it access to more compute. And that was with older models, so imagine GPT-5, Gemini 3, and a lot of these newer models; they will get better and better with more compute. The reason Jensen brings that up is that even if there were some sort of plateau where researchers couldn't figure out new ways to make the models better, all they would need to do is buy more compute, and that would feed straight into Nvidia. Now, of course, they'd have to figure out how to make the chips more efficient and cheaper, but it really could be a future where companies just need to buy more and more chips. So Nvidia says that scaling laws are going to lead to even more demand for their chips and their systems, and if this is true, Nvidia could continue to grow.

Thank you so much for joining the podcast today. If you enjoyed the episode, it would mean the world to me if you could leave a rating or review wherever you get your podcasts. On Apple, you can leave some stars, or on Spotify, you can hit the About tab and drop some stars as well. Thanks so much for tuning in, and I will catch you in the next episode. As always, make sure to check out AI Box AI if you want to get access to all of the AI models for $20 a month on one platform. I'll catch you in the next episode.
Podcast: The Last Invention is AI
Date: November 27, 2025
This episode dives deep into the rapidly evolving landscape of AI hardware, specifically focusing on the rivalry between Nvidia’s new-generation GPUs and Google’s TPUs. The host explores the latest developments, including market reactions, company strategies, technical comparisons, and speculations on the future of AI chip dominance. Listeners are given a balanced view of how these tech giants are shaping the capabilities and possibilities of artificial intelligence.
Nvidia GPUs: General-purpose Blackwell chips holding roughly 90% of the AI chip market, backed by Nvidia's interconnect, infrastructure, and software platform.
Google TPUs: In-house ASICs purpose-built for AI workloads; not sold outright, but used internally, rented out through Google Cloud, and used to train Gemini 3.
Market Differences: Nvidia sells GPUs to anyone building data centers, while Google keeps its TPUs in house, which is why a reported Meta-Google TPU deal knocked Nvidia's shares down 3%.
Quote: "Nvidia is a generation ahead of the industry. It's the only platform that runs every AI model and does it everywhere computing is done." (Nvidia statement)
Industry "Scaling Laws" Philosophy: The theory that more chips and data yield more capable models, which Demis Hassabis reportedly texted Jensen Huang is still intact (a toy numerical sketch of this idea follows after this list).
Example from OpenAI: Sam Altman's earlier benchmarks suggesting that giving a model vastly more compute per question produces dramatically better answers, even though it isn't cost-efficient.
Implications for Nvidia: If scaling laws hold, demand for compute keeps growing, and that demand feeds straight into Nvidia's chips and systems.
Quote: "We are experiencing accelerated demand for both our custom TPUs and Nvidia GPUs. We are committed to supporting both, as we have for years." (Google spokesperson)
Nvidia's Victory Lap: A public statement, issued as its stock fell 3%, asserting that Nvidia's GPUs remain a generation ahead of Google's AI chips.
On Technical Superiority: Some chip-focused analysts, including early Groq investor Chamath Palihapitiya, argue that TPU-style architectures are better designs, while Nvidia points to its flexibility and software ecosystem.
Gemini 3's Training Notoriety: Gemini 3 topped many benchmarks and was trained exclusively on TPUs rather than Nvidia GPUs, which drew headlines.
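To make the "scaling laws" point above a bit more concrete, here is a minimal, purely illustrative Python sketch of the idea that model quality improves as a smooth power law in training compute. The functional form and every number in it are assumptions chosen for demonstration, not figures from the episode, Nvidia, Google, or OpenAI.

```python
# Toy illustration of the "scaling laws" idea discussed in the episode:
# model quality (proxied here by a loss value, lower is better) improves
# as a power law in training compute. The exponent and constant below are
# made-up placeholders, not measured values from any real model.

def toy_loss(compute_flops: float, alpha: float = 0.05, c: float = 10.0) -> float:
    """Hypothetical power-law loss curve: loss = c * compute ** (-alpha)."""
    return c * compute_flops ** (-alpha)

if __name__ == "__main__":
    for flops in (1e21, 1e22, 1e23, 1e24):
        print(f"compute = {flops:.0e} FLOPs -> toy loss = {toy_loss(flops):.3f}")
    # Every 10x increase in compute cuts the toy loss by the same fixed
    # factor (10 ** -alpha), which is the sense in which "just buy more
    # chips" keeps improving the model under this assumed curve.
```

Under this assumed curve, each extra order of magnitude of compute buys the same relative improvement, which is the simple version of the argument Nvidia makes about future chip demand.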
The host maintains an informative, fast-paced tone—balancing technical insight with market analysis, and using plain language accessible to both tech-savvy listeners and newcomers. The episode is packed with up-to-date industry developments, expert opinions, and grounded speculation, making it a must-listen for anyone interested in the crossroads of AI hardware and enterprise strategy.
This summary captures the episode's main narrative and key moments, providing listeners a comprehensive view of the Nvidia-Google AI chip competition and its broader ramifications in the rapidly advancing AI arms race.