Transcript
A (0:04)
Welcome to the Tech Brew Ride Home for Monday, October 6, 2025. I'm Brian McCullough. Today: another huge OpenAI deal, this time with AMD. The huge number of chips Elon is buying for his Colossus 2. And it's not just compute: how the AI boom is driving up prices for memory and storage chips. And does the math work out for those newfangled small nuclear reactors? Here's what you missed today in the world of tech.

When your business evolves, so does your risk of data loss. But with Veeam, your data is always on the map. Partner with Veeam for coverage that keeps you moving, and get protection for workloads of all shapes and sizes, even the ones you haven't created yet, so you can stay resilient as you scale. When your data goes dark, Veeam turns the lights back on. Get the right kind of data recovery options for any kind of disruption so you can undo the unpredictable. With Veeam, it's all good. Get workload coverage that works for your business at veeam.com. That's V-E-E-A-M dot com, folks.

Today is basically all about the AI horse race, because a lot of big moves have been happening. A lot of literal jockeying for position is going on. First up, OpenAI and AMD have announced a deal as part of which OpenAI could take a 10% stake in AMD. The deal is largely to deploy up to 6 gigawatts of AMD's Instinct GPUs over multiple years. AMD's stock is up over 30% at the time of this writing on this news. Quoting CNBC: OpenAI will deploy 6 gigawatts of AMD's Instinct graphics processing units over multiple years and across multiple generations of hardware, the companies said Monday. It will kick off with an initial 1 gigawatt rollout of chips in the second half of 2026. As part of the tie-up, AMD has issued OpenAI a warrant for up to 160 million shares of AMD common stock, with vesting milestones tied to both deployment volume and AMD's share price.
The first tranche vests with the first full gigawatt deployment, with additional tranches unlocking as OpenAI scales to 6 gigawatts and meets key technical and commercial milestones required for large-scale rollout. If OpenAI exercises the full warrant, it could acquire approximately 10% ownership of AMD based on the current number of shares outstanding. The ChatGPT maker said the deal was worth billions, but declined to disclose a specific dollar amount. The deal positions AMD as a core strategic partner to OpenAI, marking one of the largest GPU deployment agreements in the artificial intelligence industry to date. The partnership could help ease industry-wide pressure on supply chains and reduce OpenAI's reliance on a single vendor, Nvidia. OpenAI unveiled a landmark $100 billion equity-and-supply agreement with Nvidia nearly two weeks ago, cementing the chip giant's role in powering the next generation of OpenAI's models. That arrangement combined capital investment with long-term hardware supply, though in Nvidia's case it was the chipmaker taking an ownership stake in OpenAI. That deal accounts for a dedicated 10 gigawatt portion of OpenAI's broader 23 gigawatt infrastructure roadmap, at an estimated $50 billion in construction costs per gigawatt. Together with the AMD deal, OpenAI has committed roughly $1 trillion in new build-out spending in just the past two weeks. OpenAI is also in talks with Broadcom to build custom chips for its next generation of models. The arrangement between OpenAI and AMD adds a new layer to the increasingly circular future of AI's corporate economy, where capital, equity and compute are traded among the same handful of companies building and powering the technology. Nvidia is supplying the capital to buy its chips, Oracle is helping build the sites, AMD and Broadcom are stepping in as suppliers, and OpenAI is anchoring the demand.
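To make those build-out numbers concrete, here's a quick back-of-the-envelope sketch, assuming (our assumption, not the article's) that the roughly $50 billion-per-gigawatt construction estimate cited for the Nvidia portion applies uniformly across the roadmap:

```python
# Back-of-the-envelope check on the build-out figures above.
# Assumption (ours, not cited): the ~$50B-per-gigawatt construction
# estimate quoted for the Nvidia deal applies uniformly elsewhere.

COST_PER_GW_BILLIONS = 50  # estimated construction cost per gigawatt

nvidia_gw = 10   # dedicated portion of the Nvidia deal
amd_gw = 6       # full AMD Instinct deployment
roadmap_gw = 23  # OpenAI's broader infrastructure roadmap

print(f"Nvidia portion: ~${nvidia_gw * COST_PER_GW_BILLIONS}B")   # ~$500B
print(f"AMD portion:    ~${amd_gw * COST_PER_GW_BILLIONS}B")      # ~$300B
print(f"Full roadmap:   ~${roadmap_gw * COST_PER_GW_BILLIONS}B")  # ~$1150B
```

The two chip deals alone pencil out to roughly $800 billion of construction, and the full 23 gigawatt roadmap to over $1.1 trillion, which squares with the "roughly $1 trillion in the past two weeks" framing.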
It's a tightly wound circular economy, and one that analysts fear could face real strain if any link in the chain starts to weaken. For AMD, the partnership is both a commercial milestone and a validation of its next-generation Instinct roadmap. After years of trailing Nvidia in the AI accelerator market, AMD now has a flagship customer at the forefront of the generative AI boom. AMD CEO Lisa Su said it creates, quote, a true win-win, enabling the world's most ambitious AI buildout and advancing the entire AI ecosystem. So yeah, this is hella interesting strategically. It's worth summing up at this point: OpenAI now has a $100 billion investment from Nvidia, could end up owning up to 10% of AMD, is involved in that $500 billion Stargate investment, and has a $500 billion valuation, but has made commitments of $1 trillion to build all that out. Oh, and they're still technically a nonprofit company, or at least owned by a nonprofit, as Matthew Zeitlin tweeted. So basically OpenAI wants AMD to stick around so it doesn't become dependent on Nvidia. What does Nvidia think of all this? Sam Altman had a post up on X that said, quote, excited to partner with AMD to use their chips and serve our users. This is all incremental to our work with Nvidia, and we plan to increase our Nvidia purchasing over time. The world needs much more compute, end quote. But as Zephyr tweeted in response to Sam's post, quote, everyone is scared of Jensen. That post mentions Nvidia more times than AMD despite the deal being about AMD. Jensen may bump them down the preference list due to this, end quote. And then there's the whole money-flywheel question, the circular nature, as we said earlier. Quoting NS123ABC: Nvidia gives OpenAI money. OpenAI uses that money to secure output from AMD. AMD is also partnered with Nvidia. Infinite Free Money Glitch, end quote. Notice that focus there on compute from Sam Altman. Could this whole AI horse race really come down to who has the most compute power?
Like literally, whoever has the most chips wins and everyone else loses? Well, the big money really seems to think so. A source has told the Journal that Elon Musk's xAI is set to spend more than $18 billion to acquire around 300,000 more Nvidia chips for its Colossus 2 project in Memphis, Tennessee. The AI arms race is shaping up as the most expensive corporate battle of the 21st century, with the belief that the first to the finish line will dominate the market, making speed crucial. Money also makes a difference. The more cutting-edge chips companies have, the smarter their models are. But at this stage it's unclear if or when the enormous investments will pay off. Musk, who has been at the forefront of innovation in electric vehicles, rockets and brain-computer interfaces, is in the unusual position of playing catch-up to rivals like Sam Altman's OpenAI. Finishing Colossus 2 will cost tens of billions of dollars, some AI and data center experts say. The Nvidia chips alone cost a fortune. Musk will need to spend at least $18 billion for the roughly 300,000 more chips he needs to complete the Memphis project, a person familiar with the project's financials said. Musk said in July that Colossus 2 will have a total of 550,000 chips, and has separately signaled it could eventually have a million processing units. Musk is burning cash at a breakneck clip. Earlier this year, xAI raised $10 billion through a combination of debt and equity. The company was slated to run through about $13 billion in cash in 2025, according to projections shared with creditors. A few months ago, the Wall Street Journal reported, Musk turned to one of his private companies to chip in $2 billion, an unusual move for a company that rarely makes outside investments. Some executives have left xAI after clashing with Musk's advisors over concerns about the startup's management and financial health.
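For a sense of scale, here's the per-chip math implied by those Journal figures. A rough sketch only; the exact chip mix and pricing aren't public:

```python
# Implied per-chip cost for Colossus 2, from the figures reported above.
total_spend = 18e9    # "at least $18 billion"
chip_count = 300_000  # roughly 300,000 more Nvidia chips

per_chip = total_spend / chip_count
print(f"Implied cost per chip: ~${per_chip:,.0f}")  # ~$60,000

# Musk's stated 550,000-chip total, priced at the same implied rate:
full_cluster = 550_000 * per_chip
print(f"550k chips at that rate: ~${full_cluster / 1e9:.0f}B")  # ~$33B
```

Roughly $60,000 per chip is in the ballpark of reported pricing for top-end Nvidia rack-scale accelerators, which is why the "tens of billions of dollars" estimate for finishing the project is plausible on chips alone.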
In typical xAI and Elon fashion, quote, the company's future is highly unpredictable, said Dylan Patel, CEO of SemiAnalysis, an independent research company focused on the semiconductor and artificial intelligence industries. Elon will do everything he can to not lose to Sam Altman, end quote. Musk's gamble is playing out in real time in Memphis, according to the locals. His arrival has kindled hopes of an economic renaissance, but it has also stoked controversy. Musk's data centers will probably bring in only a few hundred jobs to Memphis while consuming millions of gallons of water a day and more electricity than is needed to power all the city's homes. Natural gas turbines powering the data centers have brought pollution, and controversy over their use. xAI has argued that many of the structures are temporary and don't require a permit. Some residents question plans for the utility to issue rebates to xAI for building the new power structures it needs. Musk's pitch to Memphis is that he is building infrastructure that will benefit the city. The company has promised to construct a giant wastewater recycling facility to be used in its cooling system that would help reduce demand on the Memphis aquifer. The company has also donated funds to Memphis schools and other organizations, and hired workers to go around the city and pick up trash. In one year, xAI has become the second-largest taxpayer in the city and county after FedEx, said Bill Dunavant III, a Memphis businessman who sits on the board of directors of the city's Chamber of Commerce. Critics say the project is a big risk and could leave residents with pollution caused by the natural gas turbines and higher electricity bills stemming from the extreme demand on power. Quote, Memphis is desperate, said Batsel Booker, a 65-year-old retired firefighter who lives in the neighborhood next to Colossus. And this is not the first time that they have been so desperate for companies. They come in and promise them the world. End quote.
Here's another data point. Tom's Hardware says the AI boom is driving memory and storage shortages that may last a decade. Storage and memory, you know, the other crucial types of chips, apparently. OpenAI's Stargate has deals in place for 900,000 DRAM wafers per month, or around 40% of total global output. Quote: For the better part of two years, storage upgrades have been a rare bright spot for PC builders. SSD prices cratered to all-time lows in 2023, with high-performance NVMe drives selling for little more than the cost of a modest mechanical hard disk. DRAM followed a similar trajectory, dropping to price points not seen in nearly a decade. In 2024, the pendulum swung firmly in the other direction, with prices for both NAND flash and DRAM starting to climb. The shift has its roots in the cyclical nature of memory manufacturing, but is amplified this time by the extraordinary demands of AI and hyperscalers. The result is a broad supply squeeze that touches every corner of the industry, from consumer SSDs to DDR4 kits to enterprise storage arrays and bulk HDD shipments. There's a singular through line: costs are moving upward, in a convergence that the market has not seen in years. Every memory cycle has a trigger, or a series of triggers. In past years it was the arrival of smartphones, then solid-state notebooks, then cloud storage. This time the main driver of demand is AI. Training and deploying large language models require vast amounts of memory and storage, and each GPU node in a training cluster can consume hundreds of gigabytes of DRAM and multiple terabytes of flash storage. Within large-scale data centers, the numbers are staggering. OpenAI's Stargate project has recently signed an agreement with Samsung and SK Hynix for up to 900,000 wafers of DRAM per month. That figure alone would account for close to 40% of global DRAM output, whether the full allocation is realized or not.
The fact that such a deal even exists shows how aggressively AI firms are locking in supply at an enormous scale. Cloud service providers are behaving similarly. High-density NAND products are effectively sold out months in advance. Samsung's next-generation V9 NAND is already nearly booked before it's even launched. Micron has pre-sold almost all of its high bandwidth memory, or HBM, output through 2026. Contracts that once covered a quarter now span years, with hyperscalers buying directly at the source. The knock-on effects are visible at the consumer level. Raspberry Pi, which had stockpiled memory during the downturn, was forced to raise prices in October 2025 due to memory costs. The four gigabyte versions of its Compute Module 4 and 5 increased by $5, while the eight gigabyte models rose by $10. Eben Upton, the company's CEO, noted that memory costs roughly 120% more than it did a year ago, in an official statement seen on the Raspberry Pi website. Seemingly nothing and no one can escape the surge in pricing, according to the CEO of Phison Electronics, Taiwan's largest NAND controller company. It's this redirection of capital expenditure that will, he claims, cause tight supply for the next decade. Quote, NAND will face severe shortages in the next year. I think supply will be tight for the next 10 years, he said in a recent interview. When asked why, he said two reasons. First, every time flash makers invested more, prices collapsed and they never recouped their investments. Then in 2023, Micron and SK Hynix redirected huge capex into HBM because the margins were so attractive, leaving even less investment for flash. So AI usage will send your electricity bills up. That's old news. The new news is AI might make everything from laptops to gaming consoles more expensive for years. Then, the horse race from a different angle.
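But first, two quick sanity checks on those memory numbers. This is our arithmetic, using only the figures cited above:

```python
# 1) If Stargate's 900,000 DRAM wafers/month really is ~40% of global
#    output, the implied worldwide total (a derived figure, not cited):
stargate_wafers = 900_000
share = 0.40
print(f"Implied global output: ~{stargate_wafers / share:,.0f} wafers/month")
# -> ~2,250,000 wafers/month

# 2) "Memory costs roughly 120% more than a year ago" as a multiplier:
multiplier = 1 + 120 / 100  # 2.2x
print(f"A part that cost $100 a year ago now costs ~${100 * multiplier:.0f}")
# -> ~$220, i.e. prices have more than doubled
```

Worth noting: "120% more" means prices more than doubled, not that they rose to 120% of the old price, which is why even well-stocked buyers like Raspberry Pi felt forced to pass costs along.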
Sources tell the FT that OpenAI and Jony Ive have yet to solve key technical and software issues that could delay their rumored palm-sized AI device slated for release in 2026. Quote: The San Francisco-based startup run by Sam Altman acquired the former Apple design chief's company, io, for $6.5 billion in May, but the pair have shared few details on the projects they are building. Their aim is to create a palm-sized device without a screen that can take audio and visual cues from the physical environment and respond to users' requests, people familiar with their plans said. OpenAI and Ive have yet to solve critical problems that could delay the device's release, despite having hardware developed by Ive and his team, whose alluring designs of the iMac, iPod and iPhone helped turn Apple into one of the most valuable companies in the world. Obstacles remain in the device's software and the infrastructure needed to power it. These include deciding on the assistant's personality, privacy issues, and budgeting for the computing power needed to run OpenAI's models on a mass consumer device. Compute is another huge factor for the delay, said one person close to Ive. Amazon has the compute for an Alexa. So does Google for its home device. But OpenAI is struggling to get enough compute for ChatGPT, let alone an AI device. They need to fix that first. A person close to OpenAI said the teething troubles were simply normal parts of the product development process. Multiple people familiar with the plan said OpenAI and Ive were working on a device roughly the size of a smartphone that users would communicate with through a camera, microphone and speaker. One person suggested it might have multiple cameras. The gadget is designed to sit on a desk or table, but can also be carried around by the user. The Wall Street Journal previously reported some of the specifications around the device. One person said the device would be always on rather than triggered by a word prompt.
The device's sensors would gather data throughout the day that would help to build its virtual assistant's memory. Alright, keeping my promise that it's all AI horse race news today, although I didn't intend it to be that way, here we are. Let's come back to the cost of this AI buildout, the bottomless need for data centers and chips which, as we've mentioned, just absolutely gobble energy. Well, one of the proposed solutions to that has been nuclear power, specifically small nuclear reactors, which several startups have sprung up to produce. Except, according to the FT, while Washington and investors have wagered around $9 billion on small modular reactors, so-called SMRs, to power AI data centers, the math might not work out. Wood Mackenzie estimates 2030 SMR power at $180 per megawatt hour, versus $133 per megawatt hour for a conventional larger nuclear reactor, $126 for a natural gas power plant like the ones apparently powering Colossus 2, and wind and solar plus batteries would cost roughly one third of the SMR price. Critics also cite delays and overruns in this new industry. NuScale, for example, scrapped a project after costs rose 120%. Three operating SMRs abroad ran 300 to 400% over budget. Engineers also warn that scale just favors bigger reactors. Supply bottlenecks in HALEU fuel, largely controlled by Russia, could add about an additional $20 per megawatt hour. Still, utilities and tech firms have signed more than 32 gigawatts of agreements for small reactors, including a binding agreement between Google and a startup called Kairos for 500 megawatts by 2035. The point is, will anyone want to run these SMRs, these smaller reactors, if they are potentially generating energy that is two to three times more expensive than alternatives like natural gas and solar? Sam Altman has said compute costs in AI are already eye-watering, and this wouldn't put a dent in that. Nothing more for you today. Talk to you tomorrow.
