Mason Amadeus (2:47)
So we've been hearing about this whole Stargate project for a while, right? Earlier this year they outlined it as a half-trillion-dollar, $500 billion plan to build out a bunch of AI data centers with partners including SoftBank and Oracle. And that saga has continued with a lot of twists and turns and problems and big plans. So I've pieced together a couple of different news stories about it, and I want to cruise through them together with all of us. Stargate was initially conceived as a new company that would invest $500 billion in AI infrastructure. Now, OpenAI executives say the parameters have expanded to include data centers that were launched months before Stargate was announced. And OpenAI has been exploring some creative financing options, apparently. So all of that startup capital has not been enough; they're looking at potentially getting into some debt, which is interesting. And what I don't fully understand is the implications of this sentence: OpenAI will pursue different creative financing options, some of which have only emerged within the last year, to secure chips for the data centers, their executives said. On Tuesday, Sam Altman put out this blog post where he outlined their biggest ambition, which is to grow by a gigawatt of compute a week. Ultimately, I'm gonna skip a lot of the bloviating that goes on in here and just try to hit the two paragraphs that I think are most pertinent to the rest of this discussion, where Sam says: if AI stays on the trajectory that we think it will, then amazing things will be possible. Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer. Or with 10 gigawatts of compute, AI can figure out how to provide customized tutoring to every student on Earth. If we are limited by compute, we'll have to choose which one to prioritize; no one wants to make that choice. So let's go.
So that is a little bit of bloviating, but I think it really illuminates where his mindset is. They're really all in on scaling right now. And he says: our vision is simple. We want to create a factory that can produce a gigawatt of new AI infrastructure every week. The execution of this will be extremely difficult. It will take us years to get to this milestone, and it will require innovation at every level of the stack, from chips to power to buildings to robotics. But we've been hard at work on this and believe it is possible. In our opinion, it will be the coolest and most important infrastructure project ever. So, very lofty goals, and very lofty amounts of money changing hands in all of this. And now we've had a little bit of movement. We mentioned that they were feeling the pressure and talking about exploring some financing options. On Tuesday, the same day that that letter came out, OpenAI, Oracle, and SoftBank unveiled plans for five new US AI data centers for Stargate: three sites with Oracle, two affiliated with SoftBank, and an expansion of a big Oracle site in Abilene, Texas. Abilene, I'm not super familiar with how they pronounce that. Abilene was the flagship Stargate project; it's been under construction for more than a year. So OpenAI is in a position now where they need to execute on these lofty ideals and try to get these data centers built. And the article goes into this new partnership with Nvidia, which, I feel like every episode we talk about someone's new partnership with Nvidia, so I feel a little bit like I'm going crazy, but I'll just read here. This is from Reuters: after announcing Stargate in January, OpenAI held hundreds of meetings across North America with potential partners that could provide land, power, and other resources. "It was a flood of people," one executive said. The expanded Stargate plan now includes self-built data centers and third-party cloud capacity.
The new Nvidia deal, which I have more details to share about, is part of this broader strategy that allows OpenAI to pay for its chips over time rather than purchasing them outright. They say that of the roughly $50 billion estimated for each new data center, about $15 billion covers land, buildings, and standard equipment. Financing the GPU chips is more challenging due to shortages and uncertainty over the useful life of the chips in the current state of the industry. And then Nvidia and OpenAI have formed this new partnership. Nvidia released a letter of intent, which I guess we can look at first, and then I have an article from Fortune that goes a little more into detail about how much power all this is going to be sucking down, right? The Nvidia letter of intent reads as follows: OpenAI and Nvidia today announced a letter of intent for a landmark strategic partnership to deploy at least 10 gigawatts of Nvidia systems for OpenAI's next-generation AI infrastructure to train and run their next generation of models, blah, blah, blah, trying to get to superintelligence. To support this deployment, including data center and power capacity, Nvidia intends to invest up to $100 billion in OpenAI as the new Nvidia systems are deployed. The first phase is targeted to come online in the second half of 2026 using the Nvidia Vera Rubin platform. And they go on to talk about how they've worked together and all this stuff that they're going to do, right? But all of what I've said is just context for this Fortune article, which was my entry point into all this and kind of ties it into a single thread. The article is titled "Sam Altman's AI empire will devour as much power as New York City and San Diego combined. Experts say it's scary." And it is gonna be a lot of power.
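To make that cost split concrete, here's a quick back-of-the-envelope sketch of the per-site arithmetic as described above. The $50 billion and $15 billion figures come from the reporting; treating the entire remainder as chip spend is my simplification, since the reporting only says the balance is dominated by GPUs.

```python
# Rough per-data-center cost split as described in the reporting above.
# This is back-of-the-envelope, not an official breakdown.
total_per_site_usd = 50e9    # ~$50B estimated per new data center
land_buildings_usd = 15e9    # ~$15B for land, buildings, standard equipment

# Assumption: the remainder is effectively the GPU/chip spend.
chips_usd = total_per_site_usd - land_buildings_usd

print(f"Implied chip spend per site: ${chips_usd / 1e9:.0f}B")       # $35B
print(f"Chips as share of total: {chips_usd / total_per_site_usd:.0%}")  # 70%
```

That 70% share is why the chip financing, not the real estate, is the hard part: it's the bulk of each site's cost, tied up in hardware with an uncertain useful life.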
They start with some world-building, talking about how a heat wave in San Diego shot demand past 5,000 megawatts, and how that is half of the 10 gigawatts we are talking about with these figures here. Andrew Chien (whose name I may be butchering, and I'm so sorry, Andrew), a professor of computer science at the University of Chicago, said, quote: I've been a computer scientist for 40 years, and for most of that time, computing was the tiniest piece of our economy's power use. Now it's becoming a large share of what the whole economy consumes. It's scary, because now computing could be 10 or 12% of the world's power by 2030. We're coming to some seminal moments for how we think about AI and its impact on society. They mention that this week OpenAI announced a plan with Nvidia to build data centers consuming up to 10 gigawatts of power, with additional projects totaling 17 gigawatts already in motion. They say that's roughly equivalent to powering New York City, which uses 10 gigawatts in the summer, plus San Diego during the intense heat wave of 2024, when more than 5 gigawatts were used. Or, as one expert put it, it's close to the total electricity demand of Switzerland and Portugal combined. So this is the kind of stuff, when we talk about power use, where we are looking at some pretty significant numbers.
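The comparisons above are easy to sanity-check with the unit conversions. The gigawatt figures are as quoted in the article; lining them up this way is my own arithmetic, and the article is loose about whether the city comparison is against the 17 GW figure alone or the combined total.

```python
# Sanity-check the power comparisons quoted from the Fortune article.
MW_PER_GW = 1000

san_diego_peak_gw = 5000 / MW_PER_GW  # 2024 heat-wave peak: >5,000 MW -> 5 GW
nyc_summer_gw = 10                    # NYC summer demand per the article
nvidia_deal_gw = 10                   # OpenAI/Nvidia planned deployment
in_motion_gw = 17                     # additional projects already in motion

# The San Diego peak is half of the 10 GW Nvidia deployment:
print(nvidia_deal_gw / san_diego_peak_gw)      # 2.0

# NYC plus San Diego lands near the 17 GW "already in motion" figure:
print(nyc_summer_gw + san_diego_peak_gw)       # 15.0 GW, vs. 17 GW planned
```

So the "New York City and San Diego combined" framing checks out to within a couple of gigawatts against the projects already in motion, before the new 10 GW Nvidia deployment is even counted.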