Transcript
A (0:00)
It's true that some things change as we get older, but if you're a woman over 40 and you're dealing with insomnia, brain fog, moodiness and weight gain, you don't have to accept it as just another part of aging. And with Midi Health, you can get help and stop pushing through it alone. The experts at Midi understand that all these symptoms can be connected to the hormonal changes that happen around menopause, and Midi can help you feel more like yourself again. Many healthcare providers aren't trained to treat or even recognize menopause symptoms. Midi clinicians are menopause experts. They're dedicated to providing safe, effective, FDA-approved solutions for dozens of hormonal symptoms, not just hot flashes. Most importantly, they're covered by insurance. Ninety-one percent of Midi patients get relief from symptoms within just two months. You deserve to feel great. Book your virtual visit today at joinmidi.com. That's join-M-I-D-I dot com. Cool Zone Media.
B (1:04)
Give me that turtleneck shirt, a tin pan apple, a German Shepherd, a wristband stand and a lurching red bird. And more braves than the Turner Network. This is your weekly Better Offline monologue and I'm your host, Ed Zitron. Now, before we go any further, I need your help. Better Offline is up for a Webby, and I really need you to vote for Best Episode in the Business category. It's "The Man Who Killed Google Search." It's Prabhakar Raghavan. Let's get him. I realize it's a huge pain in the ass to sign up for something and vote, but I've never won an award in my life and I'd really appreciate it. The link is going to be in the episode notes, and while you're there, also vote for the wonderful Molly Conger's Weird Little Guys, which I'll also have a link to. I know signing up to stuff is annoying. I'm asking a lot from you, but there you go. I'm doing it anyway. To the monologue. I feel like we're approaching a choke point in the whole generative AI bubble, the culmination of over a year of different narratives and pressures that I believe will lead to an ultimate collapse. Last week, OpenAI released an image generator with GPT-4o, which quickly gained massive attention for its ability to create images in the style of famed Japanese animation company Studio Ghibli. And to be clear, I think these images are an abomination, and everyone involved in launching this tool has committed a mortal sin. Nevertheless, creating these disgusting, disgraceful images comes at an incredibly high cost, and for the last week OpenAI CEO Sam Altman has been complaining about their GPUs melting, leading to OpenAI having to limit free users to only three image generations a day, along with longer wait times and capacity issues with video generator Sora.
To make matters worse, Altman also announced, and I quote, that users should expect new releases from OpenAI to be delayed, stuff to break, and for services to sometimes be slow as we deal with capacity challenges. This led me to ask a very simple question that I think everybody in the tech media really should be asking: why can't Sam Altman ask Microsoft for more GPUs? The answer, as you may have guessed from my last monologue, is that there may not actually be capacity for them to do so. OpenAI's relationship with Redmond has grown kind of chilly over the past year. I'd speculate that Microsoft has refused to provide additional immediate capacity, or has refused to provide capacity on the chummy terms that OpenAI previously enjoyed, receiving a significant discount on the usual ticket prices. We know that Microsoft has both walked away from 2 gigawatts of future compute capacity and declined the option to spend another $12 billion on CoreWeave's compute. And CoreWeave, if you don't remember, is the publicly traded data center AI company, a whole dog's dinner unto itself. Analyst house TD Cowen suggested that this is a sign that Microsoft is no longer willing to shoulder the immense financial burden of supporting OpenAI, even though OpenAI picked that option up, by which I mean they took the $12 billion of compute. It isn't clear if CoreWeave can actually build the capacity they need, and I definitely don't think they're going to be able to do it in the time they need it. Microsoft allegedly walked away from CoreWeave due to its failure to deliver the services they asked for, and indeed probably the compute as well. If that's true, it's unclear what has changed to make CoreWeave magically able to support OpenAI, or even how a company that's drowning in high-interest debt can finance the creation of several billion dollars' worth of new data centers.
Also, it's not quite as simple as OpenAI calling up a data center company with a bunch of GPUs and running ChatGPT.exe. OpenAI likely has reams of different requirements, and the amount of GPUs they will need will likely vary based on demand, putting them in a problematic situation where they could be committing to a bunch of compute that they don't need if demand slows down. I've heard that companies generally want a 6 to 12 month commitment for GPUs too. The cost is fixed no matter how much they get used, or at least there's a minimum commitment. But let's assume for a second that demand for ChatGPT continues to rise. How does OpenAI actually get that compute if Microsoft isn't handing it over? The Information reports that OpenAI still projects to spend about $13 billion on Azure cloud compute in 2025. There aren't really a ton of other options, especially for a company with such gigantic requirements, meaning that whatever infrastructure OpenAI is building is a patchwork between smaller players, and using so many smaller providers likely creates unavoidable inefficiencies and overhead. I'm naming another pale horse of the AI apocalypse, by the way: limits to service and service degradation across ChatGPT. OpenAI is running out of compute capacity. They've talked about it since October of last year, and ChatGPT's new image generation is a significant drain on their resources, meaning that to continue providing their services, they're going to need to expand capacity or reduce access to services otherwise. The problem is that expanding is extremely difficult. Data centers take three to six years to build, and OpenAI's planned Stargate data center won't have anything ready before 2026 at the earliest, which means we're approaching a point where there simply might not be enough data centers or GPUs to burn. While OpenAI could theoretically go to Google or Amazon, both of those companies are invested in Anthropic and have little incentive to align with OpenAI.
Meta is building their own ChatGPT competitor, and Elon Musk despises Sam Altman. A real shithead-versus-fuckwad situation there. While I can't say for certain, I can't work out where OpenAI will get the capacity to continue, and I just don't know how they're going to expand their services if Microsoft isn't providing capacity. Yes, there's Oracle, which OpenAI has a partnership with, but they're relatively small in this space. ChatGPT's image generation has become this massive burden on the company, right at the point where it's introducing some of its most expensive models ever, and the products themselves are extremely expensive to run. Deep Research is perhaps the best example, using OpenAI's extremely expensive o3 model, which can cost in some cases as much as $1,000 per query. Deep Research is probably cheaper, but not that much cheaper. Probably. I've heard rumors, and this is a rumor, it's a rumor: I've heard like a dollar or two per query. If that's the case, that's fucking insane. Anyway, while OpenAI could absorb the remaining capacity at, say, Crusoe, Lambda and CoreWeave, this creates a systemic risk where every GPU provider is reliant on OpenAI's money, and this assumes that they'll actually have enough to begin with. OpenAI also just closed the largest private funding round in history: 40 billion theoretical dollars, valuing the company at a ridiculous $300 billion, raised from, you guessed it, SoftBank and other investors. That's good news, right? Not really. In truth, OpenAI really only raised $10 billion, with $7.5 billion of those dollars coming from SoftBank and another $2.5 billion coming from other investors, including Thrive Capital and Microsoft. The remaining $30 billion, of which SoftBank is on the hook for $20 billion, will arrive at the end of the year.
But OpenAI will only get $10 billion of that from SoftBank, bringing it down to a $30 billion round total, if OpenAI fails to convert from a nonprofit to a for-profit company by the end of 2025. A massive acceleration there. As a reminder, OpenAI is a weirdly structured nonprofit with a for-profit arm, and their last round of funding from October 2024 had another caveat: fail to become a for-profit company by October 2026, and all investment dollars would convert into debt. I've also read that they would have to hand the money back. I'm not sure whether that's the case; debt is the one that's being reported the most. Furthermore, OpenAI loses money on every single prompt on ChatGPT, even from their $200-a-month ChatGPT Pro subscribers. The burdensome interest payments would make it even harder for OpenAI to reach break-even, which right now it doesn't even seem like they can do anyway. As another reminder, SoftBank is a company that has invested in two different fraudulent schemes, Wirecard and Greensill Capital, the latter of which helped put the nail in the coffin of Credit Suisse back in 2023, and put $16 billion into WeWork. It will be incredibly, some might say impossibly, difficult to convert OpenAI into a for-profit company, and I'll cover this in a future episode, and the fact that SoftBank is putting this caveat on their investment heavily suggests that they have doubts it'll happen. And I must be clear: when the Monopoly Man is getting nervous, you should get nervous too. The fact OpenAI accepted these terms also suggests they're desperate, and I don't blame them. They've committed $18 billion to the Stargate data center project, will spend $13 billion on Microsoft compute alone in 2025 according to The Information, and they've now created an incredibly popular product that will guarantee people come and use it like twice and then never use it again. Now keep a keen eye on any restrictions that OpenAI makes on ChatGPT in the coming months.
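[Editor's note: the funding-round arithmetic above is easy to lose track of, so here's a minimal sketch of the reported figures as described in this monologue. The numbers are the ones stated in the episode, not official OpenAI disclosures, and the function names are purely illustrative.]

```python
# Sketch of the reported OpenAI funding-round math (amounts in billions of USD).
# Figures as described in the monologue: a $40B headline round, $10B paid up
# front ($7.5B SoftBank + $2.5B others), and $30B arriving at the end of 2025,
# of which SoftBank owes $20B, dropping to $10B if OpenAI stays a nonprofit.

INITIAL_TRANCHE = 7.5 + 2.5       # SoftBank + other investors, paid up front
REMAINING = 30.0                  # arrives at the end of the year
SOFTBANK_REMAINING = 20.0         # SoftBank's share of that remaining $30B
SOFTBANK_IF_NO_CONVERSION = 10.0  # what SoftBank pays if conversion fails

def total_raised(converts_to_for_profit: bool) -> float:
    """Total round size depending on whether OpenAI converts to a
    for-profit company by the end of 2025, per the reported terms."""
    if converts_to_for_profit:
        return INITIAL_TRANCHE + REMAINING  # full $40B
    # SoftBank's remaining commitment drops from $20B to $10B
    non_softbank_remaining = REMAINING - SOFTBANK_REMAINING
    return INITIAL_TRANCHE + non_softbank_remaining + SOFTBANK_IF_NO_CONVERSION

print(total_raised(True))   # 40.0 — the headline number
print(total_raised(False))  # 30.0 — the round if conversion fails
```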
I do not see how this company survives, nor do I see how they expand their capacity much further. Price increases, rate limits and other ways of easing the pressure on their servers will likely suggest that OpenAI is up against the wall, both in their ability to support the services they provide and the cost they must bear to provide them. We are entering the hysterical era of the bubble, a time when the craziest stuff will happen as the money does everything it can to keep the dream alive. I look forward to telling you what happens next.
