Transcript
Oracle Representative (0:00)
AI is rewriting the business playbook, with productivity boosts and faster decision making coming to every industry. If you're not thinking about AI, you can bet your competition is. This is not where you want to drop the ball. But AI requires a lot of compute power, and with most cloud platforms, the cost for your AI workloads can spiral. That is, unless you're running on OCI: Oracle Cloud Infrastructure. This was the cloud built for AI, a blazing fast, enterprise-grade platform for your infrastructure, database, apps, and all your AI workloads. OCI costs 50% less than other major hyperscalers for compute, 70% less for storage, and 80% less for networking. Thousands of businesses have already scored with OCI, including Vodafone, Thomson Reuters and Suno AI. Now the ball's in your court. Right now, Oracle can cut your current cloud bill in half if you move to OCI. Minimum financial commitment and other terms apply. Offer ends March 31st. See if your company qualifies for this special offer at oracle.com/strategic. That's oracle.com/strategic. In a world of economic uncertainty and workplace transformation, learn to lead by example from visionary C-suite executives like Shannon Schuyler of PwC and Will Pearson of iHeartMedia. The good teacher explains; the great teacher inspires. Don't always leave your team to do the work. That's been the most important part of how to lead by example. Listen to Leading by Example: Executives Making an Impact on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. Do you remember what you said the first night I came over here? Ow. Go slower. From Blumhouse TV, iHeart Podcasts and Ember 20 comes an all-new fictional comedy podcast series. Join the flighty Damien Hirst as he unravels the mystery of his vanished boyfriend. I've been spending all my time looking for answers about what happened to Santi. And what's the best way to find a missing person? Sleep with everyone he knew? Obviously.
Listen to The Hookup on the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows. Welcome to Pod of Rebellion, our new Star Wars Rebels rewatch podcast. I'm Vanessa Marshall, voice of Hera Syndulla, Spectre 2. I'm Tiya Sircar, Sabine Wren, Spectre 5. I'm Taylor Gray, Ezra Bridger, Spectre 6. And I'm Jon Lee Brody, the Ghost Crew stowaway moderator. Each week we're gonna rewatch and discuss an episode from the series and share some fun behind-the-scenes stories. Sometimes we'll be visited by special guests like Steve Blum, who voices Zeb Orrelios, Spectre 4, or Dante Basco, who voices Jai Kell, and many others. So hang on, because it's going to be a fun ride. Cue the music. Listen to Pod of Rebellion on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. Cool Zone Media. Greetings, I'm Ed Zitron and this is Better Offline. In my last episode, I started by telling you about a report from the analyst wing of a major bank, TD Cowen, that revealed how Microsoft had drastically pulled back on its plans to build new data center capacity. At a bare minimum, Microsoft had canceled the equivalent of every data center in London or Tokyo. At least. The real figure is probably much, much higher. And as I pointed out, this is a real pale horse, a harbinger of bad times for generative AI, and OpenAI especially, and an indicator that the bubble is imploding or popping. I really don't want to say it's popped, and I actually don't know if it's necessary to say when it does; it's never going to be one thing. I also mentioned that some may interpret this move as a response to OpenAI's Stargate, which aims to build hundreds of billions of dollars of data centers and power generation facilities in the US, all to power generative AI apps and tools that I'm not sure there's really the demand for. But this begs the question: how feasible is the Stargate project? Let's start with a bit of background.
The Stargate project was officially announced at the very beginning of the Trump presidency as OpenAI tried to court favor from the notoriously transactional and praise-hungry President of the United States of America. Despite that, the project has been in the works for some time, and OpenAI had previously courted the Biden administration. Sam Altman's previous pitch to that administration late last year was that it was necessary to build a 5 gigawatt data center. We don't know how big Stargate will be, just that it will initially involve spending $100 billion to, and I quote, develop data centers for artificial intelligence in the U.S., according to The Information, with the project potentially scaling to $500 billion, a truly fucking astoundingly stupid number. Stargate's first and only data center deal currently signed is in Abilene, Texas, and it's expected to be operational in mid-2026, though these centers usually open in phases. This is especially likely to be the case here, considering that, according to The Information, OpenAI plans to have access to a gigawatt of power and hundreds of thousands of GPUs. As part of this, the Stargate project will construct a 360.5 megawatt natural gas power station. And as I said last episode, that one's about the power. This power station is, as far as I can tell, still in the permitting phase. It'll be some time before Stargate breaks ground on the facility, let alone starts to actually generate power. Now, as for funding, things have got a little weird. Both OpenAI and SoftBank have committed to putting in either $18 billion or $19 billion each into the project. I've seen both numbers reported, by the way. Regardless, it's not that big a difference to worry about, especially with, well, the fact that OpenAI does not really have the money. Either way, what's a billion dollars when you don't have anything?
The company is currently trying to raise $40 billion at a $260 billion valuation, with, to quote CNBC, part of the funding expected to be used for OpenAI's commitment to Stargate. Now, I'm old enough to remember when this round was previously rumored to be valuing OpenAI at $340 billion, and also at $300 billion. And SoftBank appears to be taking full responsibility for raising the round, including syndicating as much as $10 billion of the amount, which means that it would include a group of other investors. Nevertheless, it certainly seems SoftBank will provide the majority of the capital, $30 billion, with CNBC reporting that it will be paid out over the next 12 to 24 months, with the first payment coming as soon as spring. SoftBank also has another problem, and this one I think even the least technical of you can understand. They also appear to not have the money. They don't have the money. Not the money to give OpenAI, not the money for Stargate. There are some worrying math issues with this. According to The Information, SoftBank CEO Masayoshi Son is planning to borrow $16 billion to invest in AI and may borrow another $8 billion next year. The following points, by the way, are drawn from The Information's reporting, and I give serious props to Juro Osawa and Cory Weinberg for their work here. I attack The Information sometimes over some of its framing, but they actually do some of the best reporting in tech journalism. Now let's do some maths. SoftBank currently only has $31 billion in cash on its balance sheet as of December 2024. Its net debt, which, despite what you think, does not measure total debt, but rather represents its debt liabilities minus its cash, stands at $29 billion. They plan to use the loan in question to finance part of their investment in OpenAI and their acquisition of chip design firm Ampere.
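Since the net-debt definition and the loan figures invite some napkin maths, here's a minimal sketch using only the numbers quoted above; the implied gross debt figure is my own inference from those numbers, not a reported figure.

```python
# Napkin math on SoftBank's balance sheet, using only figures quoted
# in this episode (all amounts in billions of USD, as of December 2024).
cash = 31        # cash on the balance sheet
net_debt = 29    # net debt = debt liabilities minus cash

# Implied gross debt liabilities (an inference, not a reported figure):
gross_debt = net_debt + cash
print(f"implied gross debt: ${gross_debt}B")

# The planned borrowing: $16B this year, maybe another $8B next year.
planned_loans = 16 + 8
print(f"planned borrowing vs. cash on hand: ${planned_loans}B vs ${cash}B")
```

In other words, the planned borrowing alone is most of the cash SoftBank currently holds.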
According to SoftBank's reported assets, its holdings are worth about $219 billion, or 33.66 trillion yen for those of you who deal in yen, including stock in companies like Alibaba and ARM. On a side note, not every company in SoftBank's portfolio is an ARM or an Alibaba, like, a good one. There are some real stinkers too, and I'm not just talking about the moribund carcass of WeWork. I recommend you look at, and I'll link to this in the episode notes, SoftBank's recent group report, which is linked in the spreadsheet like I just said, which is me reading a script out, a little behind the curtain for you there. In particular, go to page 29 in the report, which lists the 10 largest publicly traded companies in SoftBank's Vision Fund portfolio. Note how all of them, without exception, trade at a significant fraction of their peak market cap. In simpler terms, the 10 biggest companies in SoftBank's flagship tech fund are worth far, far less than their all-time high, and in some cases worth less than one fifth of their all-time high. If SoftBank liquidated its assets, and I admit this is a big if and most likely a worst-case scenario, how big would their losses be? Separately, SoftBank has committed to a joint venture called SB OpenAI Japan and to spend $3 billion a year on OpenAI's tech for the various companies across its group. We'll talk about that later, but doing some napkin maths, here's what SoftBank's agreed to: $18 billion or $19 billion, I really do not know, in funding for the Stargate data center project, $3 billion a year in spend on OpenAI software, and as much as $30 billion in funding for OpenAI paid over 12 to 24 months, according to The Information. And by the way, $25 billion has also been reported. But The Information reports that OpenAI has told investors that SoftBank will provide at least $30 billion of the $40 billion they need. Jesus fucking Christ, can you imagine if this went into something useful?
Anyway, what I'm getting at is that SoftBank has effectively agreed to bankroll the entirety of OpenAI's future, signing up for over $46 billion of investments over the next few years, and does not appear able to do so without selling its current holdings in valuable companies like ARM, or taking on at least $16 billion of debt this year, representing a 55% increase of its current liabilities. Worse still, thanks to this agreement, OpenAI's future, both in its ability to expand its infrastructure, which appears to be entirely contingent on the construction of Stargate with Microsoft pulling out, and its ability to raise funding, is entirely dependent on SoftBank. And SoftBank, in this case, where OpenAI is entirely dependent on SoftBank to live, must borrow money to give to OpenAI, a company that only loses money, on top of that. OpenAI, also on the money-losing front, anticipates it will burn as much as $40 billion a year by 2028, and projects to only turn a profit by the end of the decade after the build-out of Stargate, which, I add, is, like I said, almost entirely dependent on SoftBank, which has to take on debt to fund both OpenAI and the project required to theoretically make OpenAI profitable. How the fuck does this work? How does this work? How does this work? How does this work? How the fuck does this work? OpenAI, a company that spent $9 billion to lose $5 billion in 2024, requires so much money to meet its obligations, both to cover its stupid, ruinous, unprofitable, unsustainable operations and the $18 billion to $19 billion it needs to keep growing, that it has to raise more money than any startup has ever raised in history. 40 billion fucking dollars. With the cast-iron guarantee that it will need even more every goddamn month. Sam Altman has to go to someone and say, my huge, beautiful company is so powerful, but it's weak and sick and frail. I need more money than you have today, and I'll need more money than you'll have tomorrow.
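To make the scale of those commitments concrete, here's the napkin maths as a sketch; the totals are simple sums of the ranges given above, not reported figures.

```python
# SoftBank's commitments to OpenAI, per the figures in this episode
# (billions of USD; the range reflects conflicting reporting).
stargate_low, stargate_high = 18, 19   # Stargate data center funding
funding_round = 30                     # share of OpenAI's $40B raise
annual_software = 3                    # yearly spend on OpenAI's tech

total_low = stargate_low + funding_round     # one-off commitments, low end
total_high = stargate_high + funding_round   # one-off commitments, high end

print(f"one-off commitments: ${total_low}B to ${total_high}B")
print(f"plus ${annual_software}B per year in software spend on top")
```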
My company is so sick and weak, but it's the most powerful company of all time. Sam flipping pisses me off. Imagine if this money went literally anywhere else. You could set it on fire and at least be fucking warm. SoftBank, on top of the $30 billion of funding and $3 billion a year of revenue it's committed to OpenAI itself, also has to cough up $18 billion for Stargate, the data center project that OpenAI will run. And get this: SoftBank will take financial responsibility for $48 billion in cash and $3 billion in revenue, the latter of which, like all OpenAI's offerings, will lose the company money. OpenAI has no path to profitability, guaranteeing it will need more cash. And right now, at the time when it needs it more than it's ever needed it, SoftBank, the only company willing to provide it, and possibly the only company with the money to do so, has proven that it will have to go to great, possibly ruinous lengths to do so. If OpenAI needs $40 billion in 2025, how much will it need in 2026? $50 billion? $100 billion? Where is that money going to come from? While SoftBank might be able to do this once, what happens when OpenAI needs money in six to 12 months? SoftBank made about $15 billion of profit in the last year on about $46 billion of revenue. $3 billion is an absolutely obscene amount to commit to buying OpenAI software annually, especially when some of it is allegedly for access to OpenAI's barely mediocre Deep Research products. As per my previous podcasts and pieces, I do not see how OpenAI survives, and SoftBank's involvement only gives me further concern. While SoftBank could theoretically burn its current holdings to fund OpenAI in perpetuity, its ability to do so is cast into doubt by them having to borrow money from other banks to get both into this funding round and to get Stargate done. OpenAI burned $5 billion in 2024, a number it will likely double in 2025.
And remember, The Information reported that OpenAI was projected to spend $13 billion on compute alone with Microsoft in 2025, and has no path to profitability. SoftBank has already had to borrow to fund this round, and the fact they had to do so suggests its inability to continue supporting OpenAI without accruing further debt. OpenAI, as a result of Microsoft's cuts to data center capacity, now only has one path to expansion, once it runs through whatever remaining build-out Microsoft has planned, that is. And that's Stargate, a project funded by OpenAI's contribution, which it's receiving from SoftBank. And SoftBank is also having to take out loans to meet its share. How does this work exactly? How does this continue? Do you see anyone else stepping up to fund this? Who else has got 30 to 40 billion dollars to shit out every year? While the answer is Saudi Arabia, SoftBank CEO Masayoshi Son recently said that he had, and I quote, not given Saudi ruler Mohammed bin Salman enough return, adding that he still owed him. That's really not the ideal thing you want to say, and after naming MBS, nothing about this suggests that Saudi money is flowing into SoftBank in anywhere near the volume necessary. As for the Emiratis, they're already involved through the MGX fund, and it's unclear how much more they'd be willing to commit. Really, really, though. Really, mate. Buddy. Fellow. How does this, how does this work? In my opinion, this OpenAI SoftBank deal is wildly unsustainable, dependent on SoftBank continuing both to raise debt and to funnel money directly into a company that burns it. Burns it by the billions every year, and is set to only burn more thanks to the arrival of its latest bullshit model. And if OpenAI had a huge breakthrough that would change everything, wouldn't Microsoft want to make sure they were building the data center capacity to support it? Hey, maybe they're not. Something about Mary Poppins. Something about Mary Poppins. Exactly. Oh, man, this is fun. I'm A.J.
Jacobs, and I am an author and a journalist, and I tend to get obsessed with stuff. And my current obsession is puzzles. And that has given birth to my podcast, The Puzzler. Dressing. Dressing. French dressing. Exactly. That's good. Now you can get your daily puzzle nuggets delivered straight to your ears. I thought to myself, I bet I know what this is. And now I definitely know what this is. This is so weird. This is fun. Let's try this one. Our brand new season features special guests like Chuck Bryant, Mayim Bialik, Julie Bowen, Sam Sanders, Joseph Gordon-Levitt and lots more. Listen to The Puzzler every day on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. That's awful, and I should have seen it coming. Our iHeartRadio Music Awards are coming back Monday, March 17th on Fox, starring Bad Bunny, GloRilla, Kenny Chesney, Muni Long, Nelly, and your host, iHeartRadio's LL Cool J. Are you guys ready to have some fun tonight? Plus, iHeart Innovator Award recipient Lady Gaga, iHeart Icon Award recipient Mariah Carey, and iHeart Breakthrough Award recipient Gracie Abrams. Watch live on Fox, Monday, March 17th at 8, 7 Central. Now, perhaps this crazy level of spending would be necessary if OpenAI was still the leader in generative AI and it was still meaningfully improving its capabilities year after year, or two. Year after two years. And I think we all know that that isn't the case. A few weeks ago, OpenAI launched GPT-4.5, its latest model, that, well, this isn't brilliant, Sam Altman says, and I quote, is the first model that feels like talking to a thoughtful person. Which, by the way, is really funny. It's really funny to say it, to be like, yeah, I have spent billions of dollars telling you that this bullshit is like a person, but this one really is. The other ones, not so much.
I did not think that in the past, and it's not obvious what GPT-4.5 does better, or even really what it does, other than Altman saying it is a different kind of intelligence and there's a magic to it that he has not felt before. Wow. Wow. Really, really inspirational stuff, Sammy. Fucking idiot. This was, by the way, in Altman's words, the good news. The bad news was that, and I quote, GPT-4.5 is a giant expensive model, adding that OpenAI was out of GPUs, but proudly declaring that it would add tens of thousands of GPUs in the week following, roll the model out to OpenAI's $20-a-month Plus tier, and that they would be adding hundreds of thousands of GPUs soon. Excited? Well, you shouldn't be. On top of a vague product set and indeterminately high compute cost, GPT-4.5 costs developers an incredible $75 per million input tokens, and those are the prompts and data pushed into the model, and an absolutely astounding $150 per million output tokens. That's the output it creates. And that's roughly 3000% more for input tokens and 1500% more expensive for output tokens than GPT-4o, for results that OpenAI's Andrej Karpathy described as a little bit better and awesome, but also not exactly in ways that are trivial to point to. And one developer described GPT-4.5 to Ars Technica as a lemon when comparing its reported performance to its price. Ars Technica also reported that GPT-4.5 was terrible for coding, relatively speaking, and other tests showed that the model's performance was either slightly better or slightly worse across the board, with, according to Ars Technica, one success metric being that OpenAI found human evaluators preferred GPT-4.5's responses over GPT-4o's in about 57% of interactions. Wow, that's very underwhelming. So just to be crystal clear, the biggest AI company's latest model appears to be even more ruinously expensive than its last one, while providing modest-at-best improvements and performing worse on several benchmarks than competing models. Very good.
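The pricing gap is easier to see with a quick cost comparison. A sketch below; the GPT-4.5 prices are from the episode, while the GPT-4o prices ($2.50 input, $10 output per million tokens) are my assumption, chosen to be consistent with the roughly 3000%/1500% multiples quoted above.

```python
# Comparing GPT-4.5's API pricing to GPT-4o's, per million tokens.
gpt45_in, gpt45_out = 75.00, 150.00   # GPT-4.5, per the episode
gpt4o_in, gpt4o_out = 2.50, 10.00     # GPT-4o, an assumed baseline

print(f"input tokens:  {gpt45_in / gpt4o_in:.0f}x the price")
print(f"output tokens: {gpt45_out / gpt4o_out:.0f}x the price")

# Example: a workload of 1M input tokens plus 1M output tokens.
cost_45 = gpt45_in + gpt45_out
cost_4o = gpt4o_in + gpt4o_out
print(f"per 1M in + 1M out: ${cost_45:.2f} vs ${cost_4o:.2f}")
```

Under those assumed baseline prices, that's 30x on input and 15x on output, which is where the "roughly 3000% and 1500%" framing comes from.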
Despite these piss-poor results, Sam Altman's reaction was to bring in hundreds of thousands of GPUs as a means of exposing as many people as possible to his mediocre, ultra-expensive model. And the best that Altman has to offer is that this is the first time people have been emailing with such passion asking OpenAI to promise to never stop offering a specific model. I am just going to say this: that is a tweet, and it never happened. Or it happened, like, once. This is some girlfriend-in-Canada shit. Sam Altman is washed. When all of this falls apart, remember I said he was washed. Now, remember how I talked about OpenAI's lack of meaningful improvement? As a reminder, GPT-4.5 was meant to be GPT-5, but according to the Wall Street Journal, OpenAI continually failed to make a model advanced enough to justify the enormous cost, with a six-month training run costing $500 million, and GPT-4.5 requiring multiple runs of different sizes. So, yeah, OpenAI spent hundreds of millions of dollars to make this great stuff. And I haven't even mentioned the company's purported agent products. And no, I'm not talking about Operator, which is also dogshit, by the way. OpenAI wants to create tiers of AI agents, with the cheapest costing $2,000 a month and capable of handling administrative tasks, and the most expensive costing $20,000 a month and having PhD-level capabilities. Look, I wrote the script, all right? And I'm gonna be honest, I can't even read that sentence with a straight face. Operator cannot even search TripAdvisor properly. It can't even do a thing that Let Me Google That For You does. And these chunderfucks want to charge $2,000 a month for an agent that does what? Does some sort of. What's it do? Oh, $20,000 for something with PhD-level capabilities. I think all the people with PhDs listening to this have just stood up and gone, I have an idea. And this is insane on many levels, not simply because the base product is undercut by actual human workers in many parts of the world.
And even PhD students are typically only paid $20,000 to $30,000 a year on average, and even people with actual doctorates in industry rarely earn $20,000 a month unless they're working in Silicon Valley or occupying a C-suite job. But forget all about that. What does it mean to have a PhD-level agent? Remember, LLMs are guessing machines. They don't know anything or even understand the concepts behind the words they spit onto a page. No, seriously. Seriously, Sammy, what does it mean? I'm fucking waiting, you damp goblin, you pissant. Ah, maybe I shouldn't just sit here insulting him. Wanker. Anyway, this, by the way, is the company that is about to raise $40 billion, led by a Japanese bank that has to go into debt to fund both their operations and the infrastructure necessary for them to grow any further. Again, as we started with, Microsoft is canceling plans to massively expand its data center capacity right at a time when OpenAI just released its most computationally demanding model ever. How do you reconcile those two things without concluding either that Microsoft expects GPT-4.5 to be a flop, or that it's simply unwilling to continue bankrolling OpenAI's continued growth, or perhaps that it's having doubts about the future of generative AI in general? Maybe. Now, I have been and remain hesitant to call the bubble bursting, because bubbles do not burst. Really, they certainly don't burst in neat little events. Nevertheless, the pale horses I predicted in the past were led by one specific call: that a reduction in capital expenditures by a hyperscaler would be a sign that things were collapsing.
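For scale, the agent pricing versus the pay figures mentioned above works out like this; the annualization is a straightforward multiplication of the episode's own numbers.

```python
# Comparing OpenAI's mooted top-tier agent pricing to the PhD stipend
# figures quoted above (USD).
agent_monthly = 20_000
agent_annual = agent_monthly * 12          # annualized agent price

phd_stipend_low, phd_stipend_high = 20_000, 30_000  # per year

print(f"top-tier agent: ${agent_annual:,} per year")
print(f"that's {agent_annual // phd_stipend_high}x to "
      f"{agent_annual // phd_stipend_low}x a typical PhD stipend")
```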
Microsoft walking away from over a gigawatt of data center plans, equivalent to as much as 14% of its current data center capacity, is a direct sign that it does not believe that growth is there in generative AI, and thus they are not building the infrastructure to support it, and indeed may have overbuilt. Something that, as I've mentioned, Microsoft CEO Satya Nadella directly foreshadowed in his otherwise extremely boring interview with Dwarkesh, a waste of an hour of your life. The entirety of the tech industry and the AI bubble has been built on the assumption that generative AI was the next big growth vehicle for tech. And if Microsoft, the largest purchaser of Nvidia GPUs and the most aggressive builder of AI infrastructure, is reducing capacity, it heavily suggests that the growth is not there. Microsoft has, by the look of things, effectively given up on further data center expansion, at least at the breakneck pace it once promised. And that suggests that generative AI may not even be a thing in a few years, definitely not at the scale it is right now. AI boosters will email me and they'll say there's something that I don't know, that in fact Microsoft has some greater strategy, some efficiency play. But answer me this: why is Microsoft canceling over a gigawatt of data center expansion? And again, this is the most conservative estimate. The realistic number is much, much higher. Do you think it's because it expects there to be this dramatic demand for its AI services? Do you think it's reducing supply because of all the demand? Now, you might think that this is an efficiency play. They're playing with DeepSeek, right? No, sorry, that doesn't matter. Even if DeepSeek was this magical efficiency play, which it may or may not be, and I actually think it is more efficient, like, that's true, but we actually don't know if it's profitable. Even then, they've been talking about not having the capacity to deal with demand.
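The capacity maths above implies a rough total worth spelling out; the implied figure is an inference from the "over a gigawatt" and "as much as 14%" numbers quoted, not a reported one.

```python
# If over a gigawatt of cancellations equals as much as 14% of
# Microsoft's current data center capacity, the implied total is:
canceled_gw = 1.0      # "over a gigawatt", the conservative floor
share = 0.14           # "as much as 14%" of current capacity

implied_total_gw = canceled_gw / share
print(f"implied current capacity: at least {implied_total_gw:.1f} GW")
```

So on the episode's own figures, Microsoft's current footprint is roughly seven gigawatts, and the cancellations carve out about a seventh of it at minimum.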
They've been talking about how incredible this is. They've been talking about how big this is going to be. This sounds like they don't think it's going to be big either. It's going to big. They're going to get my ass in the comments on that one. Now, one might argue that Microsoft's reduction in capacity buildout is just a sign that OpenAI is moving its compute elsewhere. Maybe that's true, if Stargate ever gets built, which I question. Anyway, here are some questions to ask. Microsoft still sells access to OpenAI's API through Azure. Does it not see the growth in that product? Do they not see it? Why are they not expanding? Is the growth not there? And Microsoft still, one would assume, makes money off of OpenAI's compute expenses, right? Or is that not the case due to the vast 75% discount OpenAI gets on using its services? I have been told that it's very close to the wire by sources, but I can't say more. But you'd have to just look at the fact that they do actually give that discount. That was reported by The Information. Look, look, look, look. Microsoft making such a material pullback on data center expansion suggests that the growth in generative AI products, both those run on Microsoft servers and those sold as part of Microsoft's products, does not have the revolutionary growth trajectory that both CFO Amy Hood and CEO Satya Nadella have been claiming. And this is all deeply concerning, while also calling into question the viability of generative AI as a growth vehicle for any hyperscaler. If I am correct, Microsoft is walking away not just from expansion of its current data center operations, but from generative AI writ large. I actually believe it will continue selling this unprofitable, unsustainable software, because the capacity it has right now is more than sufficient to deal with this incredible lack of demand.
It's time for investors and the general public to begin demanding tangible, direct numbers on the revenue and profits related to generative AI, as it is becoming increasingly obvious that the revenues are small and the profits are non-existent. A gigawatt of capacity is huge, and walking away from that much capacity is a direct signal that Microsoft's long-term plans do not include needing a great deal of compute. One counter could be that it's waiting for more of the specialized Nvidia GPUs to arrive, to which the response is: Microsoft would still want to build the capacity so it has somewhere to fucking put them. Again, these facilities take anywhere between three and six years to build. Do you really think Blackwell will be so delayed that they won't arrive until, what, 2028? What are you talking about? If I'm even alive then. Anyway, one counter could be that there isn't the power necessary to power these data centers. And that isn't the case, but let me humor the idea. The suggestion is that Microsoft is currently changing its entire data center strategy so significantly that it now has to walk away from over a gigawatt's worth of statements of intent across the country and move to different places because there's more power there. Did they make, like, a gigawatt's worth of mistakes? Okay, another counter is that I'm only talking about leases and not purchases. In that case, I'll refer you to this article from CBRE, which is linked in the spreadsheet for this episode, which includes an elucidating read on how hyperscalers actually invest in data center infrastructure. Leases tend to account for the majority of spending simply because it's less risky. A specialist takes care of the tough stuff: location, buying land, handling construction. And the hyperscaler isn't left trying to figure out what to do with the facility when it reaches the end of its useful life cycle. I also expect someone to chime in and say, well, that's just Microsoft. What about Google or Amazon? Get out of my house. I'd counter and say that these companies are comparatively less exposed to generative AI. Amazon has invested $8 billion in Anthropic, which is a bit more than half of what Microsoft has reportedly invested in OpenAI, which amounted to about $14 billion as of December. When you consider the discounted Azure rates Microsoft offers to OpenAI, too, the real number is probably much, much higher. Google also has $3 billion in Anthropic, in addition to its own AI services like Gemini.
OpenAI, as I noted in my last newsletter, is pretty much the only real generative AI company with market share and significant revenue, although I once again remind you that revenue is not the same thing as profit. And this is true across mobile, web, and likely its APIs too. Similarly, nobody has quite pushed generative AI as aggressively as Microsoft, which has introduced it to an overwhelming number of its paid products, hiking prices for customers as it goes. I suppose you could say that Google has pushed generative AI into its Workspace products as well as its search products, but the scale and aggression of Microsoft's push feels different. That, and as I've mentioned repeatedly, they are the largest purchaser of Nvidia GPUs, buying nearly twice as many (485,000) as its nearest competitor, Meta, which bought 224,000 in 2024. Ultimately, Microsoft has positioned itself at the heart of generative AI, both through its own strategic product decisions and its partnership with OpenAI. And the fact that it's now scaling back on the investment required to maintain that momentum is, I believe, pretty significant. I also recognize that all of this is a big juicy steak for someone that some people call a pig or an animal or a monster or an AI cynic. Look, I've pored over this data repeatedly and done all that I can to find less convenient or satisfying conclusions. Letters of intent are, I know, part of my argument. These are serious documents, by the way, but they're not always legally binding, and neither are statements of qualifications. But as TD Cowen pointed out, SOQs are generally treated as the green light to start working on construction, even though a formal lease agreement hasn't yet been signed. And to be clear, Microsoft let an indeterminate amount of SOQs go. Nevertheless, it's incredibly significant that Microsoft is letting so many, the equivalent of as much as 14% of its current data center capacity at a bare minimum, on top of the couple hundred.
That's at least 200 megawatts of data center leases canceled. I do not know why nobody else has done this analysis. I've now read every single piece about the TD Cowen report from every single outlet that covered it. I've read some weird SEO stuff. It's not good, and I'm just kind of astounded by the lack of curiosity as to what one gigawatt-plus means in a report that meaningfully moved markets, as I'm equally astonished by the lack of curiosity to contextualize most tech news. It's as if nobody wants to think about this too hard, like nobody wants to stop the party. Nobody wants to accept what's been staring us in the face since last year, if not earlier. And when given the most egregious, glaring evidence, people still must find ways to dismiss it or ignore it rather than give it the energy it deserves. Far more resources were dedicated to finding ways to gussy up the releases of Anthropic's Claude Sonnet 3.7 or OpenAI's GPT-4.5 than were given to the report from an investment bank's research wing that the largest spender in generative AI, the largest backer (for now) of OpenAI at least, is massively reducing its expenditures on the data centers required for the industry, and for OpenAI, a company ostensibly worth $157 billion, to expand. Microsoft's stake in OpenAI is a bit fuzzy, as OpenAI doesn't issue traditional equity, and there's a likelihood it may be diluted as more money comes in. It reportedly owned 49% in 2023, though. Assuming that's still the case, are we to believe that Microsoft is willing to strangle an asset worth at least $75 billion, several times more than its investment to date, by canceling a few leases? How many more alarms need to go off before people recognize that something bad is happening? Why is tangible, meaningful evidence that we're in a bubble, and possibly a sign that it might be popping, less interesting than the fact that Claude Sonnet 3.7 can think longer? 
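To make the stake arithmetic concrete, here's a back-of-envelope sketch in Python. The inputs are the reported figures cited above (a 49% stake, a $157 billion valuation, and roughly $14 billion invested); everything else is just multiplication, not a claim about OpenAI's actual, non-traditional equity structure:

```python
# Back-of-envelope check on the Microsoft/OpenAI stake figures cited above.
# Inputs are reported numbers, not audited facts.
valuation_usd_bn = 157    # OpenAI's reported valuation, in billions of dollars
reported_stake = 0.49     # Microsoft's reportedly ~49% stake as of 2023
invested_usd_bn = 14      # Microsoft's reported investment, in billions

# Implied value of Microsoft's stake, and the multiple on its investment
stake_value_bn = valuation_usd_bn * reported_stake
multiple_on_investment = stake_value_bn / invested_usd_bn

print(f"Stake worth ~${stake_value_bn:.0f}bn, ~{multiple_on_investment:.1f}x the investment")
# prints: Stake worth ~$77bn, ~5.5x the investment
```

Which is where the "at least $75 billion, several times more than its investment" framing comes from.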
And if you're listening to this and you think I'm talking about you, I fucking am. I am sick of this shit. I am absolutely sick of this shit. I'm sick of reading articles like that when far more important and scary and damning things are happening, things that actually matter. I don't care if Wario Amodei allowed you to make his model compute for longer. It doesn't matter compared to this. It does not matter compared to a gigawatt or more of capacity going away, and I'm sick of this. I'm sick of having to be the guy. Sometimes, I do not say these things to be right. I don't want to be a cynic or a hater. I say all this because I am trying to understand what's going on, and if I do not, I will actually go insane. Every time I sit down to write my newsletter or record this podcast, I'm doing it because I'm trying to understand what's happening and how I feel about it, and these are the only terms that dictate my creativity. It just happens that I've stared at the tech industry for too long and now I can't look away. Perhaps it's driving me mad, or maybe I'm getting smarter, or maybe it's an Arnold Palmer of the two. But what comes out of my work is not driven by wanting to go viral, or having a hot take, or being a renowned skeptic, or being right, or being anything, because such things suggest that I would do this differently if three people listened versus, well, I actually can't say the number of people that do listen (I have rules), but 57,000 people subscribe to my newsletter; extrapolate from there. I would do the same goddamn thing. And in fact, if you look back at my work, I've done the same thing from the beginning. Now, I'm not saying that Microsoft is dying or making any grandiose claims about what happens next. What I am describing, however, is the material contraction of the largest investor in data centers, according to TD Cowen, potentially at a scale that suggests Microsoft has meaningfully reduced its interest in further expansion of data centers writ large. 
This is a deeply concerning move, one that suggests that Microsoft does not see the demand to sustain the current expansion, which has greater ramifications beyond generative AI, because it suggests there isn't any other reason for it to expand the means of delivering software. What has Satya Nadella seen? What is Microsoft CFO Amy Hood doing? What is the plan here, and really, what's the plan with OpenAI? SoftBank has committed to over $40 billion of costs that it currently cannot afford, taking on as much as $24 billion in debt over the next year to help sustain one more funding round and the construction of data centers for OpenAI, a company that loses money on literally every single customer. To survive, OpenAI must continue raising more money than any startup has ever raised before, and it is only able to do so from SoftBank, which in turn must take on debt. OpenAI burned $5 billion in 2024, will likely burn $11 billion or more in 2025, and will continue burning money in perpetuity. And to scale further, it will require funding for a data center project financed partially by a company that's taking on debt to fund it. When you put this all together, all I can see is calamity. Generative AI does not have meaningful mass market use cases. And while ChatGPT may have 400 million weekly active users, there doesn't appear to be meaningful consumer adoption outside of ChatGPT, mostly because almost all AI media coverage inevitably ends up marketing one company: OpenAI. Argue with me all you want about your personal experiences with ChatGPT or how you personally found it useful. I don't care. I stopped listening a while ago. Your points never prove anything; they don't make it a product with mass market utility or enterprise utility, or worth the vast sums of money being plowed into it. Worse still, there doesn't appear to be any meaningful revenue. 
As discussed in my last episode, Microsoft claims $13 billion in annual recurring revenue, not profit, on all AI products combined, against over $200 billion of capital expenditures since 2023. And no other hyperscaler is willing to break out any AI revenue at all. Not Amazon, not Meta, not Google. Nobody. Does that not worry anyone? Is anyone listening to this who actually, like, deals with the economy? Can you please listen to me? Because I don't know what we do. I actually don't know. I have no idea. Where's the growth? Where's the money? Where's the money, Sammy? Where is it? Where's my money, honey? Give me the money. Sam Altman, where's my money? Why is Microsoft canceling a gigawatt of data center capacity while telling everybody that it didn't have enough data centers to handle demand for its AI products? Hmm. Well, I suppose there's one way of looking at it: Microsoft may currently have a capacity issue, but soon won't, meaning that further expansion is unnecessary. If that's the case, it'll be interesting to see whether its peers follow suit. Either way, look, I see nothing that suggests there's future growth in generative AI. In fact, I think it's time for everybody to seriously consider that big tech burned billions of dollars on something that nobody ever wanted or would pay for. If you listen to this and scoff, I don't know, what should I have talked about? Anthropic adding a sliding thinking bar to a model? GPT-4.5? Who gives a shit? Can you even tell me what it does differently to GPT-4o? Can you explain to me why it matters? Or are you more interested in nakedly captured imbeciles like Ethan Mollick sweatily oinking about how powerful the latest model is? It's like no one's interested in the things happening in the real world. It's like nobody's thinking about the silicon and the infrastructure and how things actually get built. Wake the fuck up, everybody. Things are on fire. Thank you for listening to Better Offline. 
The editor and composer of the Better Offline theme song is Matt Osowski. You can check out more of his music and audio projects at mattosowski.com, that's M-A-T-T-O-S-O-W-S-K-I dot com. You can email me at ez@betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter. I also really recommend you go to chat.wheresyoured.at to visit the Discord, and go to r/BetterOffline to check out our Reddit. Thank you so much for listening. Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website coolzonemedia.com or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. Ever wonder what it would be like to be mentored by today's top business leaders? My podcast This Is Working can help with that. Here's some advice from Jamie Dimon, the CEO of JPMorgan Chase, on standing out from the leadership crowd: develop your EQ. A lot of people have plenty of brains, but EQ is: do you trust me? Do I communicate well? Develop the team, develop the people. Create a system of trust, and it works over time. I'm Dan Roth, LinkedIn's editor in chief. On my podcast This Is Working, leaders share strategies for success. Listen on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
