Ed Zitron (2:46)
Cool Zone Media. Hi, I'm Ed Zitron, and this is the second part of this week's Better Offline. As ever, go to the episode notes, buy some merch, get the challenge coin. Subscribe to my newsletter if you like it, go premium. Either way, you've got this episode for free if you can stand the ads, which many of you can. Anyway, the thrust of this two-parter is that the AI trade is going to, as you've probably guessed by now, end badly. And as I'll also explain, I believe we're now set up for potentially economy- and market-wrecking consequences of this hysterical investment bubble. Today we're going to go beyond just startups and into how the poison of the bubble has crept into our economy, and how dangerous things are getting as a result. In the first installment, we talked about Cursor, the AI coding startup that, despite having no moat and zero sustainability, has somehow earned a valuation of $10 billion, and then asked why none of these generative AI companies are being acquired, at least not in the traditional sense. And no, I'm not including the acquisitions where a big company snags the talent and leaves the barely breathing bodies of startups in its wake. This matters because if these AI companies can't get an exit, go public, or get bought, it raises serious questions about how any of this ends. Cursor is the best example, and the fact that Anthropic is hopelessly dependent on Cursor for revenue, despite doing everything it can to kill it, makes the entire situation feel a little bit weird. Cursor's paths to viability seem, frankly, to be nonexistent, and the collapse of Cursor will inevitably result in a vast chunk of Anthropic's revenues going up in smoke. Which, again, will also make Anthropic's collapse seem inevitable, because how can it fundraise at a valuation over its previous one when its biggest customer just died? So much of what I'm describing is structural, and few companies have bigger structural obstacles to survival than OpenAI. 
As a reminder, OpenAI appears to have burned at least $10 billion in the last two months. It has just raised another $8.3 billion after raising $10 billion in June, according to the New York Times, and intends to receive around $22.5 billion by the end of the year from SoftBank. That is assuming it becomes a for-profit entity by the end of the year; if that doesn't happen, the total gets cut to $20 billion, meaning that SoftBank would only be on the hook for a further $1.7 billion. I am repeating myself, and I need you to really get this: OpenAI just got $10 billion in June, just in June, and had to raise another $8.3 billion in August. That is an unbelievable cash burn, one dwarfing any startup in history, rivaled only by xAI, makers of Grok, the racist LLM, which loses over a billion dollars a month. I should also be clear that if OpenAI does not convert to a for-profit, there is no path forward to continue raising capital. OpenAI must have the promise of an IPO. It must go public, because at a valuation of $300 billion or more, OpenAI can no longer be acquired; nobody has that much money. And if, let's be real, nobody actually believes OpenAI is worth that much, the only way to prove that anybody does is to take OpenAI public, and that will be impossible if it cannot convert. And ironically, SoftBank's large and late-stage participation makes exits even harder, as early investors will see their holdings diluted as a percentage of total equity, or whatever the hell we're calling it, because these aren't real equity shares, they're profit participation units. OpenAI is a non-profit; you can't issue stock in one, while a normal company could just issue equity and deal with the dilution that way. OpenAI's structure necessitates a negotiation where companies can obstruct the entire process if they see fit. Speaking of companies doing that, let's talk about Microsoft. 
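The round-by-round arithmetic here is worth checking explicitly. A quick back-of-envelope sketch, using only the figures cited above (the variable names are mine, not anyone's official accounting):

```python
# Back-of-envelope check of the reported OpenAI fundraising figures,
# all in billions of dollars, as cited in the episode (via the NYT).
raised_june = 10.0       # June round
raised_august = 8.3      # August round
raised_so_far = raised_june + raised_august   # 18.3 so far

# If the for-profit conversion fails, the total reportedly gets capped
# at $20bn, leaving SoftBank on the hook for only the difference.
capped_total = 20.0
softbank_remaining_if_capped = capped_total - raised_so_far   # ~1.7

# If conversion succeeds, the target is ~$22.5bn by year end,
# implying this much still to come.
expected_total = 22.5
remaining_if_converted = expected_total - raised_so_far       # ~4.2
```

So the oddly specific "$1.7 billion" figure falls straight out of the $20 billion cap minus the $18.3 billion already raised.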
As I asked in my premium newsletter a few weeks ago, what if Microsoft does not want OpenAI to convert? It owns all the IP, it owns all the access to OpenAI's research, and it already runs most of OpenAI's infrastructure, while in a best-case scenario it would end up owning a massive chunk of the biggest tech startup of all time. And I'm talking about equity, not OpenAI's current profit-sharing units. Microsoft might also believe that it stands to gain more by letting OpenAI die and assuming its role in the AI ecosystem. But let's assume that OpenAI converts and now has to continue raising money at a rate that will allegedly require it to raise only $17 billion in 2027. That number doesn't make any sense, considering that OpenAI already had to bring forward its $8.3 billion fundraise by at least three months, but let's stick with that idea. OpenAI believes it will somehow be profitable by 2030, and even if we assume that means it intends to burn over $100 billion to get there, is the plan to take OpenAI public and dump a toxic asset onto the public markets, where it will just flounder and convulse and die for everybody to see? Kind of, kind of sucks. Can you imagine OpenAI's S-1? How well do you think this company would do when it got a true financial audit from a major accounting firm? And if you claim it already has one, I simply don't believe you. This burn rate does not seem like something an accountancy firm would be good with. And if you want to know what all that looks like, Google WeWork, which went from tech industry darling to joke in a matter of days, in part because it was forced to disclose how bad things actually were when it filed to go public. No, really. I'll link to an article in the episode notes called "The Strangest and Most Alarming Things in WeWork's IPO Filing." It's a bad sign if your IPO filing is described as strange and alarming. Those are not even terms of art. They're just worrisome. 
With that in mind, I feel the same way about Anthropic. Nobody is buying this company at $170 billion, which is the valuation it's currently raising at. And thus the only way to access liquidity would be to take Anthropic public and show the world a company that made $72 million in January and more than $400 million in July 2025, yet also plans to lose $3 billion or more, and then let the market decide on a fair price. Especially if the S-1 includes a bunch of strange and alarming stuff that raises questions about whether Anthropic's value is actually justified. I'm... I'm stuttering here because everyone's acting like these are rational things. Like, yeah, OpenAI burned $5 billion in 2024 and will probably burn 10 or more in 2025, if not 15 or 20, and that's fine. This is all good. They'll just go public one day. Will they really? Will they? No, really. What's the plan? What's the plan here? Because the arguments against my work always come down to: costs will go down, and these products will become essential. Outside of ChatGPT, there's no real proof that these products are anything remotely essential. And I'd argue there's very little about ChatGPT that Microsoft couldn't provide, with rate limits, via Copilot. I'd also argue that essential is a very subjective term. Essential in the sense that some people use it as search doesn't mean it's useful for enterprises or the majority of people, or that it could not be replaced by anything else. And I guess ChatGPT somehow makes $1 billion a month in revenue selling access to premium versions of ChatGPT, though I'm not 100% sure how. Assuming it has 20 million customers paying 20 bucks a month, that's $400 million a month. Add 5 million business customers paying an average of $100 a month each, and that's another $500 million, for $900 million total. And is that, is that really indicative? Is the average really that good? Are there that many people paying 35 or 50 or 200 bucks a month versus 100? 
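To make the revenue math above concrete, here's a minimal sketch of one way ChatGPT could plausibly approach $1 billion a month. The subscriber counts and the $100 blended business average are hypothetical illustrations from the episode, not disclosed figures:

```python
# Hypothetical ChatGPT revenue model, in $ millions per month.
# All inputs are illustrative assumptions, not OpenAI disclosures.
consumer_subs = 20_000_000
consumer_price = 20                 # $/month for the Plus tier
consumer_rev = consumer_subs * consumer_price / 1e6    # 400.0

business_subs = 5_000_000
business_avg_price = 100            # assumed blended $/month average
business_rev = business_subs * business_avg_price / 1e6  # 500.0

total_rev = consumer_rev + business_rev   # 900.0, still short of $1bn
```

Even with a generous $100 average across business seats, this sketch lands at $900 million a month, which is why the $1 billion figure invites the question of what the real per-seat average and tier mix actually look like.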
I mean, most don't pay 100 bucks; it's either 35, 50, or 200. OpenAI doesn't break out the actual revenues behind these numbers for a reason, and I believe that reason is they don't look as good. I also, by the way, when I was fucking around trying to play with GPT-5, got offered a Team subscription for a dollar for one month. How? How many of those business customers got a month for a dollar? How long does OpenAI keep a customer? What is OpenAI's churn like? And does it really, as I asked in my newsletter "How Much Money Do OpenAI and Anthropic Make," end the year making more than Spotify, at $1.5 billion a month? We don't know. And OpenAI, much like Anthropic, has never shared actual revenues, other than in June when it claimed $10 billion in annualized revenue, choosing instead to leak to the media and hope to obfuscate the actual amounts of money being spent on its services. Anyway, long story short: these companies are unprofitable with no end in sight, don't even make that much money in most cases, are valued at more than anybody would ever buy them for, and don't even have that much in the way of valuable IP. And on top of that, the two biggest players burn billions of dollars more than they make serving these companies, which also burn billions of dollars. I wish someone would make this make sense, because it doesn't. "But Ed, the government will give them more money forever. They'll bail them out." I'm tired of this fucking point. I'm so tired of hearing about bailouts. I'm so tired of people saying they'll bail them out. I get that you're scared. I get that things are grim. Things are absolutely fucking grim. But engage with reality here. You can't bail this out. You can't bail it out. Even if this were gonna happen, and it will not, who would they give the money to, and for how long? Would they give it to all AI startups? Is every startup gonna get a Paycheck Protection Program for generative AI? 
How would that play out in rural red districts, where big tech has never been popular, and which are being hit with both massive cuts to welfare and the shock waves of a trade war that has made American agricultural exports, which previously went to China by the shipload, less appealing worldwide? So, in this scenario, they bail out OpenAI, then stuff it full of government contracts to the tune of what, $15 billion a year? And just to be clear, that's the low end of what this would take. And they'd have to keep doing it forever, until Sam Altman could build enough data centers to, what, keep burning billions of dollars? There's no plan to make this profitable. Say this happens. It won't, but say it does. Now what? America has a bullshit generative AI company attached to the state, one that doesn't really innovate and doesn't really matter in any meaningful way, except that it owns a bunch of data centers. Which it doesn't yet, by the way: Microsoft owns all its infrastructure, and the CoreWeave stuff is barely happening. Fucking hell. I just don't think this happens. I think it's a silly idea, and the most likely outcome would be that Microsoft unhinges its jaw and swallows OpenAI and its customers whole. Hey, hey, hey, did you know, by the way, that Microsoft's data center construction is down year over year, and that it's signed basically no new data center leases? I wonder why it isn't building these new data centers for OpenAI. Could be anything. By the way, Stargate isn't saving them either. As I've written recently, Stargate doesn't actually exist beyond the media hype it generated, and every single person who wrote big articles about Stargate being real and Stargate being connected to Abilene, Texas should be fucking ashamed of themselves. It's deeply unprofessional. And that goes for the ones that merely hinted at the involvement too. 
There are several articles that hint, without directly saying, that Abilene, Texas was part of the SoftBank Stargate thing. You can't wriggle out of this one. When the time comes, all will be aired, by which I mean I'm going to be pointing at the articles aggressively. But we have other stuff to do. Anyway, by the way, does the government do this for everybody? This is a very important question: is everyone getting a bailout? Is it just Anthropic and OpenAI? What about Google and Microsoft? They're going to burn, come on, a bunch of money too. Why do they get the money and no one else? What other industries are going to turn around and go, wait, wait, wait, why do I have to fucking make money? What's the point of this? Because AI is important? You're going to get a bunch of fraud if this happens. Which it's not going to, just to be clear. And what is the limit, by the way? Do they subsidize all compute for companies like Cursor? To what end? Where's the limit? How does it end? What is the bailout? What are you bailing out? Where does the bailout go? These are all questions that would need answering. You can sit there and say, oh, the government's corrupt, oh, I feel this way and that way. Here's the thing: TARP, the Troubled Asset Relief Program that was used to bail out the banks and hedge funds during the financial crisis, at least made more sense, because if a bank fails, people lose actual money. If this stuff fails, what do people lose? What is the service that people lose access to? ChatGPT. That's it. I don't even think that Donald Trump could explain what ChatGPT is. And even then, he already has people to give him money. These people don't have anything to offer other than a vague sense that AI is important. I just, I just don't buy it. And if you're worried, get hopeful, I don't know; I'm hopeful that something's going to explode within this terrible fucking industry. 
And I'm kind of hopeful because I think it's time to ask a basic question, which is: what if generative AI just is not profitable? It's a question we have to seriously consider at this point, because its ramifications are significant. If I'm honest, I think the future of large language models is largely client-side, on egregiously expensive personal setups for enthusiasts, and in a handful of niche enterprise roles. Large language models do not scale profitably, and their functionality is not significant enough to justify the costs of running them, yet these companies immediately applied the old economics: the idea that you would pay a monthly fee for relatively unlimited access. Companies like OpenAI and Anthropic immediately trained users to use their products in a way that was antithetical to their costs. Then again, had these models been served differently, had the companies been mindful of the cost and charged what the models actually cost, there would likely have been no way to get this far. If OpenAI is making billions of dollars a month, or a billion dollars a month, or whatever the fuck it is, it's possibly losing that much or more after revenue. And that's the money it can get selling the product in a form that can never turn profitable. If OpenAI charged in line with its actual costs, would it even be able to justify a free version of ChatGPT beyond a few requests a month? The revenue you see today is what people are willing to pay for a product that loses money, and I cannot imagine they would pay as much if the companies in question charged their costs. If I'm wrong, Cursor will be just fine. And that's assuming that Cursor's current hobbled form is even profitable, which it has not said it is. So you've got an entire industry of companies that struggle to do anything other than lose a lot of money. Great. 
And now we have a massive expansion of data centers, the likes of which we've never seen, all to capture demand for a product that nobody makes much money selling. This naturally leads to an important question: how do the people building these data centers actually make money?