Ed Zitron (32:58)
To colleagues, Jan Marsalek was a model of German corporate success. It seemed so damn simple for him. Also, it turned out, a fraudster. Where does the money come from? That was something that I was always questioning myself. But what if I told you that was the least interesting thing about him? His secret office was less than 500 meters down the road. I often ask myself now, did I know the true Jan at all? Certain things in my life since then have gone terribly wrong. I don't know if they followed me to my home. It looks like the ingredients of a really grand spy story, because this ties together the Cold War with the new one. Listen to Hot Money: Agent of Chaos on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

Hey, that reminds me, I got another problem. I got another problem here, because I think that there is another reason why the cycles kind of keep repeating. You get a company that grows, and then they kind of go nowhere, because, well, the company doesn't really seem to have a total addressable market much bigger than $100 million ARR. And I think it's a little simple. It's quite simple, in fact: there really are no unique generative AI companies, and building a moat on top of LLMs is near impossible.

If you look... man, am I going to get some emails about this? But bring them on. If you look at what generative AI companies do, and note that the following is not a quality barometer, it's probably one of the following things. There's the chatbot one, that you either ask questions of or talk to. This includes customer service bots. Searching, summarizing or comparing documents, with increased amounts of complexity of documents or quantity of documents to be compared. This includes being able to ask questions of documents. Web search. Deep research, meaning long-form web search that generates a document where some parts of it will inevitably be hallucinated or derived from low-quality sources. Generating text, images, voice, or in some rare cases, video.
Using AI to generate... AI, I mean, to write, edit or maintain code. Transcription, translation, or photo and video editing. Every single generative AI company that isn't OpenAI or Anthropic, and honestly, kind of those two as well, does one or a few of these things, and I mean every one of them. And it's because every single generative AI company uses large language models, which have inherent limits on what they can do. LLMs can generate, they can search, they can kind of edit, they can sometimes transcribe accurately, and they can sometimes translate, well, much less accurately, I guess.

Within weeks of Cursor's change to its services, Amazon and ByteDance released competitors that for the most part do exactly the same thing. Sure, there's a few differences in how they're designed, but design is not a moat, especially in a high-cost, negative-profit business where your only way of growing is to offer a product you can't sustain. The only other moat you can build is the services you provide, which, when your services are dependent on a large language model, are dependent on the model developer, who, in the case of OpenAI and Anthropic, could simply clone your startup, because the only valuable intellectual property is the models, and those models are theirs.

You may say, well, nobody else has any ideas either. To which I say, I fully agree. My rot-com bubble thesis suggests that we're all out of hypergrowth ideas, and yeah, I think we're out of ideas related to large language models too. At this point I think it's fair to ask: are there any good businesses you can build on top of generative AI or large language models? I don't mean adding AI-related features to an existing product, I mean an AI company that actually sells a product that people buy at scale that isn't called ChatGPT or Claude. In previous tech booms, companies would make their own models, their own infrastructure, or the things that make them distinct from other companies.
But the generative AI boom effectively changes that by making everybody build stuff on top of somebody else's models, because training your own models is both extremely expensive and requires vast amounts of infrastructure and just pure power. As a result, much of this boom is about a few companies, really two, if we're honest, getting other companies to try and build functional software for them. And these companies, OpenAI and Anthropic, are their customers' weak point, in a relationship that veers from symbiotic to parasitic at a moment's notice.

I cannot stress enough how bad OpenAI and Anthropic are for their business customers. Their models are popular, by which I mean their customers' customers will expect access to them, meaning that OpenAI and Anthropic can, as they did to Cursor, arbitrarily change pricing, service availability and functionality based on how they feel that day, or whether they need to pump their annualized revenue for investors. Don't believe me? Anthropic cut off access to AI coding platform Windsurf because it looked like they might get acquired by OpenAI. They never were. They just harmed that business. They just cut a hole in them. Why? Because they might touch another business. The most anti-competitive shit in the world, and everyone's sat there clapping like a fucking seal. Disgusting, even by big tech standards. This fucking sucks, and these companies will do it again.

But you know what? Let's talk about the actual uses of generative AI. Because the limited number of use cases are because large language models are all really, really similar. Because all large language models require more data than anyone has ever needed, including like four times the amount of data on the Internet, they all basically have to use the same thing, either taken from the Internet or bought from one of the few companies like Scale, Surge, Turing, Together, or whoever.
While they can get customized data or do customized training and reinforcement learning, these models are all transformer-based and they all function similarly, and the only way to make them different is by training them, which doesn't make them that much different, just better at things they already do. And good lord, is generative AI so ungodly expensive. And the training is as well, by the way. They have to pay real humans as well, which they hate doing, and even when they're paying outsourced labor in Kenya at $2 a pop, they're still losing a ton of money. It's really crazy, actually, how badly built all of this is. And I already mentioned OpenAI and Anthropic's costs, as well as Perplexity's $50 million bill in a year to Anthropic, Amazon and OpenAI off of a measly $34 million in revenue. These companies cost too much to run, and their functionality doesn't make enough money to make them make sense.

And the problem isn't just the pricing, but how unpredictable it is. As Matt Ashare wrote for CIO Dive last year, generative AI makes a lot of companies' lives difficult through the massive spikes in costs that come from their power users, with few ways to mitigate those costs. One of the ways that companies manage their cloud bills is by having some degree of predictability, which is difficult to do with the constant slew of new models and demands for new products to go with them, especially when said models can and do often cost more with subsequent iterations, not necessarily for much return. Except, if you're a company, like a coding company, your customers are going to actually ask you for the new models. As a result, it's hard for AI companies to actually budget.

But Ed, what's that? Ed, what about agents? Aren't they the thing that'll eventually make the insane, broken calculus behind generative AI actually work? What's that accent, anyway? Anyway, let me tell you about agents.
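To make the cost-spike problem concrete, here's a minimal back-of-the-envelope sketch in Python. Every number in it is hypothetical, invented for illustration, not taken from this episode or from any real vendor's price list; the point is just how a single power user on a flat-fee subscription can dwarf the cost of everyone else combined.

```python
# Hypothetical per-token API prices, for illustration only.
PRICE_PER_1K_INPUT = 0.003   # dollars per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.015  # dollars per 1,000 output tokens

def monthly_cost(users):
    """Sum each user's token usage into a dollar bill for the month."""
    total = 0.0
    for u in users:
        total += (u["input_tokens"] / 1000) * PRICE_PER_1K_INPUT
        total += (u["output_tokens"] / 1000) * PRICE_PER_1K_OUTPUT
    return total

# A flat-fee product with 99 light users...
typical = [{"input_tokens": 200_000, "output_tokens": 50_000}] * 99
# ...and one power user who hammers the model all month.
power = [{"input_tokens": 50_000_000, "output_tokens": 20_000_000}]

print(round(monthly_cost(typical), 2))          # roughly $134 for all 99 light users
print(round(monthly_cost(typical + power), 2))  # the one power user adds roughly $450
```

With these made-up rates, the single power user costs more than three times the other ninety-nine combined, and since the subscription fee is flat, there's no way to pass that spike on.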
The term agent is one of the most egregious acts of fraud I've seen in my entire career writing about this crap, and that includes the metaverse. When you hear the word agent, you are meant to think of an autonomous AI that can go and do stuff without oversight, replacing someone's job in the process. And companies have been pushing the boundaries of good taste and financial crimes in pursuit of them. Most egregious of them is Salesforce's Agentforce, which lets you "deploy AI agents at scale," that's a quote, and "brings digital labor to every employee, department and business process," another quote, from Salesforce's website. These are two blatant lies. Agentforce is a goddamn chatbot program. It's a platform for launching chatbots. They can sometimes plug into APIs that allow them to access other information, but they're neither autonomous nor agents by any reasonable definition. Not only does Salesforce not actually sell agents, its own research shows that its agents, and agents in general, only achieve around a 58% success rate on single-step tasks. And I'm going to quote The Register here: this means tasks that can be completed in a single step "without needing follow-up actions or more information." On multi-step tasks, so, you know, most tasks, they succeed a depressing 35% of the time.

Last week, OpenAI announced its own ChatGPT agent that can allegedly go and do tasks on a virtual computer. In its own demo, the agent took 21 minutes or so to spit out a plan for a wedding with destinations, a vague calendar and some suit options, and then showed a pre-prepared demo of the agent preparing an itinerary of how to visit every major league ballpark. Baseball, for the non-Americans out there. In this example's case, the agent took 23 minutes and produced arguably the most confusing map I've seen in my life. You can see the map in the newsletter version of this episode. It's hilarious.
It missed out every single major ballpark on the East Coast, including Yankee Stadium and Fenway Park, which are two of the most well-known stadiums in sports, and added a bunch of random ones. And, like, one in the middle of the Gulf of Mexico. What team is that, Sammy? The Deepwater Horizon Devils? Is there a baseball team in North Dakota, Clammy Sammy? Sammy. I also should be clear: this was a pre-prepared example. This is the best they had. I want to see the cutting room footage on this, because you best bet that that map looked like straight dog shit. As with every large language model product, and yes, that's what this is, even if OpenAI won't talk about what model it uses, results are extremely variable. Agents are difficult because tasks are difficult, even if they can be completed by a human being that the CEO thinks is stupid. What OpenAI appears to be doing is using a virtual machine to run scripts that its models trigger, regardless of how well it works. And it works very, very, very, very poorly and inconsistently. It's also very likely expensive to run.

In any case, every single company you see using the word agent is trying to mislead you. They're lying. Glean's AI agents amount to chatbots with if-this-then-that functions that trigger events using APIs, which means if an event happens, another thing will be triggered, not taking actual actions, because that is not what LLMs can do. ServiceNow's AI agents that allegedly "act autonomously and proactively on your behalf" are, despite claiming they go beyond better chatbots, still ultimately better chatbots that use APIs to trigger different events using if-this-then-that functions. Sometimes these chatbots can also answer questions that people might have, or trigger an event somewhere. Oh right, that's literally the same thing.
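To show what "chatbots with if-this-then-that functions that trigger events using APIs" looks like under the hood, here's a minimal Python sketch. The intents, endpoints and routing are all hypothetical, made up for illustration, not anyone's real product code. The point is that the "agent" is just an intent label plus ordinary conditionals that fire an API call; nothing in it is autonomous.

```python
def classify_intent(message):
    """Stand-in for an LLM call that labels the user's request with an intent."""
    text = message.lower()
    if "refund" in text:
        return "refund_request"
    if "reset" in text and "password" in text:
        return "password_reset"
    return "unknown"

def handle(message):
    """If this intent, then that API trigger. That's the whole 'agent'."""
    intent = classify_intent(message)
    if intent == "refund_request":
        return "triggered: POST /api/refunds"          # fire a refund API call
    elif intent == "password_reset":
        return "triggered: POST /api/password-reset"   # fire a reset API call
    else:
        return "escalated to a human"                  # no action it can take

print(handle("I want a refund for my order"))   # -> triggered: POST /api/refunds
print(handle("Please reset my password"))       # -> triggered: POST /api/password-reset
print(handle("Why is the sky blue?"))           # -> escalated to a human
```

Swap the keyword matching for a model call and the return strings for real HTTP requests and you have, structurally, the product being sold: a classifier in front of a lookup table of pre-wired actions.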
The closest we have to an agent is any kind of coding agent, in the sense that they can make a list of things that you might do on a software project, and go and generate code and push stuff to GitHub when you ask them to. And they can do so "autonomously" in the sense that you can just let them do what a model that doesn't know anything and has no consciousness thinks is right, based on its corpus of data and the things you give it access to. And it's about as safe as that sounds. When I say "ask them to" and "go and," I mean that these agents are not intelligent at all. They do not have intelligence, and when let run rampant, they fuck up everything and create a bunch of extra work. Also, a study found that AI coding tools made engineers 19% slower. Nevertheless, none of these products are autonomous agents. Anybody using the term agent likely means chatbot. And all of this is working because the media keeps repeating everything these companies say. It's a disgrace. We need to stop this.

I realize we've taken kind of a scenic route here, but I needed to lay the groundwork, because I really am alarmed. According to a UBS report from June 26, the public companies running AI services are making absolutely pathetic amounts of money from AI. Microsoft, according to UBS, is making annual revenues of, somehow, less than the Information reported, at $2.1 billion. ServiceNow is making less than $250 million, Adobe less than $125 million, Salesforce less than $100 million. Now, ServiceNow said $250 million ACV, annual contract value. This may be one of the more honest explanations of revenue I've seen, putting them in the upper echelons of AI revenue. Unless, of course, you think about it for a couple seconds and ask: are these all AI-specific contracts? Or perhaps they're contracts where you've taped AI onto the side? Who gives a shit. These are also year-long agreements that could churn, and according to Gartner, over 40% of agentic AI products will be canceled by the end of 2027.
And really, you've gotta laugh at Adobe and Salesforce, both of whom talk such a goddamn fuck ton about generative AI, and yet have only made an amazing $125 million or less each in annualized revenue from it. Pathetic. Crap. Dog shit. These aren't futuristic numbers, they're barely product categories, and none of this seems to include costs. Oh well, good grief.

Look, a lot of what I've been saying is reminiscent of previous podcasts, and I've gone over this a lot because I really want to make it clear that the signs are very troubling, and that the things I've warned you about for the past couple of years are only getting worse. And the cliff's coming up. It's only getting closer. When we tumble off of it, things may get really, really bad. And in the next episode, we'll talk about how, and what that tumble might look like, and the noises I'm going to make when it happens.

Thank you for listening to Better Offline. The editor and composer of the Better Offline theme song is Matt Osowski. You can check out more of his music and audio projects at mattosowski.com, that's M-A-T-T-O-S-O-W-S-K-I dot com. You can email me at ez@betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter. I also really recommend you go to chat.wheresyoured.at to visit the Discord, and go to r/betteroffline to check out our Reddit. Thank you so much for listening. Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website coolzonemedia.com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.