Ed Zitron (2:08)
Hello and welcome to Better Offline. I'm your host, Ed Zitron. Please buy our merchandise, I need money. Buy Better Offline merch now. In the last episode, we talked about an unfortunate affliction affecting people in tech, where they're prone to spouting these inane, meaningless platitudes, and then the other affliction affecting people in the tech media, where they nod along and say, huh, makes sense to me. The thing is, if we actually care about tech, it's on us to actually challenge these charlatans. We don't have to be mean or rude or harsh. We just have to say, what does that mean? Or, what exactly do you mean? This is the pro-tech position. It's not about being a hater or a cynic or whatever. It's an essential quality control mechanism that's been sorely lacking, especially in the AI bubble. The problem is that bullshit is a scarily effective mechanism wielded by the shameless and the cynical, and it's managed to bamboozle so many, especially in the media, where the job is, at least in theory, to do the exact opposite of nodding reasonably and saying, oh, sounds good to me, mate. Let's say the media (not the actually good people who I mentioned by name in the last episode, but slavish lickspittles like Kevin Roose and Casey Newton) actually want these companies to build powerful AI and believe they're smart enough to do so. Say that somehow, looking at their decaying finances, the lack of revenue and the remarkable lack of use cases, they still come out of it and say, sure, I think they're going to do this. The problem with bullshit like Nadella's word salad buffet and Altman's whatever-the-fuck-he-does is that it allows people, ostensibly smart, reasonable people, to reach these conclusions without having to answer the silly little question of how. How? How are they going to do it? Why haven't they done it yet? Why, three years in, are we still unable to describe what it is that ChatGPT actually does and why we need it so badly?
Take away how much money AI makes for a second, and indeed how much it loses. Does this product actually inspire anything in you? What is it that's actually magical about this, other than the fact, Casey, that you get to hang around lots of parties? And you too, Kevin, Kevvy Boy, you get to hang around a lot of parties with a bunch of people sniffing their own farts, like that one episode of South Park with the Priuses. And on a business level, what is it I'm meant to be impressed by, exactly? OpenAI has allegedly hit $10 billion in annualized revenue, essentially the biggest month it can find multiplied by 12. But it's not actually that much, really, considering that OpenAI is the most prominent software company in the world, with the biggest brand and with the attention of the entirety of the world's media. OpenAI allegedly has 500 million weekly active users on ChatGPT and, by the last count, only 15.5 million paying subscribers, an absolutely putrid conversion rate. Even before you realize that the actual conversion rate would be based on monthly active users, because that's how any real software company defines its metrics, by the fucking way. Why am I meant to be impressed? Why? Because ChatGPT grew fast? It literally had more PR and more marketing and more attention and more opportunities to sell to more people than any company has ever had in the history of anything. Every single industry has been told to think about AI for three fucking years, and they've been told to do so because of a company called OpenAI. There isn't a single goddamn product since Google or Facebook that has had this level of media pressure, and both of those companies launched without the massive amount of media and social media that we have today. Having literally everybody talking about your product all the time for years is pretty useful. Why isn't this company making more money? And why are we taking any of these people seriously?
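To make that putrid conversion rate concrete, here's the arithmetic in a few lines of Python, using the reported figures above. This is just a sketch of the division, not anyone's official metric, and the inputs are the alleged numbers, not audited ones.

```python
# Back-of-envelope check on OpenAI's conversion rate, using the figures
# quoted above: 500 million weekly active users, 15.5 million paying
# subscribers. Both are reported/alleged numbers, not audited ones.

weekly_active_users = 500_000_000
paying_subscribers = 15_500_000

conversion_rate = paying_subscribers / weekly_active_users
print(f"Conversion rate: {conversion_rate:.1%}")  # roughly 3.1%
```

And note that this is the generous version: computed against monthly active users, which is how real software companies report, the denominator would be even larger and the rate even worse.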
Mark Zuckerberg paid $14.3 billion to buy 49% of, but really to acquire, Scale AI, an AI data company, as a means of hiring its CEO Alexandr Wang to run his superintelligence team, and has been offering random OpenAI employees $100 million to join Meta. And they also, by the way, thought about buying both AI search company Perplexity and generative AI video company Runway. And they even tried to buy OpenAI co-founder Ilya Sutskever's pre-product, $32 billion valuation company Safe Superintelligence, settling instead for hiring its CEO and the people from his venture fund. I just want to be clear: superintelligence refers to a fictional concept. These people, they may as well be saying they're going to hunt and kill the fucking Tooth Fairy. I feel like I'm going insane. Hundreds of millions, billions of dollars put into an idea that's fictional. It's like they're trying to make the Ninja Turtles happen. Are they going to capture Santa Claus? Are they finally going to kill Slenderman? Are they going to find the Mario Brothers? Are we going to stop the Koopa, Sam Altman? Is Mark Zuckerberg going to kill, who was the bad guy from Sacrifice, anyway? We'll get back to that later. When you put aside the big numbers, these are the actions of a desperate dimwit with a failing product trying to buy his way to turning generative AI into superintelligence. And by the way, just want to make it clear, Meta's chief AI scientist, Yann LeCun, says it isn't going to work. Generative AI isn't going to make superintelligence. Marky Mark, are you going to listen to the people you fucking hired, who just jack off and give people too much money? I'll take $100 million, mate, while you're out there. But by assuming that there's some sort of grand strategy behind these moves beyond, if we get enough smart people together, something will happen, the media helps boost the powerful's messaging and buoy their stock valuations. You are not educating anybody by humoring these goofballs.
In fact, the right way to approach this would be to ask Meta a very simple question. Why? Why does a multi-trillion-dollar market cap company with a near monopoly over all social media spend billions of dollars in what appears to be a totally irresponsible way? No, no, no. No need to do that. No need to think these big thoughts that might make people uncomfortable. No, no, no. We just need like 10 or 15 different articles suggesting that Mark Zuckerberg is a genius and we're watching him be a genius. Anyway, putting that aside, what exactly is the impressive part of generative AI again? I'm coming back to this. The fucking code. Enough about the code. I'm tired of hearing about the code. I swear to God, you people think that being a software engineer is only fucking coding, and that it's fine if you ship mediocre code, as if bad code can't bring down entire organizations. What is it you think a software engineer does? Is all they do code? If you think the answer is yes, you are wrong. Human beings may make mistakes in writing code, but at least they know what a mistake looks like, which a generative AI does not, because a generative AI doesn't know what anything is, or anything at all, because it is a probabilistic model. Congratulations. You've made another way in which software engineers can automate parts of their jobs. Stop being so fucking excited about the idea that people are going to lose their livelihoods. It's nasty, and founded on absolutely nothing other than your adulation for the powerful. These models are dangerous and chaotic, built with little intention or regard for the future, just like the rest of big tech's products. ChatGPT would have been a much smaller deal if Google had any interest in turning Google Search into a product that truly answered a query, as opposed to generating more of them to show more impressions to advertisers.
Imagine a nuanced search engine that could look at a user's query and spit out a series of websites that might help answer a question, rather than just summarizing a few of them for an answer, or just giving you a series of SEO articles. And if you ever need proof that Google just doesn't know how to fucking innovate anymore? Really, look at those AI summaries. It's a product that both misunderstands search and why people use ChatGPT as a search replacement in the first place. While OpenAI may summarize stuff to give an answer, it at least gives something approximating an answer, rather than a summary that feels like an absentee parent trying to get rid of you by throwing 20 bucks at you in the hopes you'll leave them alone. And even when it does answer shit, it does so in this very peculiar way and gets very obvious things wrong. I looked up the pricing for Claude Code from Anthropic, and it was like, yes, $60. There's no $60 plan that I could find on Anthropic's site. Maybe one of the team plans. Anyway, Google Search makes them like $100 billion a year. It's fucking insane. If Google Search truly evolved, ChatGPT wouldn't really matter, because the idea of a machine that can theoretically answer a question is kind of why people used Google in the fucking first place. Why doesn't the state of Google dominate tech news, just like how random ketamine-fueled tweets from Elon Musk do? Why aren't we collectively repulsed by Google as a company? And why aren't we collectively repulsed by OpenAI? No matter how big ChatGPT is, the fact that there's a product out there with hundreds of millions of users that constantly gets answers wrong is genuinely worrying for society. And that's before you get to the environmental damage, the fact it's trained its models on hundreds of millions of people's art and writing, and now, I don't know, the fact that it loses money overall.
Loses, like, billions of dollars, probably more like $12 billion a year. It's planning to lose over $100 billion a year before becoming profitable, and they can't even explain how it becomes profitable. I'm trying to calm down, all right? I'm trying. I don't come in here to get pissed off. It's just, when I think about it too much, I start hearing the music from Kill Bill. Anyway. Why are we not more horrified? Why are we not more forlorn that this is where hundreds of billions of dollars have gone, that the most prominent company in the tech industry is an unstable monolith with a vague product that can only make $10 billion a year in revenue, not profit, as the very fabric of its existence is shoved down the throat of every executive in the world at once? Also, by the way, if it's not fed $20 to 40 billion a year, it will die. Give me a fucking break. I don't know, I sound pretty ornery. I get accused of being a hater, or of missing the grand mystery of this bullshit, every few minutes by somebody with an AI avatar of a guy who looks like he's been banned from multiple branches of something. I understand there are things that people do with large language models, I am aware, but none of it matters, because the way they're being discussed is like we're two steps away from digitally replacing hundreds of millions of people's jobs. The reality is far simpler. We have an industry that has spent nearly half a trillion dollars between its capital expenditures and venture capital funding to create an industry with the combined revenue of the fucking smartwatch industry. What I'm talking about isn't inflammatory. In fact, it's far more deeply rooted in reality than those claiming that OpenAI is building the... whatever it is, or Kevin Roose walking up on stage dressed like a fucking ringmaster. Anyone who was at the Hard Fork Live show, email me. Please email me and tell me what that was about. And if you don't know what I'm talking about:
Kevin Roose dressed like a fucking circus ringmaster at his podcast's live show. Shameful, man, shameful. Take a shower, take a walk, go outside, mate. But look, like I said, what I'm saying sounds inflammatory, but it's not. If we add up the combined capital expenditures and projected AI revenues of the big four hyperscalers, we end up with roughly $327 billion in capital expenditures and only $18 billion in revenue. And that's not profit, by the way, I really do mean it. That's less than the projected revenue of the global smartwatch industry. But then someone smashes through my door and they go, what about OpenAI? What about OpenAI? I've talked about this so much. So what? OpenAI makes $12.7 billion and loses like $10 to 14 billion. What does that mean to you, exactly? What are you going to say, the cost of inference is coming down? No, no. If you are someone who is saying the cost of inference is coming down, I need you to stop. You are wrong. You are wrong. You are wrong. I hate hearing this, because you are so wrong. No, the cost that people are being charged is going down. We have no firm data on the actual costs of inference, because the companies don't want to talk about it. And yes, they will absolutely lower prices to compete with other companies. The Information just reported a couple of weeks back that OpenAI was doing this to compete with Microsoft, and fucking OpenAI reduced the price of its o3 reasoning model, one focused on code, by 80% to compete with Claude 4 Opus Max, just like any of you could fucking look up. I know I'm getting mad at people who probably don't listen to the podcast, but one day they will. I'll make them. Anyway, even if we add OpenAI's revenue to the pot, we're at about $30.7 billion. Then add the supposed $1 billion in revenue from training data startup Surge, the $300 million in annualized revenue from Turing, and optimistically assume that Perplexity will hit $100 million in ARR, up from $34 million in 2024.
But they lost $64 million and only say they'll make $100 million in 2025. And assume that Anysphere, which makes Cursor, keeps its $500 million run rate consistent through 2025, even though it just had to completely change its pricing because OpenAI and Anthropic squeezed it on prices. We are at, carry the two, about $32.7 billion. Hmm, that's not good. But I'm not being fair, am I? I didn't include many of the names from The Information's generative AI database. I'm stubborn, so I made a point of adding them all up and ended up with a total of less than $39 billion of revenue in the entire generative AI industry. Jesus fucking Christ. Fuck. God damn it. This, this is what we've been doing for three years. If you're a Mr. Plinkett fan, this is my Kodak printer moment. Why did you do that? According to The Information, generative AI companies raised more than $18.8 billion in the first quarter of 2025, after VCs invested $21 billion in Q4 2024 and $4 billion in Q3 2024, for a grand total of $43.8 billion, or, adding in the hyperscalers' capital expenditures, a total of $370.8 billion of investment and capital expenditures for an industry that, despite being the single most talked about thing on the planet, cannot even create a tenth of the dollars it requires to make it function. These companies are predominantly unprofitable, perpetually searching for product-market fit, and even when they find it, seem incapable of generating revenue numbers that remotely justify their valuations. And if I'm honest, I think the truly radical position here is the one taken by most tech reporters, who would rather take the lazy position of, well, Uber lost a lot of money, than think for two seconds about whether we're all being sold a line of shit. What we're watching is a mountain of waste perpetuated by the least charming failsons of our generation. Nobody should be giving Satya Nadella or Sam Altman a glossy profile.
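Side note: none of this math is hard, which is sort of the point. Here's the tally above as a quick Python sketch, using the reported and estimated figures named in this episode; the revenue sum lands within rounding of the roughly $32.7 billion quoted, and the spend total matches the $370.8 billion.

```python
# Rough tally of the numbers walked through above, in billions of dollars.
# Every figure is a reported or estimated one from the episode, not my own.

hyperscaler_capex = 327.0       # big four hyperscalers' capital expenditures
hyperscaler_ai_revenue = 18.0   # their projected AI revenue

openai_revenue = 12.7
surge_revenue = 1.0             # training data startup, supposed figure
turing_revenue = 0.3            # annualized
perplexity_revenue = 0.1        # optimistic 2025 assumption
anysphere_revenue = 0.5         # Cursor's run rate, assumed to hold

revenue_total = (hyperscaler_ai_revenue + openai_revenue + surge_revenue
                 + turing_revenue + perplexity_revenue + anysphere_revenue)

vc_funding = 18.8 + 21.0 + 4.0  # Q1 2025, Q4 2024, Q3 2024
total_spend = hyperscaler_capex + vc_funding

print(f"Revenue: ~${revenue_total:.1f}B")  # ~$32.6B, within rounding of the quote
print(f"Spend:    ${total_spend:.1f}B")    # $370.8B
```

That's the whole trick: one column of spend, one column of revenue, and a ratio that no amount of hype burps can fix.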
They should be asking direct, brutal questions, much like Joanna Stern just did of Craig Federighi, who had absolutely fucking nothing to share about why Apple Intelligence sucked, because he's never been pushed like that. Put aside the money for a second. To be honest, these men are pathetic, unimpressive, uninventive and dreadfully, dreadfully boring. Anthropic's Wario Amodei and OpenAI's clammy Sam Altman have far more in common with televangelist Joel Osteen than with Steve Jobs or any number of people that have actually invented things. And I know about Steve Jobs. And they got that way because we took them seriously, instead of saying, wait, what? What do you mean? What does that mean? to a single one of their wrong-headed, officious and dim-witted hype burps. It's boring. I'm terribly, horribly bored. And if you're interested in this shit, I'm genuinely curious why, especially if you're a reporter. Because right now the innovation happening in AI is, at best, further mutations of the software-as-a-service business model, providing far less value than previous innovations at a calamitous cost. Reasoning models don't even reason, as shown by an Apple paper released a few weeks ago. And agents as a concept are fucked, because large language models are inherently unreliable. And yes, a study out of fucking Salesforce found that agents began to break down when given multi-step tasks, such as any task that you'd want to have an agent automate. But, but, but, but, but. I have one radical suggestion. Let's start making fun of them. Let's start making fun of these people. They're not charming, they're not building anything. They've scooted along amassing billions of dollars, promising the world and delivering you a hill of shit. They deserve our derision, or at the very least our deep, unerring suspicion, if not for what they've done, then for what they've not done.
Sam Altman is nowhere near delivering a functioning agent, let alone anything approaching intelligence, and really only has one skill: making other companies risk a bunch of money on his stupid fucking ideas. No, really. He convinced Oracle to buy $40 billion of Nvidia chips to put in the Abilene, Texas Stargate data center, despite the fact that the Stargate organization is yet to be formed, as reported by The Information. SoftBank and Microsoft pay for all of OpenAI's bills, and the media does its marketing for him. OpenAI is, as I have said before, a banana republic. It requires the media and the markets to make up why it should exist. It requires other companies to pump it full of money and build its infrastructure, and it doesn't even make products that matter, while Sam Altman constantly talks about all the other exciting shit that people will build that never seems to get built. You can keep honking off about how it will build the API that will power the future, but if that's the case, where's the fucking future? Exactly? Where is it? What am I looking at here? Where's the economic activity? Where's the productivity? The returns suck. The costs are too high. Why am I the radical person saying this? This entire situation is goddamn ridiculous, an incomparable waste. Even for it to somehow go into the green, for the horrendous amounts of capital in generative AI to make sense, the industry would have to have more revenue than the smartphone and enterprise SaaS markets combined, rather than less than half of the mobile gaming industry. Satya Nadella, Sam Altman, Wario Amodei, Tim Cook, Andy Jassy. They deserve to be laughed at, mocked, or at least very heavily interrogated. Their combined might has produced no exciting or interesting products outside of, at best, what will amount to a productivity upgrade for integrated development environments and faster ways to throw out code that may or may not be reliable.
These things aren't nothing, but they're nowhere near the something that we've been promised. So I put it to you, dear listener: why are we taking them seriously? What is there to take seriously, other than their ability to force stuff on people and make money doing so? And I want to ask you a question. How? How do they manage to keep doing this? They always seem to find new growth every single quarter. Every single quarter, without fail. Is it because they keep coming up with ideas, or is it because they keep coming up with new ways to get more money, a vastly different choice that involves increasing the prices of products or making them worse so they can show you more ads? My positions aren't radical, and if you believe they are, your deference to power disgusts me. In any case, I want to end this episode with something a little more inspirational, because I believe things can change when regular people feel stronger and more capable. I want you to know that you are fully capable of understanding all of this. I don't care if you think you're not a numbers person or you don't really get business. I don't have a single iota of economics training, and everything you've ever heard me say or read me write has been something I've had to learn. And I really mean that. I was a layperson right up until the time I learned the stuff. Then I became a stuff-knower, just like you can be. The tech industry, the finance industry, the entire mechanisms of capital want you to believe that everything they do is magical and complex, when it's all far more obvious than you believe. You don't have to understand the entire fundamentals of finance to know how venture capital works: they buy percentages of companies at a valuation that they hope is much lower than the company will be worth in the future, when they sell or go public.
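If it helps, here's that venture math as a toy Python sketch. Every dollar figure below is invented purely for illustration; the mechanic, buy a percentage at one valuation and hope to sell it at a much higher one, is the whole model.

```python
# Toy illustration of the venture capital mechanic described above.
# All numbers are made up for the example.

investment = 10_000_000           # what the VC puts in
post_money_valuation = 50_000_000  # what the whole company is valued at

ownership = investment / post_money_valuation  # the percentage they bought

exit_valuation = 500_000_000      # the hoped-for IPO or acquisition price
vc_proceeds = ownership * exit_valuation

print(f"Ownership: {ownership:.0%}")      # 20%
print(f"Proceeds: ${vc_proceeds:,.0f}")   # $100,000,000, a 10x return
```

That's it. That's the magic. Three lines of division and multiplication, plus a hope that the exit number is big.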
You don't need to be technical to know that large language models generate a response based on billions of pieces of training data, by guessing at what the next bit of text in a line should be, based on what the model has seen previously. These people love to say things like, ah, but didn't you see, and present some anecdote, when no anecdote will ever defeat the basics: your business does not make enough money, the software does not do the things you claim it's meant to, and you have no path to profitability. They can yammer at you all they want about lots of people using ChatGPT, but that doesn't change the fact that ChatGPT just isn't that revolutionary, and their only play here is to make you feel stupid rather than actually showing you why it's so fucking revolutionary. This is the argument of a manipulator and a coward, and you're above such things. You don't really have to be a specialist in anything to pry this shit apart, which is why so much of my work is either engaging to those who learn something from it, or frustrating to those that intentionally deceive others with gobbledygook hype-spiel bullshit. I will sit here and explain every fucking part of this horrid chain of freaks, and I'll break it down into whatever pieces it takes to educate as many people as I have to to make things change. I need to be clear about something. I'm nobody. I started writing my newsletter with 300 subscribers and no other reason than the fact that I wanted to and I was depressed. And guess what? Four years later I have nearly 65,000 subscribers, an award-winning podcast, and people actually pay me for shit. I have no economics training, no special access, no deep sources, just the ability to look at things that are happening. And then I say stuff. I taught myself everything I know about this industry, and there is nothing stopping you from doing the same. I was convinced I was stupid until about two years ago.
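Quick aside before I carry on: that next-bit-of-text guessing I mentioned is simple enough to sketch in a few lines. The phrase and the probabilities below are invented by hand just to show the mechanic; a real model learns its probabilities from billions of training examples rather than a hand-written table, but the point stands: it doesn't know anything, it samples the statistically likely continuation.

```python
import random

# A hand-made table standing in for what a real LLM learns from training
# data: for a given context, how likely each next token is. These numbers
# are made up purely for illustration.
next_token_probs = {
    "The cat sat on the": {"mat": 0.6, "floor": 0.3, "moon": 0.1},
}

def guess_next(context: str, rng: random.Random) -> str:
    """Pick a continuation weighted by its (made-up) probability."""
    options = next_token_probs[context]
    tokens = list(options)
    weights = [options[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
print(guess_next("The cat sat on the", rng))
```

Run it a few thousand times and you get "mat" about 60% of the time, "moon" about 10%. There's no understanding of cats or mats in there, just weighted dice.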
If I'm honest, it might have been last year. I felt othered the majority of my life, convinced by people that I'm incapable or unwelcome, and as such I've become more articulate and confident in who I am and what I believe in. And I've noticed that the only people that seek to degrade or suppress are those of weak minds and weaker wills, business idiots in different forms and flavors. I've learned to accept who I am, that I'm not like most people, and people conflate my passion and vigor with anger or hate, when what they're experiencing is somebody different, who deeply resents what the powerful have done to the computer. And while I complain about the state of the media, what I've seen in the last year is that there are many, many people like me, readers, listeners and peers, that resent things in the same way. I conflated being different with being alone, and I couldn't have been more wrong. For those of you that don't wish to lick the boots of the people fucking up every tech product, the tent is large, it's a big club, and you're absolutely in it. A better tech industry is one where the people writing about it hold it accountable, pushing it towards creating the experiences and connectivity that truly change the world, rather than repeating and reinforcing the status quo. Don't watch the mouth, watch the hands. These companies will tell you that they're amazing as many times as they want, but saying it doesn't make it true. I don't care if you tell a single human soul about my work, but if it helps you understand these people better, use it to teach other people. Now, these tech executives may seem all-powerful, but they've built the rot economy on a combination of anonymity and a compliant press. But pressure against them starts with you and those you know understanding how these businesses work, and trusting that you can understand, because you absolutely fucking can. Millions of people
understanding how these people run their companies, and how poorly they built their software, will stop people like Sundar Pichai from being able to quietly burn Google Search to the ground. People like Sam Altman are gambling that you're easily confused, easily defeated and incurious, when you could be writing thousands of words on a newsletter, or speaking for hours on a podcast that you never, ever really edit for, like, brevity. Or perhaps you, side note, you ever hear Killer Be Killed? Great metal band. You should give them a listen. Anyway, I know it sounds small, and like your role is even smaller than that, but the reason they've grown so rapaciously is the sense that the work they do is some sort of black magic. I mean, it's actually really fucking stupid, boring finance stapled onto a tech industry that's run out of ideas. You are more than capable of understanding this entire world, including the technology, along with the finances that ultimately decide what technology gets made next. These people have got rich and famous and escaped all blame by casting themselves as somehow above us. But if I'm honest, I've never looked down on somebody quite as much as I do the current gaggle of management consultant fucks that have driven Silicon Valley into the ground. You're actually smarter than them. You can learn all of this, and I'm here to help you every fucking week. Thank you for your time. Thank you for listening to Better Offline. The editor and composer of the Better Offline theme song is Matt Osowski. You can check out more of his music and audio projects at mattosowski.com, that's M-A-T-T-O-S-O-W-S-K-I dot com. You can email me at ez@betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter. I also really recommend you go to chat.wheresyoured.at to visit the Discord, and go to r/BetterOffline to check out our Reddit. Thank you so much for listening. Better Offline is a production of Cool Zone Media.
For more from Cool Zone Media, visit our website, coolzonemedia.com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.