Ed Zitron (21:13)
Let's talk about Perplexity. And my general view on Perplexity is: who gives a shit? Who cares? Perplexity, a company valued at $9 billion toward the end of 2024, has 8 million people a month using its app, with the Financial Times reporting that they have a grand total of 15 million monthly active users. For an unprofitable search engine. Perplexity, like every generative AI company, only ever loses money on its product. Generative AI-powered search is so commoditized that it's actually remarkable they still exist. I mean, they're bigger than Anthropic. That's crazy. Other than the slick design, there's little to be excited about here. And 8 million monthly active users is pathetic. It's embarrassing, deeply embarrassing, for a company with the majority of its users on mobile. Aravind Srinivas is a desperate man with questionable intentions who made a half-hearted attempt to merge with TikTok in January. Really funny, by the way. It's like, hey, I have a really shitty company that loses a bunch of money, can I merge with your beloved app? For some reason. Like, you need to do this. Also, their product rips off journalists, by the way. They had a whole thing with Forbes, where they were just ripping fucking content. Did it with Business Insider too. It's disgusting. But any investor in Perplexity needs to ask themselves: what is it I'm investing in? An unprofitable search engine. An unprofitable large language model company. A company with such poor adoption of its product that it's prepared to become the shell corporation for TikTok. Personally, I'd be concerned about the bullshit numbers they keep making up. The Information reported that Perplexity said they'd make $127 million in 2025 and $656 million in 2026. How much money did it make in 2024? Just over $56 million. Is it profitable? Fuck no. Perplexity's product is commoditized, and they make less than a quarter of the revenue of the baseball team the Oakland Athletics. In 2024, at least. 
Though I should add that Perplexity's app is marginally more popular. It really is time to stop humoring these companies, though. It's time to stop writing about them like they're gifted children. They are horrible. They are abominations of startups. They are abominations of capitalism, which is already fairly abominable. I'm really just disgusted reading these numbers. Joker-ified me a hundred times. I didn't even need to put on the Joker makeup. It just appeared on my skin. Naturally. I'm currently high-kicking around this sound cube I record everything in. But really, all of this is far more apocalyptic for the hyperscalers. The Wall Street Journal reports that Microsoft intends to spend $93.7 billion in capital expenditures in 2025, or roughly $8,518 per monthly active user of the Copilot app in January 2025. Google is planning to spend $75 billion on capital expenditures in 2025, or roughly $4,167 per monthly active user of the Gemini app in January 2025. Sundar Pichai wants Gemini to be used by 500 million people before the end of 2025, a number so unrealistic that someone at Google should be fired. And that someone is Sundar Pichai. The fact of the matter is that if Google and Microsoft can't make generative AI apps work, if they can't get meaningful consumer penetration, this entire industry is screwed. There really are no optimistic ways to look at these numbers. And yes, I'm repeating myself. Microsoft Copilot had 11 million monthly active users on the Copilot app and 15.6 million unique monthly visitors to copilot.microsoft.com. Google Gemini had 18 million monthly active users on the Gemini app and 47.3 million unique monthly visitors to its website. These are utterly pathetic considering Microsoft and Google's scale, especially given the latter's complete dominance over web search in general, and its ability to funnel millions, perhaps billions, of customers to Gemini. 
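For the curious, the capex-per-user figures above are simple back-of-envelope division. A quick sketch in Python, using only the numbers quoted in this episode:

```python
# Back-of-envelope capex per monthly active user, using the figures quoted above.
ms_capex_2025 = 93.7e9   # Microsoft's planned 2025 capital expenditures (USD)
copilot_mau = 11e6       # Copilot app monthly active users, January 2025

goog_capex_2025 = 75e9   # Google's planned 2025 capital expenditures (USD)
gemini_mau = 18e6        # Gemini app monthly active users, January 2025

print(f"Microsoft: ${ms_capex_2025 / copilot_mau:,.0f} per Copilot MAU")  # → $8,518
print(f"Google:    ${goog_capex_2025 / gemini_mau:,.0f} per Gemini MAU")  # → $4,167
```

To be clear, planned capex is not spent solely on consumer apps, so this is a rhetorical ratio, not an accounting one, but it shows the scale mismatch.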
Google is the first page people see when they open a web browser. Google should be owning this by now. Look, 47.3 million unique monthly visitors is a lot of people. But considering that Google spent $52.54 billion in capital expenditures in 2024, it's hard to see where the return is, or even where the return could be. Google, like most companies, does not break out revenue from AI. Just to be clear, if they were doing well, they would. Though they do love to say stuff like "a strong quarter was driven by our leadership in AI and momentum across the business." Which means nothing, by the way. That shit is made for journalists to read and go, oh, that means they're making money in AI. When a company's making money in something, they'll tell you directly. And as a result of its unwillingness to share hard numbers, all we have to look at are numbers like those I've received from SimilarWeb and Sensor Tower. And it's fair to suggest that Gemini and its associated products have been a complete flop. Worse still, Google spent $127.54 billion in capital expenditures in 2023 and 2024 combined, with an estimated $75 billion, like I said, for 2025. What the fuck is going on? Yes, Google is likely making revenue from people running generative AI models on Google Cloud. And yes, they're likely making money from forcing AI onto Google Workspace customers by raising the prices and saying you get this for "free." But Google, like every single other generative AI company, is losing money on every single generative AI prompt. And based on these monthly active user numbers, nobody really cares about Gemini at all. Actually, I take that back. Some people care about Gemini. Not that many, but some. And it's far more fair to say that nobody cares about Microsoft Copilot, despite Microsoft shoving it into every corner of our lives. 
11 million monthly active users for its unprofitable, heavily commoditized large language model app is a joke, as are the 15.6 million unique monthly visitors to its web presence. Probably because it does exactly the same shit that every other LLM does, and everyone knows it's powered by ChatGPT. It's just... it's remarkable. Microsoft's Copilot app isn't just unpopular, it's irrelevant. For comparison, Microsoft Teams has, according to a post from Microsoft from the end of 2023, over 320 million monthly active users. That's more than 10 times the monthly active users of the Copilot app in January 2025 and the Copilot website combined. And unlike Copilot, Teams makes Microsoft money. Now, I obviously don't have the numbers on people who accidentally click the Copilot button in Microsoft Office or Bing.com, but I do know that Microsoft isn't making much money on AI at all. Microsoft reported in its last earnings that it was making $13 billion of annualized revenue on its artificial intelligence products, a projected number based on current contracts rather than booked money. Now, I've made this point again and again and again, and I'm going to keep making it: revenue is not the same thing as profit, and Microsoft does not have an artificial intelligence segment in its earnings breakdowns. These numbers are cherry-picked from across the entire suite of Microsoft products, such as selling Copilot add-ons to their Microsoft 365 Enterprise suite. And by the way, The Information reported in September 2024 that Microsoft had only sold Copilot to around 1% of its 365 customers. They also make money selling access to OpenAI's models on Azure, roughly a billion dollars in revenue, and from people running their own models on Azure, Microsoft's cloud compute platform. 
For context, by the way, Microsoft made $69.63 billion in revenue in its last quarter. $13 billion of annual revenue, not profit, is about $3.25 billion in quarterly revenue, off of upwards of $200 billion of capital expenditures since 2023. The fact that neither Gemini nor Copilot has any meaningful consumer penetration isn't just a joke. It should be sending alarm bells through Wall Street. While Microsoft and Google may make money outside of consumer software, both companies have desperately tried to cram Copilot and Gemini down consumers' throats. And they have categorically, unquestionably failed. All while burning billions of dollars to do so. But Ed, Ed, what about GitHub Copilot? All right, let's talk about GitHub Copilot, shall we? According to a report from the Wall Street Journal from October 2023, Microsoft was losing an average of more than $20 a month per user on the paid version of GitHub Copilot, with some users costing it more than $80 a month. Jesus Christ. Microsoft said a year later that GitHub Copilot had 1.8 million paid subscribers. Which is pretty good, except, like all generative AI products, it loses money. Like I just bloody said. I must repeat that Microsoft will have spent over $200 billion in capital expenditures by the end of 2025. In return, Microsoft got 1.8 million paying customers for a product that, like everything else I'm talking about, is heavily commoditized. Basically every LLM can generate code, some better than others, by which I mean they all introduce security issues into your code. But nevertheless. And somehow Microsoft loses money even when users pay for it. Am I getting through to you yet? Is this working? If you work for a hedge fund, an investment bank, or anyone like that, please get in touch. I will protect your identity. Is anyone around you freaking out? Because they should be. They should be. Man, I'm freaking out a little, and I just keep all my money in a big box under my bed. I don't have a bank. 
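The annualized-versus-quarterly arithmetic above is worth seeing written out. A small sketch in Python, again using only the figures quoted in the episode:

```python
# $13B of "annualized" AI revenue works out to roughly $3.25B per quarter,
# set against Microsoft's last quarterly revenue as quoted in the text.
ai_annual_run_rate = 13e9
ai_quarterly = ai_annual_run_rate / 4            # ≈ $3.25 billion per quarter

total_quarterly_revenue = 69.63e9                # Microsoft's last-quarter revenue
share = ai_quarterly / total_quarterly_revenue   # AI's slice of quarterly revenue

print(f"AI quarterly revenue: ${ai_quarterly / 1e9:.2f}B")      # → $3.25B
print(f"Share of Microsoft's quarterly revenue: {share:.1%}")   # → 4.7%
```

And remember: that's under 5% of quarterly revenue, not profit, against upwards of $200 billion in capital expenditures since 2023.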
No, I do. Anyway, not going to do that joke. So one of the arguments people make is that AI is everywhere. But it's important to remember that the prevalence of AI, you seeing it in different apps, is not proof of its adoption, but of the intent of companies to shove it into everything. And the same goes for businesses "integrating AI" that are really just mandating people dick around with Copilot or ChatGPT. And I'm really not kidding. No, really. KPMG bought 47,000 Microsoft Copilot subscriptions last year, at a significant discount, so they'd be familiar with any AI questions their customers may have. Management consultancy PwC bought 100,000 enterprise subscriptions, becoming OpenAI's largest customer in the process, as well as their first reseller, and has created its own internal generative AI called ChatPwC. The PwC staffers absolutely hate it. It's really cool that when you actually talk to the users, they just fucking hate it. And while you may see AI everywhere, integrations of generative AI are indicative more of the decision-making of the management behind the platforms and the demands of the market than of any consumer demand. Enterprise software is more often than not sold in bulk to managers or C-suite executives tasked less with company operations, messy things like doing stuff or making sure the company runs, and more with seeming to be at the forefront of technology. In practical terms, this means there is a lot of demand to put AI in stuff, and some demand from enterprise software buyers to buy stuff with AI on it, but little evidence that this actually leads to significant user adoption or usage. I'd argue this is because large language models do not really lend themselves to features that would provide meaningful business returns. And I think everyone can agree on that. Like, there are things like summarizing emails, which I'll get to in a second. In fact, let's do it now. 
Look, let's briefly talk about where large language models work, where they are actually good. And some of you are not going to love this, but I know there's one of you who's like, yes, yes, now I will get Ed. I've got him now. I have him in my sights. To be clear, and this is really dealing with the "um, actually" responses, I'm not saying, and really have never meant to say, that large language models have no use cases or no customers. People really do use them. They use them for coding, for searching defined libraries of documents, for generating draft materials, for brainstorming, for summarizing and searching documents. These are useful, but they're not magical. They're cool, but that's about it. And their coolness or usefulness is a tiny little ant compared to the costs, the stealing from millions of people, and the damage to our power grid and our planet. Okay, okay, so you're probably wondering, I brought it up earlier: agents. You've heard about agents. Marc Benioff wanking off about agents. Sam Altman talking about agents. They love talking about agents, right? They love saying agents are the future. When a company uses the term agent, they're intentionally trying to be deceitful, because the term agent means autonomous AI: something that does stuff without you touching it, that goes off and does things for you with one command, that knows what to do. Remember, these models don't know anything. The problem with this definition is that everybody has used it to refer to what is actually a chatbot that can do some things while connected to a database, which I would just call a chatbot, personally. In OpenAI and Anthropic's case, agents refer to a model that controls a computer. This is closer to the truth, other than the fact that their agents are so unreliable as to be disqualifying, and the tasks they succeed at, like searching TripAdvisor, are very simple and did not need automating. 
Next time you hear the word agent, actually look at what the product does, and maybe flick a booger at the person. But Ed, Ed, you just burst through my door, I was having a nice Diet Coke, and you're in my house. What are you doing here? Ed, what about artificial general intelligence? Aren't they going to turn this into artificial general intelligence? No, they're not. Get out of my house. Generative AI is probabilistic, and large language models do not know anything, because they are guessing what the next part of a particular output would be based on the input. Reasoning models might look at that a few times and go, oh, maybe it's not this, maybe it's this. But they are not making decisions. Generative AI does not make decisions. They are probability machines, which in turn makes them only as reliable as probability can be, and about as conscious, no matter how intricate the system may be or how much infrastructure is built, as a pair of dice. We do not understand how human intelligence works, and as a result, it's completely laughable to imagine we'd be able to simulate it. Large language models do not create or resemble intelligence. They're not artificial intelligence. They are, at most, the most powerful parrot in the world, trained to respond to stimulus with what they guess is the correct answer. And they're pretty good at it. They're pretty good, right? It's pretty cool. Except we shouldn't be burning hundreds of billions of dollars to make them slightly better at this. Let me put it in simpler terms. Imagine if you made a machine that threw a bouncy ball down a hallway, and you got really, really good at dialing it in so it threw the ball along a fairly exact trajectory. Would you think the arm was intelligent? Would you think the ball was intelligent? Would you think that the ability to precisely, or more reliably, do something would make it smart? 
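To make the "probability machine" point concrete, here's a toy sketch. The tokens and probabilities below are invented for illustration, not taken from any real model, but the mechanism, weighted sampling of the next token, is the heart of how these systems generate text:

```python
import random

# A toy next-token distribution for the prompt "The capital of France is".
# An LLM doesn't decide the answer; it samples from probabilities like these.
# (These specific numbers are made up for illustration.)
next_token_probs = {
    "Paris": 0.82,
    "Lyon": 0.06,
    "located": 0.05,
    "the": 0.04,
    "banana": 0.03,
}

def sample_next_token(probs, rng):
    # Weighted random choice: the "wrong" tokens have nonzero probability,
    # so some fraction of outputs will simply be wrong.
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
draws = [sample_next_token(next_token_probs, rng) for _ in range(1000)]
print("Share of non-'Paris' answers:", 1 - draws.count("Paris") / len(draws))
```

No reasoning happens anywhere in that loop. Make the distribution sharper and the machine gets more reliable, like dialing in the bouncy-ball arm, but it never becomes a decision.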
The point I'm making about large language models is that they're a cool concept with some interesting things they can do, but they've been used as a cynical marketing vehicle to raise money for OpenAI by lying about what they're capable of doing, starting with calling them artificial intelligence.