Transcript
Ed Zitron (0:00)
This is an iHeart podcast.
Victoria Song (0:04)
So you've always dreamed of that state of the art home theater system. Here's the thing: if you invest well, you could get things like that. With Empower, you can get money working for you so you can go out and live a little. Isn't that why we work so hard, to splurge sometimes, like on a massive high res TV or surround sound that puts you right in the action? So use Empower to help get good at money so you can be a little bad. Join their 19 million customers today at empower.com. Not an Empower client. Paid or sponsored.
Ed Zitron
Cool Zone Media. Hello and welcome to this week's Better Offline monologue. I'm your host, Ed Zitron, and this has been a tough week, I'm not going to lie. Monday we recorded a wonderful episode with Victoria Song and Ashwin Rodrigues which got lost due to a technical fault. And then I recorded this entire monologue, which then got lost to a completely different technical error. Different computer, different room, different place. I love making this show for all of you. It's been kind of a pisser, but it's important we get on top of this goddamn subject. So check the episode notes, buy a challenge coin, read the newsletter. There's a premium version, unrelated to this show. I'd love if you'd subscribe, and if you don't, I won't feel anything. Don't worry about it. But let's get to it. Last week OpenAI launched GPT5, a new flagship model of some sort that's allegedly better at coding and writing, but in reality is much more of the same: another model that is indeterminately better at benchmarks built specifically for large language models, because they can't do actual work. The Wall Street Journal reported late last year that it took multiple half billion dollar training runs to get GPT5 off the ground. And Altman himself said in a podcast with Theo Von, of all people, that GPT5 scared him and made him say "what have we done?" And that's a good bloody question, Sammy.
According to OpenAI, GPT5 is a unified system with a smart, efficient model that answers most questions, a deeper reasoning model for harder problems, and a real time router that quickly decides which model to use based on conversation type, complexity, tool needs and your explicit intent. I read all of that out because I wanted you to hear how convoluted GPT5 is, and how much effort OpenAI has had to put in to create something that, based on all reports, is fine. To quote Simon Willison, it "just does stuff." Wowy zowie. In simpler terms, ChatGPT's version of GPT5 takes a user's prompt and decides which model to use as a result, choosing one of a few sub-models, GPT5 Regular, Mini or Nano, and then spits out an output. And there are rate limits, by the way, so if you use it too much you get kicked down to Mini automatically. If you ask it to think about something, it will choose to engage the reasoning part of the model. These things do not think, by the way, they are probabilistic models. So reasoning is kind of like: you get a prompt, and then it reads the prompt with another model and says, okay, what would the steps be to execute this? It has some returns, but recent papers suggest that it doesn't work that well. Anyway, using this, you mostly have to trust that OpenAI will choose the best model for the job, as opposed to the cheapest one for OpenAI to serve, which is what I think they're actually doing. As part of the launch, OpenAI has also killed access to all other models, or at least is planning to, and truncated user access to two choices, GPT5 or GPT5 Thinking, with legacy models like GPT4O and o4-mini and their associated rate limits gone immediately for most, although some of them are back, and in 60 days for people paying $200 a month. You'll work out why I'm dithering in a second.
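To make the routing setup Ed just described concrete, here's a minimal sketch of what a prompt router like that might look like. To be clear, this is not OpenAI's actual implementation: the model names, the rate limit number, and the heuristics (a "think" keyword, prompt length as a complexity proxy) are all illustrative assumptions.

```python
def route(prompt: str, requests_this_window: int, rate_limit: int = 80) -> str:
    """Pick a sub-model for a prompt, falling back to a cheaper tier at the cap.

    Hypothetical sketch: model names, limit, and heuristics are made up for
    illustration, not taken from OpenAI.
    """
    # Past the rate limit, everything gets kicked down to the cheap model,
    # mirroring the automatic demotion to Mini that Ed mentions.
    if requests_this_window >= rate_limit:
        return "gpt-5-mini"
    # Explicit intent: asking the model to "think" engages the reasoning path.
    if "think" in prompt.lower():
        return "gpt-5-thinking"
    # A crude complexity proxy: longer prompts go to the full model.
    if len(prompt.split()) > 100:
        return "gpt-5"
    return "gpt-5-mini"
```

The point of the sketch is Ed's trust problem: whoever writes these branch conditions, not the user, decides when you get the expensive model and when you get the cheap one.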
This enraged the ChatGPT subreddit, with users claiming that GPT5 was, and I quote, wearing the skin of their dead friend, referring to GPT4O, and another saying that GPT4.5 genuinely talked to them and, as pathetic as it sounds, was their only friend. And I must be clear, we can make fun of these people if we want, but this is actually genuinely sad. There is something going on here where people are so lonely that they want to talk to a chatbot. Mock them if you want, and some of you will, and I don't know if I even want to, but something is happening here and it isn't brilliant. After a few days, Clammy Sam Altman restored access to GPT4O for paid users, and this only managed to stem the tide briefly, with one user saying that their baby was back, that they cried a lot, and that they were crying as they wrote the post, ending by saying "love you," I assume to GPT4O. Here's the problem though: users are now doubting that the 4o that OpenAI has restored is actually the same model. One post claims that 4o has lost its soul, and another says that 4o is lobotomized all the way down. In one thread, one user said that 4o had gotten markedly worse suddenly, and another said it's definitely not the same, though others in the thread claim that it was. Another post said legacy GPT4O is GPT5 in cosplay, even as others pleaded with them and said it was exactly the same, and that people were experiencing some strange phantom lover placebo effect. And I think that's actually what's going on writ large. ChatGPT was never a success based on its actual abilities or outputs or things it could do, but a global marketing campaign perpetuated by a tech and business media asleep at the wheel, or worse still, one that wanted these companies to win and helped them by lying. I have done a comprehensive evaluation of the last three years of press around ChatGPT and GPT itself, and you will look at the things from 2023 and there's shit that's just fucking made up.
There's a whole thing, go and look up TaskRabbit GPT4. There are so many. I'm gonna link one in the fucking notes. There are people that claim that GPT4 ordered a TaskRabbit to complete a captcha. Now on top of this not being a thing that TaskRabbit generally does, this is from the system card of GPT4, and it claims that it hired a TaskRabbit, except when you look, it just said it messaged them. It's very clearly made up, but everyone reported it as agents existing in 2023. Ah, every time I read this stuff I feel a little goddamn insane. But anyway, because these models do not have obvious, replicable ways outside of benchmarks of testing what they can do, each user is effectively in a constant vibe check with the models, and the sycophantic qualities of GPT4O were clearly enough to endear them to the platform. People using GPT4O couldn't tell you why it's different to GPT5 other than it feels less human or doesn't do the same things, even if those things are kind of hard to define. This is what happens when you build a fandom for a product based on specious hype and vague promises, and lies of inference of course, and then allow users to make up the reasons that they care. You begin engaging with the gamer mindset, a vibes based fandom that's completely unbreakable unless you make one subtle change that you could never see coming, one that breaks the illusion, leading to gamer-like distrust and anger. You see, I theorize that the vast majority of ChatGPT users do not know why they used it in the first place. Three years of media pressure to use AI, that AI was the future, their boss saying AI is important and that you would be left behind if you didn't use AI, mean that people come to ChatGPT to work out why they're using it in the first place, which has led to all sorts of bizarre emotional attachments, kind of like one's attachments to a live service game. Shout out to Catharsis23 on Bluesky, who made this observation.
As a result, users were incredibly sensitive to changes like removing or changing a model, because their association with ChatGPT was based on however GPT4O works and sounds. By ripping it out and replacing it with GPT5, users immediately felt jilted and swindled by OpenAI, and much like in a dying live service game, any changes that were made as a result were met with paranoia and confusion. Clammy Sam Altman's attempts to paper over the problem by boosting rate limits on GPT5 Thinking and restoring access to GPT4O and other models for paid users were not enough, because in a very real sense, many of those users could not tell you why they liked 4o to begin with. 4o wasn't good so much as it was an investment of time. By showing that OpenAI is willing to cut things arbitrarily, users can no longer trust that this investment of time is worthy, especially as many complain that at launch GPT5 deleted a bunch of conversations. Now OpenAI sits in an odd spot where their supposedly huge, Manhattan Project level launch has been met with either apathy or agony. While they've placated users in the short term, it's very clear that the vast majority of users dislike GPT5, and power users don't seem particularly impressed with it either. This was meant to be the big launch that changed things for OpenAI forever, but it's turned into something of a mass betrayal, or just kind of a mass letdown. And because it's based on vibes rather than its actual ability to do something, there's very little one can do to fix this problem. It's unclear how all of this affects the company long term, but things do not seem good. Sam Altman has already said that OpenAI is having to reallocate capacity for the next couple of months, prioritizing paying ChatGPT users over API demand, up to the current allocated capacity and commitments that they've made to their customers. Code for those who do not want to pay for priority processing, which is now available for any developer.
It's also unclear what happens next. GPT5 is not the future. OpenAI is running out of capacity, and their product, despite the fanfare, has no capabilities or reasons to adopt it that are really new or interesting. Years of allowing the media to spin out ridiculous narratives about what AI can or could do, using vague pablum that kind of suggests these models are more powerful than they are, has created a PR campaign for a product that does not exist. And the 700 million weekly active users of ChatGPT have clearly arrived there without much guidance, their attachment born of compulsion and societal pressure rather than any real use cases. When you allow people to define an indeterminately powerful tool by any standard they like, with no interest in correcting them, with no interest in guiding them, with no interest in actually showing them what it was that they were paying for, other than that it can generate stuff, you'll create an attachment to it that defies any real ability you have to control things. OpenAI was never forced to productize at scale. It's a very real possibility that people have, pressured by the media and society itself, forced themselves to find meaning in LLMs somehow, even if it feels kind of stupid. And what I mean by this is, you get to the product and there really isn't that much guidance. Go on OpenAI's website and have a look at the ChatGPT page, and look at what it tells you to do. It's quite vague. You look at it: it can analyze data, right? It can generate stuff, it helps you with ideas. Is that good? Am I smart for using this? Everything else you look at in the software world will tell you what it is you're using it for, any consumer driven software at the very least. Yet ChatGPT never had to, because the media for three years, or two years I guess with ChatGPT, has kind of just sat there doing the work for them, telling them, oh yeah, you can use it as a powerful personal assistant. An assistant? To assist me with what?
Nasty Kevin Roose, Creepy Kevin, in the New York Times a few weeks ago, when he did the everyone's-using-AI piece with Casey Newton, said that it's a powerful assistant. It's like, for what? It can't do this shit. It can't control my calendar. I don't want it touching my emails. I don't think most people do either. So you've just got people who use it as a shit-ass search engine, an online companion and a brainstorming thing, which is a natural way to get people kind of addicted, but addicted to a product you don't truly control and a product you don't truly understand, one that can be swiped by just about anyone. I actually think we're on the kind of downward spiral for this shit. I am oinking like a pig and squawking like a bird watching this happen. Even CoreWeave is crashing as I record this. They're down 17.91%. Still up too much, though. And I really do think something has shifted thanks to GPT5. I'm excited. Like Dr. Stone once said, get excited, because I think the next few months, like the next year, is going to be chuckle heavy. We're going to be whimsy-pilled as we go through the remainder of the AI boom, and I'll be here to guide you through it. Thanks for listening. Say you've always wanted to get those new audiophile grade speakers. Here's the thing: if you invest well, you could get things like that. With Empower you can get money working for you so you can go out and live a little bit. Isn't that why we work so hard, to splurge sometimes, like on a massive high res TV, because the 65 inch one just isn't quite big enough? So use Empower to help get good at money so you can be a little bad. Join their 19 million customers today at empower.com. Not an Empower client. Paid or sponsored. Ah, come on, why is this taking so long? This thing is ancient.
