Podcast Host (30:14)
Now, I want to address one counterpoint. Some might argue that OpenAI has a new series of products that could open up new revenue streams, such as Operator, its Agent product, and Deep Research, their research product. And I I'm so tired of hearing about agents. Whenever you hear someone say agent, really look at what they're saying because they want you to think autonomous bit of software. What they're actually talking about is either a chatbot or well, well, the dog shit that OpenAI and Anthropic have warmed up, which we'll get to shortly. But first, let's talk costs. Both of these products are very compute intensive. Operator uses OpenAI's computer using agent, the CUA, which combines OpenAI's models with virtual machines that take distinct actions on web pages in this extremely unreliable and costly way where they take screenshots as they scroll down and it just doesn't fucking work. I had a whole thing about Casey Newton writing about this. It's just, it was just so bad. Like Casey Newton, you please go outside challenge. Just, just go outside. Casey, stop. Stop with the computer, you don't know what you're talking about. But failures with these, and remember these models, pretty much all of them are inconsistent. And the more in depth the thing you ask them to do, the more likely there's going to be a problem with it. So think about it like this. Failures from something you've asked them to do will either increase the amount of attempts you make to get the thing you want, or make users not use it at all. Not a really great idea. Now let's talk Deep Research. They use a version of OpenAI's O3 reasoning model, which is a model so expensive because it spends more time to generate a response based on the model, reconsidering and evaluating steps as it goes. That OpenAI will no longer launch O3 as a standalone model. And that's really a good thing. When you see a company, be like, yeah, you can't touch it. It's too expensive. 
In short, these products are extremely expensive to run, and this means that anytime their outputs aren't perfect, which is to say a lot of the time, there's a high likelihood that they'll be triggered again, which will in turn spend more compute. But let's talk about product-market fit, because this is really important. To use Operator or Deep Research currently requires you to pay for OpenAI's ChatGPT Pro, a $200-a-month subscription which Sam Altman recently revealed still loses them money because people are using it more than expected. And that is a quote. Furthermore, even on ChatGPT Pro, Deep Research is currently limited to 100 queries per month, with OpenAI adding that it is very compute intensive. And though Altman has promised that ChatGPT Plus and free users will eventually get access to a few Deep Research queries a month, well, that's not good for their cash burn. That's actually bad for the cash burn. I'm not sure it's gonna make them money. Not really sure how that turns into money anywhere.

But let's talk about Operator. Operator is this agent product where you're meant to be able to say, hey, go and look something up for me. And it only works like 30% of the time. It's just very bad. And as I covered in my newsletter a few weeks ago, this product claims to control your computer and does not appear to be able to do so consistently. It's not even ready for prime time, and I don't think it has a market. The way they're selling this is that you'll be able to make it do distinct tasks on the computer. But even Casey Newton, in his article, was like, yeah, it only works sometimes. And the things it works on are like searching TripAdvisor. Imagine this, if you will: what if, for the cost of boiling a lake and throwing an entire zoo into the lake and boiling the animals inside it, you could sometimes search TripAdvisor in two minutes versus, like, five seconds? The future's so cool. I love living in it.
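To put a rough shape on that retry problem: if a task succeeds with some probability and each attempt burns compute, the expected number of tries follows a geometric distribution, so the expected cost per *successful* task is cost divided by success rate. A minimal sketch, with entirely hypothetical dollar figures (these are not OpenAI's actual costs):

```python
def expected_cost_per_success(cost_per_attempt: float, success_rate: float) -> float:
    """Expected compute cost to get one successful result.

    Treats attempts as independent trials, so the number of tries
    until success is geometric with mean 1 / success_rate.
    """
    if not 0 < success_rate <= 1:
        raise ValueError("success_rate must be in (0, 1]")
    return cost_per_attempt / success_rate

# Hypothetical: $0.50 of compute per attempt.
# At a 90% success rate, each successful task costs about $0.56 on average...
print(expected_cost_per_success(0.50, 0.9))
# ...but at the roughly 30% rate described above, it roughly triples.
print(expected_cost_per_success(0.50, 0.3))
```

The point of the sketch is that unreliability doesn't just annoy users, it multiplies the compute bill: halve the success rate and you double the effective cost of every task the product completes.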
But let's talk about Deep Research for a second. It's already been commoditized. Perplexity AI and xAI launched their own versions immediately, and Deep Research itself is not a good product. As I covered in my newsletter last week, the quality of the writing you receive from Deep Research is really piss poor, and it's rivaled only by the appalling quality of its citations, which include forum posts and search-engine-optimized content instead of actual news sources. These reports are neither deep nor well researched, and they cost OpenAI a great deal of money to deliver. And just to give you a primer on what Deep Research is meant to be: you're meant to be able to type something in and it does, like, a 3,000-word report. It's gobbledygook, it's nonsense, it's bullshit. You should go and look it up, go to my newsletter, Where's Your Ed At. It's the piece before the one that's going to come out when these episodes come out, I forget the name exactly. You need to go and look at how shit Deep Research is. It's incredible that this money-losing juggernaut piece of shit thinks that this is a real product, and it's insulting to the intelligence of readers that people like Casey Newton claimed it was good.

Now that we've established that both of these products are expensive, commoditized and don't work very well, let's talk about how they make money, or don't. Both Operator and Deep Research, like I told you, currently require you to pay $200 a month to a company that loses money all the time, that also loses money on the $200 a month. Neither product is sold on its own, and while they may drive revenue to the ChatGPT Pro product, as I said before, said product loses OpenAI money. These products are also compute intensive and have questionable outputs, making each prompt very likely to create another follow-up prompt. And the problem is you're asking something that doesn't know anything, that probabilistically generates answers, to research something.
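That "probabilistically generates answers" point is the mechanical heart of the problem. A language model doesn't look anything up; at each step it scores possible continuations by how well they fit its training patterns and samples one. A toy sketch, with made-up tokens and made-up probabilities purely for illustration:

```python
import random

# Toy next-token distribution for a prompt like "The best source is".
# A real LLM does something like this at vastly larger scale: score every
# possible token, then sample. All probabilities here are invented.
next_token_probs = {
    "Wikipedia": 0.40,                # common in training data, so highly probable
    "Reddit": 0.30,                   # SEO'd forum content patterns score well too
    "TripAdvisor": 0.25,
    "a peer-reviewed journal": 0.05,  # rarer pattern, so rarely sampled
}

random.seed(0)
tokens, weights = zip(*next_token_probs.items())

# Sampling rewards the most statistically common continuation,
# not the most accurate or authoritative source.
picks = random.choices(tokens, weights=weights, k=1000)
for token in tokens:
    print(token, picks.count(token))
```

In this toy, "a peer-reviewed journal" almost never gets picked, not because the system judged it worse, but because it's a rarer pattern, which is the shape of the citation problem described above.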
So as a result, the research isn't going to be any good. It's not like it's going to research it and go, hey, what would be a good source? It's going to ask: what matches the patterns? What matches all the patterns it was trained on? Eh, that's fine, who gives a shit? It's like having the world's worst intern, except the intern gets a concussion every 10 minutes. But in summary: both Operator and Deep Research are expensive products to maintain, are sold through an expensive $200-a-month subscription that, like every other service provided by OpenAI, loses them money, and, due to the low quality of their outputs and actions, are likely to increase user engagement as people retry to get the desired output, incurring further costs for OpenAI.

Well, you know: Ed, Ed, you're just being a hater, right? Just being a hater. Things don't look great today, but it's early days. It isn't early days. But still, Ed, it's early days. Things don't look great today, but what about the future prospects for OpenAI? Things can't be that bad, can they? Yeah, they can. A week or two ago, Sam Altman announced the updated roadmap for GPT-4.5 and GPT-5. Now, these are the next-generation models that they've been hyping up for the best part of a year. Except GPT-4.5 didn't exist before; it was always GPT-5. Now, GPT-4.5 will be OpenAI's last non-chain-of-thought model, chain of thought referring to the core functionality of its reasoning models, where the model checks its work as it goes. It really... it uses a model to ask another model whether the model's doing the right thing. Can they both hallucinate? Yes. GPT-5 will be, and I quote Sam Altman, a system that integrates a lot of OpenAI's technology, including o3. What the fuck are you talking about? Altman also vaguely suggests that paid subscribers will be able to run GPT-5 at a higher level of intelligence, which likely refers to being able to ask the models to spend more time computing an answer.
He also suggests that GPT-5, and I quote, will incorporate voice, Canvas, search, Deep Research, and more. Fucking Bed, Bath and Beyond, motherfucker. Come on, my man. Your company spent $9 billion to lose $5 billion. Why is anyone taking this seriously? This is ridiculous. All of these statements, honestly, vary from vague to meaningless, but I hypothesize the following: GPT-4.5 will be an upgraded version of GPT-4o, OpenAI's foundation model that you're probably using right now, and it's codenamed Orion. GPT-5, which used to be codenamed Orion, could literally be anything. But one thing that Altman mentioned in the tweet is that OpenAI's model offerings have gotten too complicated, and they'd be doing away with the ability to pick which model you use. Gussying this up, he's claiming it's "unified intelligence." This fucking guy. If I said this shit to a doctor, they'd institutionalize me. They'd say, you sound like a lunatic. But anyway, as a result of doing away with the model picker, which is literally the thing you click to choose GPT-4o or GPT-4o mini or, like, the o1 reasoning models, I think they're going to attempt to moderate costs by picking which model will work best for a given prompt, a process it will automate. And if there's one thing I've noticed with OpenAI, they're not very good at automating anything, so I expect this to be bad. And I believe that Altman announcing these things is a very bad omen for OpenAI, because Orion has been in the works for more than 20 months and was meant to be released at the end of 2024, but was delayed after multiple training runs resulted in, to quote the Wall Street Journal, software that fell short of the results researchers were hoping for. As an aside, the Wall Street Journal refers to Orion as GPT-5. That was from several months back, but based on the copy and Altman's comments, I believe Orion refers to a foundation model from OpenAI, one meant to replace the core GPT model that powers ChatGPT.
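The automated model picker described above can be sketched in a few lines. To be clear, this is a hypothetical illustration of what prompt routing generally looks like, not OpenAI's actual system; the model names, prices, and heuristics are all invented:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # invented prices, not real ones

# Hypothetical tiers, cheapest first.
CHEAP = Model("small-chat-model", 0.0002)
MID = Model("flagship-model", 0.0050)
REASONING = Model("reasoning-model", 0.0600)

def route(prompt: str) -> Model:
    """Crude router: guess task difficulty from surface features of the prompt.

    This is exactly the hard part of automating the picker: the router
    itself has to judge difficulty, and a wrong guess either wastes
    expensive compute on an easy question or sends a hard one to a
    model that can't handle it.
    """
    lowered = prompt.lower()
    if any(cue in lowered for cue in ("prove", "step by step", "plan")):
        return REASONING
    if len(prompt.split()) > 100:
        return MID
    return CHEAP

print(route("What's the capital of France?").name)     # small-chat-model
print(route("Prove this step by step, please.").name)  # reasoning-model
```

The cost motive is visible in the numbers: misrouting an easy prompt to the reasoning tier here costs 300x more per token, which is why a company burning billions would want this automated, and why an unreliable router is such a liability.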
OpenAI now appears to be calling a hodgepodge of different mediocre models something called GPT-5. It's almost as if Altman's making this up as he goes along. Now, the Journal further adds that, as of December, Orion performed better than OpenAI's current offerings, but hadn't advanced enough to justify the enormous costs of keeping the new model running, with each six-month-long training run, no matter how well it works, costing over $500 million. OpenAI, like every generative AI company, is also running out of high-quality training data, the data necessary to make its models "smarter" based on benchmarks specifically made up to make LLMs seem smart. And I should note that being "smarter" means completing tests, not new functionality or new things it can do. Sam Altman demoting Orion from GPT-5 to GPT-4.5 suggests that OpenAI has hit a wall with making its new model, requiring him to lower expectations for a model that OpenAI Japan president Tadao Nagasaki had suggested would, and I quote, aim for 100 times more computational volume than GPT-4, which some took to mean 100 times more powerful, when it actually means it will take way more computation to train or run inference on. I guess he was right. Now, if Sam Altman, a man who loves to lie, is trying to reduce expectations for a product, I think we should all be really, really worried. Large language models, which are trained by feeding them massive amounts of training data and then reinforcing their understanding through further training runs, are hitting the point of diminishing returns. In simple terms, to quote friend of the show Max Zeff of TechCrunch, everyone now seems to be admitting you can't just use more compute and more training data when pre-training large language models and expect them to turn into some all-knowing digital god. Max is a fucking legend.
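The "diminishing returns" point has a quantitative form in the scaling-law literature: pre-training loss falls as a power law in parameters and data, so each equal improvement costs multiplicatively more compute. A sketch using the Chinchilla-style formula L(N, D) = E + A/N^alpha + B/D^beta, with coefficients roughly from the published Hoffmann et al. (2022) fit, used here only to show the shape of the curve (these are not OpenAI's numbers):

```python
def chinchilla_loss(params: float, tokens: float) -> float:
    """Power-law loss estimate L(N, D) = E + A / N**alpha + B / D**beta.

    Coefficients are roughly the published Chinchilla fit
    (Hoffmann et al. 2022), used only to illustrate diminishing returns.
    """
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / params**alpha + B / tokens**beta

# Going 10x bigger each time buys a smaller and smaller drop in loss,
# while training cost grows enormously.
for n in (1e9, 1e10, 1e11):
    print(f"{n:.0e} params: loss ≈ {chinchilla_loss(n, 1e12):.3f}")
```

Note the floor: the E term means no amount of parameters or data drives loss to zero, which is the formal version of "you can't just scale your way to a digital god."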
OpenAI's real advantage, other than the fact it's captured the entire tech media, has been its relationship with Microsoft, because access to large amounts of compute and capital allowed it to corner the market for making the biggest, most hugest large language model. Now that it's pretty obvious this isn't going to keep working, OpenAI is scrambling, especially now that DeepSeek has commoditized reasoning models and proven that you can build LLMs without the latest GPUs. It's unclear what the functionality of GPT-4.5 or GPT-5 will be. Does the market care about an even more powerful large language model if said power doesn't do anything new or lead to a new product? Does the market care if "unified intelligence" just means stapling together various models to produce more outputs that kind of look and sound the same? As it stands, OpenAI has effectively no moat beyond its industrial capacity to train large language models and its presence in the media. OpenAI can have as many users as it wants, but it doesn't matter, because it loses billions of dollars and appears to be continuing to follow the money-losing large language model paradigm, guaranteeing it will lose billions of dollars more if they're allowed to. This is the biggest player in the generative AI industry, both the market leader and the recipient of almost every single dollar of revenue that this industry generates. They have received more funding and more attention than any startup in the last few years, and as a result, their abject failure to become a sustainable company is a terrible sign for Silicon Valley and an embarrassment to the tech media. In the next episode, I'm going to be honest, I have far darker news. Based on my reporting, I believe that the generative AI industry outside of OpenAI is incredibly small, with little to no consumer adoption and pathetic amounts of revenue compared to the hundreds of billions of dollars sunk into supporting it.
This is an entire hype cycle fueled by venture capital and big tech hubris, with little real adoption and little hope for a turnaround. Enjoy tomorrow's monologue, and then the final part on Friday. Thank you for listening to Better Offline. The editor and composer of the Better Offline theme song is Matt Osowski. You can check out more of his music and audio projects at mattosowski.com. You can email me at ez@betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter. I also really recommend you go to chat.wheresyoured.at to visit the Discord, and go to r/betteroffline to check out our Reddit. Thank you so much for listening. Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website, coolzonemedia.com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.