Transcript
Oracle Representative (0:00)
AI is rewriting the business playbook, with productivity boosts and faster decision making coming to every industry. If you're not thinking about AI, you can bet your competition is. This is not where you want to drop the ball. But AI requires a lot of compute power, and with most cloud platforms, the cost for your AI workloads can spiral. That is, unless you're running on OCI, Oracle Cloud Infrastructure. This is the cloud built for AI: a blazing-fast, enterprise-grade platform for your infrastructure, database, apps, and all your AI workloads. OCI costs 50% less than other major hyperscalers for compute, 70% less for storage, and 80% less for networking. Thousands of businesses have already scored with OCI, including Vodafone, Thomson Reuters, and Suno AI. Now the ball's in your court.
Bob Pittman (0:45)
Right now, Oracle can cut your current cloud bill in half if you move to OCI. Minimum financial commitment and other terms apply. Offer ends March 31st. See if your company qualifies for this special offer at oracle.com/strategic. That's oracle.com/strategic.
Ed Zitron (1:02)
Zone Media. Hello and welcome to your weekly Better Offline monologue. I, of course, am Ed Zitron. So, as you'll find out in tomorrow's episode, the future of generative AI hinges heavily on OpenAI raising tens of billions of dollars, the majority of it from SoftBank, a huge Japanese investment firm that has to take out billions of dollars in loans to fund them, along with its contribution to the Stargate data center project. And as I went into in yesterday's episode, Microsoft is pulling back from over a gigawatt of data center capacity. It certainly looks like these moves are an intentional effort to distance itself from OpenAI, cutting back on data center expansion just as America's worst company needs more of it. Meanwhile, Marc Benioff, CEO of Salesforce, is sounding the alarm, saying at a recent CNBC conference that he believed hyperscalers were under hypnosis in their aggressive pursuit of data center expansion and training larger and larger language models. Benioff believes that, and I quote-ish, "it" (referring to data center expansions and larger language models) "has to be rethought. Exactly what are you doing and why are you doing this?" That's a bloody good question, Marc. To be clear, Marc Benioff has been saying that Salesforce was adding some sort of Einstein AI shit for the best part of a decade as a means of boosting his stock price. So why is big tech's most effusive bullshitter saying this? Do you think it's because things are going well? Do you think it's because sales of Agentforce and other associated products are doing really well? Look, as I've repeatedly said: where is the money in this industry? What have these companies actually built with generative AI? Where are the products that matter, and why do they matter? Do you really think ChatGPT is revolutionary? Do you think any of this is revolutionary?
We are two years in and I'm still getting DMs from people asking me what it would take to make me believe that this is all the future. And I'm so fucking tired of being asked this. The arguments I make are grounded in numbers and things that have happened, not just financial details and statistics, but in objective evaluations of the products in question, their efficacy at tasks, and the people involved. Yet somehow I and other critics are continually made to justify ourselves, while Sam Altman of OpenAI and Dario Amodei of Anthropic vaguely suggest that we'll have a conscious, autonomous computer by the year 2027. When I ask how OpenAI survives as it spends $9 billion to lose $5 billion, I'm obliquely threatened by Casey Newton of Platformer and Hard Fork that he's taking detailed notes about anyone who believes OpenAI might go bankrupt or run out of money. When Ezra Klein suggests that AGI is about to arrive in a conversation with some sort of former Biden administration AI con artist, I'm sent the link 30 times by people asking, does this mean you're wrong? I realize I'm complaining, but I'm justified in doing so. Why the fuck do I and other critics have to make rigorously founded and persuasive arguments while AI companies spout fantastical nonsense? Why does Sam Altman get headlines when he posts about, and this did just happen, by the way, making an AI that can do creative writing? And I wish it was just ignorance. People like Casey Newton and Ezra Klein aren't stupid, but they're also fully willing to back the narratives of powerful people that they want to be friends with. They want the rich and powerful to win, and they want to be the people that write their narratives and get their interviews. And yeah, I'm being petty; these are people that ostensibly compete with my work. But people with such a large audience have a responsibility to said audience, not to what they wish would come true. And really, I've got to ask: how does all of this end?
Right now we've got Anthropic, a company that allegedly makes $150 million a month, according to The Information, but loses over $5 billion a year, also reported by The Information. And they make a commoditized product, one very similar to OpenAI's, a company that will also likely lose a shit ton of money, $11 billion or more in 2025. These companies are dependent on receiving billions or tens of billions of dollars a year in funding for an indeterminate amount of time, for an equally indeterminate goal. I'm being completely objective here: there's nothing that these companies have made that suggests anything will change. Every new version of Claude Sonnet or GPT is iterative, and the products we see today are alarmingly similar to the ones we saw in the last two years. Despite everybody talking about agents, the actual agents that exist don't really work, and those that are able to kind of complete a task cost thousands of dollars and, again, don't always work. This industry is unprofitable, unsustainable, and does not appear to be able to create a product that people want to pay for, let alone one that they pay enough for to put the company making it in the green. We're two years in. How do we not have one profitable generative AI company? Other than, what is it, Turing? They're a consultancy. It does not count. I do want to say, though, I'm not cheering on the apocalypse. What I've been describing for the last year is a group delusion where hundreds of billions of dollars got funneled into an environmentally and financially destructive distraction from the real problems that humanity faces. The longer this goes on, the worse it will be for the tech industry, because once this bubble bursts, it will puncture everything: tens of thousands of people laid off, brutal damage done to tech valuations, and likely a glut of tech talent that depresses wages across the valley. What's important to know is that so much of this could have been avoided.
Microsoft could have chosen not to continue sustaining OpenAI, and Google and Amazon could have refused to back Anthropic, or just not done this nonsense. So-called reporters like Casey Newton and Ezra Klein could have made these companies justify themselves rather than operating as so-called cautious optimists who end up mostly just parroting marketing materials. And the larger media could have covered generative AI based on what it does rather than what they're told it might do by somebody who has a financial incentive to lie. In any case, when this collapses, mark my words: I have been taking very, very detailed notes. I've been watching those who have sustained this bullshit narrative, and other bullshit narratives in cryptocurrency and the Metaverse, willfully misleading the public in the process. And when the time is right, I will coldly and clinically read you every single time they've done so. Anyway, enjoy tomorrow's episode.
