Better Offline – "The Case Against Generative AI (Part 3)"
Host: Ed Zitron | Release Date: October 2, 2025
Overview
In the third installment of this four-part series, tech industry veteran Ed Zitron delivers a scathing, data-packed critique of the current state of generative AI, focusing on the (lack of) business viability of AI startups, spiraling infrastructure costs, and the myth that generative AI is replacing software engineers. Zitron scrutinizes the unsustainable economics underpinning the sector, sharply rebuts media hype around AI’s productivity claims, and probes the futility of enterprise-level AI adoption, even at purported leaders like Microsoft.
Key Discussion Points & Insights
1. The AI Bubble: Revenue, Costs, and the Myth of Profitability
- Zitron begins by reestablishing the main thesis of the series: Generative AI is "an industry that’s meant to be the future of software," yet almost nobody is making real money ([02:05]).
- With rare exceptions (OpenAI, Anthropic), "AI startups are floundering, struggling to stay alive, and raising money in several hundred million dollar bursts as their negative gross margin businesses flounder" ([03:20]).
- The most-cited example: the top earner, Anysphere, makes about $41.6 million in monthly revenue—"piss poor revenue for an industry that's meant to be the future" ([02:33]).
- Zitron highlights burn rates: companies like Perplexity and Replit spend more than they make, with Perplexity "burning 164% of its revenue on Amazon Web Services, OpenAI, and Anthropic last year" ([04:32]); a quick arithmetic sketch of what that ratio implies follows this list.
- "Every user loses you money in Generative AI because it’s impossible to do cost control in a consistent manner" ([05:36]).
Notable Quote
"This is not a real business. That’s a bad business with out of control costs. And it doesn't appear anybody has these costs under control."
— Ed Zitron ([07:54])
2. Insane User Burn and Lack of Cost Controls
- Anthropic’s Claude Code as a case study: some users burn thousands of dollars’ worth of infrastructure—one user reportedly "burned $51,291 over the course of a month" on a $200 subscription ([06:21]); a back-of-the-envelope comparison follows this list.
- Companies can't control or accurately predict user behavior: "Even the model developers have no real way of limiting user activity, likely due to the architecture of generative AI" ([07:21]).
- Despite weekly rate limits, rampant overuse continues, indicating systemic problems inherent to how LLMs are priced and managed.
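As a back-of-the-envelope comparison, here is that $51,291 case set against the $200 plan it ran on. Both figures come from the episode; the dollar amount is the list-price value of the inference consumed, not Anthropic's internal cost of serving it.

```python
# Both figures are as reported in the episode; treat them as cited, not verified.
monthly_subscription_price = 200    # USD, the plan the user was on
inference_consumed = 51_291         # USD, list-price value of one month of usage

multiple = inference_consumed / monthly_subscription_price
shortfall = inference_consumed - monthly_subscription_price

print(f"Usage worth ~{multiple:.0f}x the subscription price")      # ~256x
print(f"Roughly ${shortfall:,} of usage not covered by the plan")  # $51,091
```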
3. Shady Pricing & The User Backlash
- Companies are shifting tactics, "doing nasty little tricks on their customers to juice more revenue" ([08:33]).
- Example: Replit launched its "Agent 3" feature after a price hike, leaving users hit with unexpectedly large bills, like one who "spent $1k this week alone" ([09:16]).
- Users report opaque, exploitative pricing structures in which the agents often do nothing yet still rack up charges.
Notable Quote
"Replit is charging them for an activity when the agent doesn't do anything, a consistent problem I found across Redditors…man are users pissy."
— Ed Zitron ([10:26])
4. The Broken Economics of AI SaaS
- The conventional VC-backed software playbook is to "burn a bunch of money, then turn the profit lever." Zitron argues:
"AI does not have a profit lever because the raw costs of providing access to AI models are so high and they're only increasing, that the basic economics of how the tech industry sells software don't make sense." ([11:08])
5. Why Coding with LLMs Fails as a So-Called "Killer App"
- Zitron lambasts the myth that coding tools like Claude Code or GitHub Copilot are effective substitutes for engineers.
- "Code generation AIs from an industry standpoint are roughly the equivalent of a slightly below average computer science graduate fresh out of school without any real world experience" — Carl Brown ([18:43]).
- Tools can solve "straightforward problems" but "lack the experience to be wholly trusted, and trust is the most important thing you need to fully delegate coding tasks" (Colt Vogie, [20:52]).
- Zitron repeatedly chastises the media for parroting the "AI replacing engineers" myth without actually investigating it ("Members of the media, I am begging you, stop… you are embarrassing yourself." [19:27]).
Notable Quotes
“This grotesque, manipulative, abusive and offensive lie has been propagated through the entire business and tech media without anybody sitting down and asking whether it’s true.”
— Ed Zitron ([18:31])
“LLMs are capable of writing code but can't do software engineering because software engineering is the process of understanding, maintaining and executing code to produce functional software. And LLMs do not learn, cannot adapt.”
— Ed Zitron ([21:09])
6. Generative AI Is Not (And Cannot Be) Replacing Software Engineers
- Zitron draws a key distinction:
"The reality is that software engineers maintain software, which includes writing and analyzing code amongst a vast array of different personalities and programs and problems" ([24:23]).
- True translation (and, by analogy, true engineering) requires creativity and contextual knowledge that LLMs fundamentally lack.
- Even the basic metrics and studies fail to show LLMs making engineers faster; some find that engineers using them are actually slower.
7. Enterprise AI Is… Also Bombing
- Despite Microsoft’s massive sales infrastructure, only a minuscule fraction of Microsoft 365 users pay for Copilot—"about 8 million active licensed—so paying—users of Microsoft 365 copilot, amounting to a 1.81% conversion rate across 440 million Microsoft 365 subscribers" ([27:39]); a quick ratio check follows this list.
- Copilot's actual revenue is a drop in the bucket compared to Microsoft's overall Productivity and Business Processes segment.
- Microsoft discounts Copilot heavily; real engagement (actual use, not just license sales) is almost nonexistent ("SharePoint, with over 250 million users, has less than 300,000 weekly active users of their copilot features" [30:08]).
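A quick ratio check on the enterprise numbers, using only figures quoted in the episode and treating "over 250 million" and "less than 300,000" as point values for the sketch:

```python
# All inputs are figures quoted in the episode; the ratios are sanity-check arithmetic only.
m365_subscribers = 440_000_000        # Microsoft 365 subscribers
copilot_paid_seats = 8_000_000        # active licensed (paying) Copilot users

sharepoint_users = 250_000_000        # "over 250 million" treated as a point value
sharepoint_copilot_wau = 300_000      # "less than 300,000" weekly active Copilot users

print(f"Paid Copilot conversion:       {copilot_paid_seats / m365_subscribers:.1%}")      # ~1.8%
print(f"SharePoint Copilot engagement: {sharepoint_copilot_wau / sharepoint_users:.2%}")  # 0.12%
```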
Notable Quote
"If Microsoft's doing this badly, I don't know how anyone else is doing well. And they're not. They're all failing. It's pathetic."
— Ed Zitron ([31:26])
Timestamps for Important Segments
| Timestamp | Segment Summary |
|-----------|-----------------|
| 02:05 | Introduction to generative AI economics—“everyone is losing money” |
| 04:32 | Burn rates and negative margins at Perplexity and Replit |
| 06:21 | User excesses on Anthropic’s Claude Code; the $51K single-user token burn |
| 08:33 | Shift toward exploitative pricing and user blowback |
| 11:08 | Why AI SaaS economics are terminally broken |
| 14:07 | Specifics on AI infra costs and unpredictability |
| 18:43 | Expert quotes on the actual limits of AI coding tools (Carl Brown, et al.) |
| 19:27 | Zitron’s plea to media: stop spreading the ‘AI is replacing coders’ myth |
| 24:23 | Analogy between code and translation, and why AI can’t perform these creative acts |
| 27:39 | Microsoft 365 Copilot’s weak adoption and revenue figures |
| 30:08 | Hard evidence of non-usage in enterprise environments (SharePoint numbers) |
| 31:26 | Zitron’s summary: “They’re all failing. It’s pathetic.” |
Memorable Moments & Tone
- Ed Zitron maintains a fiercely skeptical, at times profane, but always data-driven tone—with a trademark mix of dark comedy, caustic metaphors, and frustration at the industry’s hype machine.
- He repeatedly goes after the media for spreading shallow narratives and calls on tech journalists to "either use these things yourself or speak to people that do" ([19:34]).
- The segment where Zitron reveals Microsoft’s "8 million" paying Copilot users lands as a buried-lede moment ([27:39]).
- A recurring theme: incredulity at the gap between industry hype ("this was supposed to be the saving grace") and the bleak reality of AI economics.
Conclusion
Part 3 of "The Case Against Generative AI" ruthlessly dissects the unprofitable business models and collapsing narratives that prop up the generative AI sector. While tools like Claude code and Copilot are touted as replacements for software engineers, Zitron’s expert testimony and economic analysis demolish the idea that AI can viably or meaningfully substitute for skilled human labor. With unsustainable margins and lackluster enterprise adoption, the generative AI boom looks increasingly like a bubble on the verge of bursting.
End of summary
