Better Offline — "How to Argue With An AI Booster, Part Two"
September 11, 2025 | Host: Ed Zitron, Cool Zone Media and iHeartPodcasts
Episode Overview
In the second installment of his "How to Argue With An AI Booster" series, host Ed Zitron delivers a critical, often caustic deconstruction of the rhetorical tactics and core arguments wielded by AI industry enthusiasts and apologists ("boosters"). Ed sets out to arm listeners with context, history, and ready-made rebuttals to well-worn but often misleading or hollow pro-AI talking points. With historical context, in-depth economic breakdowns, and vivid analogies, he aims to separate fanfiction and hype from reality, emphasizing the real limitations and unsustainable economics of today's generative AI boom.
Key Discussion Points & Insights
1. Fiber Optic Boom vs. the AI Infrastructure Boom
Timestamp: 03:30–08:30
- The .com/fiber analogy: Ed critiques the common AI booster claim that today’s GPU/data center over-investment is just like the late-90s fiber optic cable boom—where vast overcapacity eventually enabled broadband for all.
- Ed highlights:
- The telecoms boom of the late 90s was fueled by regulatory changes, debt, and speculative spending, ending in a bust rife with fraud and excess.
- While fiber cables laid then are still useful, today’s GPUs are centralized, have fewer use cases, and rapidly depreciate.
- Open access to “cheap” GPUs exists now, but there’s no evidence this unlocks fundamentally new mass-market applications the way fiber did.
- Notable quote: "What are these GPUs setting up exactly?" (08:30)
2. The "AI 2027" Narrative as Fanfiction
Timestamp: 08:30–11:25
- Ed calls out the “AI 2027” manifesto (a popular speculative roadmap often invoked by boosters) as “fan fiction,” designed to impress or frighten the credulous with technical jargon—rather than grounded plausibility.
- He argues it’s speculative vaporware, positioning those who believe in it as no less gullible than participants in the Salem witch trials.
- Notable quote:
- "AI 2027 is fan fiction, nothing more. And just because it's full of fancy words and has five different grifters on its byline doesn't mean a goddamn thing." (11:15)
- Cites Sarah Lyons: “AI 2027 and AI in general is no different from the spurious spectral evidence used to accuse someone of being a witch during the Salem witch trials.” (10:30)
3. The Cost of AI Inference: Rising, Not Falling
Timestamp: 14:47–28:00
- Booster catchphrase addressed: “The cost of inference is coming down!”
- Ed systematically debunks the idea that generative AI is getting cheaper and more economically efficient:
- Industry rhetoric confuses “token price” (what customers are billed) with the underlying real computational cost (what it takes to actually run a query).
- Newer, more "powerful" models require much more computation per action (“token burn”), especially for reasoning and code-generation tasks, causing total costs to increase despite isolated instances of price drops.
- Quotes multiple experts/industry writers noting these rising costs.
- Cites Theo Brown’s video: “I Was Wrong about AI Costs – they keep going up.” (23:00)
- Notable quote:
- “You cannot at this point fairly evaluate whether a model is cheaper just based on its cost per tokens, because reasoning models inherently burn and are built to inherently burn more tokens to create an output... The cost of inference has gone up. Statements otherwise are purely false and are the opinion of somebody who does not know what he's talking about.” (26:00)
- “If you have the temerity to call someone out directly, at least be fucking right. I'm not wrong. You're wrong.” (25:15)
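Ed's distinction between token price and total inference cost can be sketched with toy arithmetic. A minimal illustration, with all figures hypothetical and chosen only to show the mechanism: a lower per-token price can still produce a higher total bill if the newer "reasoning" model burns far more tokens per task.

```python
# Hypothetical illustration of Ed's point: cheaper tokens do not imply
# cheaper tasks when newer models burn many more tokens per query.
# All numbers below are made up, purely for arithmetic.

def task_cost(price_per_million_tokens: float, tokens_per_task: int) -> float:
    """Total cost of one task = (price per token) * (tokens consumed)."""
    return price_per_million_tokens / 1_000_000 * tokens_per_task

# Older model: pricier per token, but answers directly.
old_cost = task_cost(price_per_million_tokens=10.0, tokens_per_task=2_000)

# Newer "reasoning" model: cheaper per token, but emits long chains of
# intermediate reasoning tokens before producing an answer.
new_cost = task_cost(price_per_million_tokens=4.0, tokens_per_task=50_000)

print(f"old model: ${old_cost:.3f} per task")  # $0.020
print(f"new model: ${new_cost:.3f} per task")  # $0.200
assert new_cost > old_cost  # cheaper per token, costlier per task
```

This is the "token burn" effect in the episode: judging models by price per token alone hides the fact that total computation per action has gone up.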
4. "AI Companies Will Be Profitable Like Uber" Fallacy
Timestamp: 28:00–32:49
- Ed dismantles the “Uber burned lots of money and became essential/profitable, so AI companies will too” argument:
- Uber solved a terrible user experience immediately and scaled through VC-fueled subsidies—eventually becoming essential as an alternative to dysfunctional taxis.
- Gen AI, by contrast, has no comparably essential use-case—even its so-called government contracts are priced to create the appearance of centrality, not actual reliance.
- The magnitude of capital expenditures and financial losses in AI (OpenAI, Anthropic) dwarfs Uber—compute costs, infrastructure buildout, and talent costs are on an entirely different order of magnitude.
- Uber’s massive losses subsidized usage and adoption—AI’s subsidies mainly prop up core functionality, with high marginal costs per user.
- Notable quote:
- “There really are no essential use cases for ChatGPT or really any gen AI system. You cannot point to one use case that is anywhere near as necessary as cabs in cities.” (32:05)
5. Data Centers and “Economic Growth”
Timestamp: 32:49–36:00
- Refutes the line that AI/LLM data centers are essential growth engines similar to tech booms of the past.
- The vast capex is absorbed by the largest tech companies (Microsoft, Amazon, Google), but the AI units themselves deliver little profit or meaningful trickle-down.
- No downstream impact comparable to the long-term benefits of the fiber buildout.
6. True Scale of AI Company Losses
Timestamp: 36:00–end
- Provides a detailed, numbers-driven comparison:
- Uber’s total burn over a decade ($25 billion) is dwarfed by the tens of billions burned each year by OpenAI and Anthropic, not counting the gigantic cloud provider infrastructure spend.
- Notable quote:
- “The true cost of OpenAI is at least $82 billion and that only includes CapEx in 2024 onwards... To put it real simple, AI has burned way more in the last two years than Uber burned in 10.” (38:30)
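The episode's burn figures can be annualized for a rough like-for-like comparison. A back-of-envelope sketch using only the numbers Ed cites (the annualization itself is simple arithmetic, not a figure from the episode):

```python
# Figures from the episode: Uber burned ~$25B over roughly a decade;
# Ed puts OpenAI's true cost at "at least $82 billion" counting CapEx
# from 2024 onward, i.e. roughly two years. Annualization is our own math.
uber_total_burn = 25e9
uber_years = 10
openai_claimed_cost = 82e9
ai_years = 2

uber_per_year = uber_total_burn / uber_years
ai_per_year = openai_claimed_cost / ai_years

print(f"Uber: ~${uber_per_year / 1e9:.1f}B/year")  # ~$2.5B/year
print(f"AI:   ~${ai_per_year / 1e9:.1f}B/year")    # ~$41.0B/year
```

On these figures, the annual AI burn rate is more than an order of magnitude above Uber's, which is the core of Ed's "more in two years than Uber in ten" point.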
- Concludes with a promise to dispense with the “dumbest of the dumb arguments” in the final episode of this series.
Most Memorable Quotes
- On AI boosters' analogies: "This was not an infrastructure buildout. The GPU boom is a heavily centralized capital expenditure funded asset bubble where a bunch of chips will sit in warehouses or kind of fallow data centers waiting for somebody to make up a use case for them." (08:25)
- On "AI 2027" speculation: "It doesn't matter if all the people writing the fanfiction are scientists or that they have the right credentials. They themselves say that AI 2027 is a guess... It's up there with My Immortal. And no, I'm not explaining that." (09:45)
- On inference costs: "Inference is a thing that costs money, which is entirely different to the price of tokens, and conflating the two is journalistic malpractice." (18:40)
- On Uber-vs-AI comparisons: "Uber worked immediately. You used it, you're like, wow... The costs associated with Uber are minuscule compared to the actual real costs of OpenAI and Anthropic." (34:45)
Timestamps for Important Segments
- 03:30 — Ed introduces why comparing AI infrastructure buildout to the .com/fiber boom is misleading.
- 08:30 — "AI 2027" fanfiction argument: why it shouldn’t be taken seriously.
- 14:47 — The cost of inference: exploring what’s actually getting cheaper or more expensive.
- 23:00 — Theo Brown and Ethan Ding’s insights on inference cost blow-ups.
- 28:00 — “Uber became essential/profitable, so AI will too” doesn’t track.
- 32:49 — Are data centers really growth engines? (Short answer: No.)
- 36:00 — The true, staggering dollar costs of OpenAI and Anthropic versus Uber.
Tone and Language
- Ed's delivery is sharp, irreverent, and unapologetically direct, blending economic analysis with sarcasm and pop culture references.
- The episode is rich with historical analogy and a determination to ground the AI discussion in facts, not booster wish-casting.
For Newcomers: Why Listen to This Episode?
- If you’re overwhelmed by techie buzzwords and vague promises about the "transformative" future of AI, this episode is a bracing—if caustic—antidote, giving you fact-based counters to industry talking points.
- The episode offers not just rebuttals, but frameworks for skepticism, helping listeners distinguish between real innovation and self-serving hype.
Next Episode Tease
Ed hints the third and final entry in this series will tackle the most absurd pro-AI arguments—suggesting an even sharper takedown to come.
For more on these topics, visit Better Offline’s newsletter, Discord, or subreddit as mentioned by Ed during the episode.
