Better Offline – "Monologue: What's Going On At Anthropic?"
Host: Ed Zitron
Date: April 1, 2026
Episode Overview
In this solo monologue episode, Ed Zitron—tech industry veteran and sharp-tongued critic—dives into the controversies, confusion, and questionable practices currently swirling around Anthropic, the prominent AI company behind Claude and Claude Code. From abrupt rate limit changes and customer dissatisfaction to leaks of code and secret projects, Zitron paints a damning picture of a company scrambling for profitability and IPO readiness—apparently at the cost of both transparency and customer trust. He weaves in wit, skepticism, and exasperation as he unpacks how Anthropic’s practices reflect the wider, fast-and-loose culture dominating Big Tech AI.
Key Discussion Points & Insights
1. Setting the Scene: What’s Going Wrong at Anthropic?
- Intro and Personal Tone
Ed pivots to a monologue after a guest cancels, candidly setting up with humor:
“You’re getting two monologues this week because we had a guest pull out last minute, leaving me with just my keyboard, my microphone and a Diet Coke in one of those weird vacuum sealed tubes that really only real Diet Coke freaks own.” [02:19]
- Opens with rhetorical questions:
“What’s going on at Anthropic? What is Dario Amodei up to? Why do things keep breaking?” [02:45]
2. The Subscription Model & Unsustainable Economics
- Confusing and Opaque Subscriptions
Anthropic’s business hinges on subscriptions ($20, $100, $200/month) that offer “rate limits” on model usage, but these limits are vaguely defined and confusing by design. [03:09–03:32]
- Hidden Costs and Subsidies
“When you pay Anthropic $200 a month, you’re not paying on a per-token rate, which is what AI startups have to do. They pay per million input and output tokens. No, no, no. You just do stuff and stuff comes out [...] you can burn over two and a half thousand dollars in model tokens on a 200 buck a month subscription. Or at least you could.” [03:49–04:09]
- Profitability Problem
For Anthropic to even approach profitability, they’d need to “rate limit people into the depths of hell at this point.” [04:25]
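The subsidy math Ed describes can be sketched as a back-of-the-envelope calculation: bill a heavy month of usage at per-token API rates and compare it against the flat subscription price. The per-million-token rates and usage figures below are illustrative placeholders, not Anthropic’s actual published prices:

```python
# Hypothetical per-million-token rates (placeholders, not real pricing)
PRICE_PER_M_INPUT = 15.00   # $ per million input tokens
PRICE_PER_M_OUTPUT = 75.00  # $ per million output tokens

def api_cost(input_tokens: int, output_tokens: int) -> float:
    """What a workload would cost if billed per token, API-style."""
    return (input_tokens / 1_000_000) * PRICE_PER_M_INPUT \
         + (output_tokens / 1_000_000) * PRICE_PER_M_OUTPUT

# A hypothetical heavy coding-agent month: 100M input, 14M output tokens
cost = api_cost(100_000_000, 14_000_000)
subscription = 200.00

print(f"API-equivalent cost: ${cost:,.2f}")        # → $2,550.00
print(f"Subsidy vs. flat plan: ${cost - subscription:,.2f}")  # → $2,350.00
```

Under these assumed numbers, a single power user consumes over twelve times what they pay, which is the gap behind the “burn over two and a half thousand dollars on a 200 buck a month subscription” line.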
3. Recent Controversies: Rate Limits and User Backlash
- The March 2026 Incident
Anthropic launched a two-week double-rate-limits promo, then immediately cut functionality during “peak hours,” catching users off guard. [04:28–04:51]
- The company claimed “only 7% of users would hit the limits” [04:57], but real-world complaints suggest far more were affected.
- User Experiences
Multiple users paying $100–$200/month maxed out or nearly hit weekly limits with minimal usage, sometimes just a handful of prompts. One user said they “expected a premium experience for $200 and what they got was constant limit stress.” [05:48]
- Media and Public Reception
Zitron notes a groundswell of anger on Twitter and Reddit about these surprise restrictions: “Go on Twitter and search Claude limits. It’s not great.” [06:04]
4. Leaks, Mishaps, and Shoddy Practices
- Accidental Model Leaks, Maybe Not So Accidental
Anthropic “accidentally” left a cache of 3,000+ assets open online—hinting at its upcoming Capybara and Mythos models but offering no real details. Ed is skeptical:
“It kind of reminds me of like a skeezy bloke dropping a magnum condom out of his wallet... being like, hey, hey, you see that? Whoa. [Interjection: Whoops. Whoops.]” [07:08–07:16]
- Source Code Leak
Even more damaging, within days Anthropic’s entire Claude Code coding interface source was leaked due to a misconfiguration in an NPM package—“one that exposes Claude Code’s innards to the entire Internet and all of their competitors.” [08:12]
- The Role of AI-Created Code
Zitron skewers the notion that relying on LLMs to write code is safe:
“Any time you choose to just accept the code they generate without reading it thoroughly, you’re choosing to trust something inherently untrustworthy that doesn’t think or have knowledge.” [10:14]
5. The Culture of “Ship Fast, Break Things”
- Rushed Product Launches & Catastrophic Bugs
Ed recounts the hasty launch of Claude Cowork, built in less than two weeks. A user’s files were deleted during a simple task:
“He was only able to recover them thanks to an iCloud backup.” [11:10]
- Big Tech’s Drive for Speed, Not Quality
Ed lampoons the idea that “shipping software fast is the same thing as shipping good software.” [10:38–10:44]
- Consequences and Unaddressed Risks
Ed cites recent real-world failures:
“In the last few months, AI coding tools brought down AWS twice and lost Amazon hundreds of thousands of orders and led to a security breach inside Meta mere weeks ago.” [13:00]
6. Tech’s Deceptive Narrative and Customer Harm
- Misleading Customers and Captured Media
The media, Ed says, hypes these products without scrutiny, leaving customers with a product that degrades in value overnight. [11:22–11:34]
- Unpredictable Model Behavior
Users on annual subscriptions now face drastically reduced access and inconsistent utility, made worse by models that seem to get “dumber at random times.” [11:58–12:17]
- Ongoing rumors: models may be artificially scaled back at certain times, but there is no solid explanation.
- Ed urges listeners to contact him with inside info: “shoot me a message or email me at ez@betteroffline.com...” [12:22–12:28]
7. Broader Implications for the Tech Industry
- The episode closes with a warning: “Claude Code, one of the few popular AI products, is built with the same disregard for safety and customer happiness as the rest of Anthropic’s astonishingly shitty business that burns billions of dollars with no end in sight.” [12:40]
- Using LLMs irresponsibly is leading to security breaches, system outages, and ever-shakier trust in AI products. [13:00–13:19]
Notable Quotes & Moments
- “Dario Amodei doesn’t care about you, and he certainly doesn’t care about your family.” – Ed Zitron [02:58]
- “You just do stuff and stuff comes out and you’re able to do the same things you would if you used the API. But you’re burning tokens, and you can burn over two and a half thousand dollars in model tokens on a 200 buck a month subscription. Or at least you could.” – Ed Zitron [03:55–04:09]
- “For Anthropic to approach anything close to profitability, we’d have to rate limit people into the... depths of hell at this point.” – Ed Zitron [04:25]
- “I don’t know man... it kind of reminds me of like a skeezy bloke dropping a magnum condom out of his wallet in front of a woman being like, hey, hey, you see that? Whoa.” – Ed Zitron [07:08]
- (Re: LLM-written code)
“These models do not have thoughts or knowledge or really anything other than probabilistic generations of outputs... Any time you choose to just accept the code they generate without reading it thoroughly, you’re choosing to trust something inherently untrustworthy that doesn’t think or have knowledge.” [10:05–10:14]
- “This is the future that these so-called large language model companies want for you. Bad software shipped quickly, hyped by a captured media that doesn’t give a damn about whether the services are... functional or useful, and doesn’t even bother using the tools or understanding them.” [11:17–11:25]
Important Timestamps
- 02:19 — Episode begins post-advertisements; Ed’s setup for the solo monologue.
- 03:09–03:32 — Breakdown of Claude subscription tiers and their intentional opacity.
- 04:28–05:58 — Details on the promo/rate limits and rapid customer backlash.
- 06:03–06:11 — Ed directs listeners to additional resources and highlights community anger.
- 07:01–07:16 — Analysis of the Fortune-reported asset/model leak.
- 08:03–08:51 — Full explanation of Claude Code’s source code leak.
- 09:02–10:14 — Dangers of letting LLMs write code, with analysis of Boris Cherny’s claims.
- 10:42–11:10 — Claude Cowork catastrophe and broader critique of tech’s “ship fast” obsession.
- 12:17–12:40 — Odd quirks of model performance and Ed’s call for insider info.
- 13:00–13:19 — Recent high-profile failures and closing warnings.
Tone & Style
Ed Zitron’s tone is irreverent, punchy, skeptical—and often biting. He mixes humor with industry insider knowledge and isn’t afraid to call out both corporate spin and media complicity. There’s exasperation, sarcasm, and genuine frustration, especially as he weaves personal anecdotes and listener complaints into a broad indictment of the tech industry's reckless pursuit of growth.
Summary Takeaway
This episode spotlights a microcosm of what Zitron sees as tech’s current brokenness: promises made, then broken; products hyped, but not delivered; customer trust abused in the name of appeasing investors and chasing IPOs. If you’re curious about what’s happening at Anthropic—or, more broadly, how AI hype can mask chaos beneath—the episode is a sharp, entertaining, and damning listen.
