Radio Better Offline: Edward Ongweso Jr. & Allison Morrow
Podcast: Better Offline (Cool Zone Media & iHeartPodcasts)
Host: Ed Zitron
Guests: Allison Morrow (CNN Nightcap newsletter), Edward Ongweso Jr. (The Tech Bubble newsletter)
Date: September 17, 2025
Episode Overview
In this episode, host Ed Zitron is joined by tech journalists Allison Morrow and Edward Ongweso Jr. to dissect the current state of the AI industry—specifically OpenAI’s approach to hallucinations, the disconnect between massive investment and actual utility, and the mounting skepticism about the future (and financial logic) of generative AI. They also compare today’s AI hype with earlier tech bubbles (like the Metaverse and crypto), discuss the role of media skepticism, and critique the tech industry’s tendency to promise revolutionary societal changes without delivering proven value.
The tone throughout is skeptical, irreverent, and candid, with honest admissions of confusion and frustration with the tech media echo chamber, VC-driven hype cycles, and the lack of tangible progress in AI applications.
Key Discussion Points & Insights
1. OpenAI, Hallucinations, and the “I Don’t Know” Fix
Timestamps 04:18–09:02
- OpenAI’s latest claim: They believe they've discovered the cause of AI “hallucinations” (where the model confidently makes things up) and propose a fix: encourage the model to answer “I don’t know” more often.
- Allison Morrow: Compares models’ guessing to how students make educated guesses on standardized tests; models aren’t rewarded for silence or admitting ignorance.
- “You don’t get a point if you say I don’t know. So you come up with something... That’s why you get a lot of nonsense and hallucinations.” (04:36)
- Ed Zitron: Skeptical of the claimed breakthrough, finds the solution superficial and questions whether OpenAI appreciates the depth of the hallucination problem.
- “This feels like a very flat view of what hallucinations are… The models aren’t getting better, diminishing returns and all that, but this is the best they’ve got.” (05:20)
- Edward Ongweso Jr.: Sees the solution as a symptom of a deeper trend—firms offer shallow tweaks instead of fundamental changes, often missing the actual problem.
- “Maybe another dead end?” (06:32)
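The scoring incentive Morrow describes can be made concrete with a little arithmetic (an illustration with hypothetical numbers, not from the episode): under accuracy-only grading, where a correct answer scores 1 and everything else, including "I don't know", scores 0, guessing always has an expected score at least as high as abstaining, so the model is never rewarded for admitting ignorance.

```python
# Illustration: why accuracy-only grading rewards guessing over abstaining.
# Assumes a benchmark that scores 1 for a correct answer and 0 for anything
# else, including "I don't know". Numbers are hypothetical.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected score on one question under 0/1 accuracy grading."""
    return 0.0 if abstain else p_correct

# Even a wild guess (10% chance of being right) beats admitting ignorance:
print(expected_score(0.10, abstain=False))  # 0.1
print(expected_score(0.10, abstain=True))   # 0.0
```

Under this grading scheme the only way abstaining ever ties with guessing is when the model is certain it would be wrong, which is why "reward 'I don't know'" amounts to changing the scoring rule, not the model.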
2. Sycophancy, AI Utility, and User Dissatisfaction
Timestamps 07:31–10:06
- Sycophancy in AI: Efforts to make AI less sycophantic (overly agreeable and flattering) have made models less “human” or engaging, angering users.
- Allison Morrow: Notes OpenAI underestimates that the main appeal for many users is the “companion” or “therapist” aspect of models—a use openly opposed by the company.
- “They don’t realize that what it’s selling often is like a companion and a therapist… reminds me of like Q-tips. You’re not supposed to put them in your ear. Right. But that’s all anyone uses Q-tips for.” (08:16)
- Ed Zitron: Lampoons the move as ignoring user reality and jokes about the lack of any real understanding within these companies.
- “I don’t think they know what ChatGPT is.” (09:02)
3. Media & Hype: Blind Spots, Deference to Authority, and The $115 Billion Question
Timestamps 10:06–16:28
- Massive investment, few results: OpenAI is reportedly on track to burn through "$115 billion by 2029," much of it on yet-to-be-built server farms and Oracle contracts.
- Allison Morrow: Contrasts the swift collapse of the metaverse with the stickiness of the AI hype, despite its ongoing lack of utility or clear ROI.
- “With AI … the lack of utility is still there and…the absurdity of the investment is still there… It has been, like, going in Ed Zitron’s favor in the last few weeks.” (11:44)
- Ed Zitron: Wonders why media doesn’t call out “the money doesn’t exist” aspect; astonished by coverage that uncritically propagates wild projections.
- “Can no one just be like... no one has any idea where this money is coming from?” (12:10)
- Edward Ongweso Jr.: Observes that media, like art critics, defer to authority, rarely interrogate the claims in a sustained or adversarial way.
4. The AI Bubble vs. Real-World Adoption
Timestamps 14:28–18:49
- Low ROI, slow enterprise transformation: Cites an MIT study finding that 95% of enterprise generative-AI deployments show no measurable ROI, even as adoption looks “high” on paper.
- “The enterprise adoption is high, but the actual transformation is low because this shit doesn’t work.” – Ed Zitron (14:29)
- Media is (very) slowly waking up: The Atlantic runs pieces questioning AI’s economic impact, an early sign the coverage may be turning.
- Sustaining the hype: People expect good news will revive the cycle, even as core issues (revenue, functionality) persist or worsen.
5. Agents, China Panic, and Fiction vs. Reality
Timestamps 16:28–19:04
- Hype over “agentic AI” is “even worse than crypto.” Belief in capabilities ("agents that do things") far outpaces reality. Every rumor of a Chinese breakthrough triggers market panic.
- “I’ve never seen a tech thing…that has not existed like this. And people talk about it like it’s real.” – Ed Zitron (17:53)
- Companies double down: Proposed “solutions” often make things worse, such as suggestions to restrict the entire internet to prop up unproven products.
6. Exploitation, Ghost Workers, and Uber Parallels
Timestamps 25:14–30:05
- Uber comparison: The show unpacks how Uber eventually achieved scale (through massive subsidies and regulatory arbitrage) and asks if AI has a comparable “clever parasite” strategy.
- Edward Ongweso Jr.: “If you get enough buy-in from the military-industrial complex… buy-in from other tech firms… If you graft yourself onto everybody’s daily interactions… can you actually make it work? What if you just become a massive parasite?” (26:43)
- Ed Zitron: AI is a “terrible parasite”: it neither provides utility for users nor finds a path to sustainable exploitation.
7. Harms, Regulation, & Social Fallout
Timestamps 30:05–39:14
- Real-world harms: Psychiatric episodes, suicides, and increasingly bizarre user dependencies on AI chatbots become front-page news.
- Allison Morrow: “If this was coming from a pharmaceutical company, it would be recalled immediately… but there are no regulations around AI.” (30:05)
- Ed Zitron: “It drove that guy insane. That guy went crazy. There are children…horrifying killing themselves because of this thing. That’s what it’s getting known for.” (30:51)
- Tech industry evasion: OpenAI and others are evasive when pressed about bot responsibility for user harm, and models often don’t respond meaningfully to mental health emergencies.
- Allison Morrow: “They kind of have to just try, and then put it out in the world and then wait for something bad to happen.” (35:45)
- Systemic algorithmic bias: Racism and bias are endemic to all algorithmic systems, with little real effort or investment to fix them. (See: Microsoft Kinect’s inability to recognize Black users, COMPAS recidivism algorithm.)
8. AI Use Cases—Coding, Compute Costs, and Failing Economics
Timestamps 41:48–46:16
- Main “success” story is code generation, which ironically increases compute costs. Advanced reasoning models (used for code) burn vastly more resources, making scaling uneconomical.
- Ed Zitron: “Their only real growth market is writing code. The problem is writing code requires you to use reasoning models… the more models reason, the more they hallucinate…” (41:48)
- Financial “fan fiction”: Growth projections are detached from reality; compute costs keep climbing (the reported “$115 billion by 2029” burn), yet the business models remain undefined.
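The cost dynamic Zitron describes can be sketched with hypothetical numbers (an illustration, not figures from the episode): reasoning models emit many hidden "thinking" tokens on top of the visible answer, so serving cost multiplies even if the per-token price never changes.

```python
# Hypothetical illustration of the reasoning-model cost dynamic.
# All numbers are invented for illustration only.
PRICE_PER_1K_TOKENS = 0.01   # assumed serving cost in dollars per 1,000 tokens
answer_tokens = 500          # visible output tokens for one response
reasoning_multiplier = 10    # assumed hidden chain-of-thought overhead

# Cost of a plain completion vs. the same answer from a reasoning model:
plain_cost = answer_tokens / 1000 * PRICE_PER_1K_TOKENS
reasoning_cost = answer_tokens * reasoning_multiplier / 1000 * PRICE_PER_1K_TOKENS
print(plain_cost, reasoning_cost)  # 0.005 0.05
```

Under these toy assumptions, the product line that drives growth (code generation via reasoning models) costs ten times more to serve per answer, which is the "negative business logic" the episode keeps returning to.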
9. The “AI Movie,” Labor, and the Limits of Automation
Timestamps 47:17–55:08
- OpenAI’s “AI Movie” project: Journalists pick apart an announcement that OpenAI will power a feature-length animated film (budget: $30 million).
- Actual process relies heavily on human writers, artists, and actors; unclear what percentage is “AI.”
- Allison Morrow: “It seems like they’re hiring two different animation studios with artists and writers… then some mystery X amount of the movie will be put together with AI. And I honestly don’t know how different that is from a regular Pixar or DreamWorks Animation process.” (48:30)
- Ed Zitron: “This feels like a death rattle far more than something terribly scary… They’re using out-of-the-country studios. They’re skipping union stuff… This is the best they can squeak out; years in fucking hell is this.” (51:46)
10. Bubble Dynamics, Exits, and the Coming Reckoning
Timestamps 55:08–64:18
- AI compared with crypto: AI, unlike crypto, cannot create new money from thin air; all companies burn vast sums, with almost no clear path to exits or profitability.
- Many VCs and founders are motivated by the dream of “generational wealth,” creating a protracted bubble.
- “33% of VC went into AI last year,” but acquisition activity and successful exits are minimal.
- Software startups are even worse off: Unlike AI compute/data-center companies (which can at least sell hardware), AI software startups built on open models have minimal assets and no defensible moat.
- “What actual value does an AI startup have? … There never is one.” (62:11)
- Comparison with Uber: Even Uber had more plausible economics and provided real (if problematic) value; AI startups “just annihilate fuel unless it’s like planes.”
Notable Quotes & Memorable Moments
- On AI user experience:
- Ed Zitron: “You’ve changed something. I know. And it’s what happens when you release an imprecise glazing bot onto the world.” (09:02)
- On OpenAI's future:
- Ed Zitron: “Every time I look at this company, I feel a little more insane… Even the metaverse, even crypto, even crypto functioned. It was bad… AI doesn’t even seem to be doing it and they need more money to prove that it can’t do it.” (23:18)
- On media complicity:
- Allison Morrow: “There’s a deference to authority that I think American media… have an issue with… I do think that is an institutional mindset that has taken root… the last 10 years.” (15:28)
- On tech’s safety failures:
- Allison Morrow: “If this was coming from a pharmaceutical company, it would be recalled immediately… but there are no regulations around AI.” (30:05)
- Ed Zitron: “It is not hard for them to just have a…unilateral thing of, oh, you’re talking like this. I’m going to stop… They could stop them.” (36:59)
- On AI’s business logic:
- Ed Zitron: “Their only real growth market right now is writing code. The problem is writing code requires you to use reasoning models. Reasoning models inherently burn more tokens… So the very product they are building that is going to save them is also the one that is going to burn more compute.” (41:48)
- On the “AI Movie” announcement:
- Ed Zitron: “This feels like being at a party where everyone’s pissed themselves.” (53:46)
Key Segment Timestamps
- OpenAI’s Hallucination “Fix” & Skepticism: 04:18–09:02
- Companion AI, Sycophancy, and User Backlash: 07:31–10:06
- AI Bubble Economics, Media Coverage, and Oracle Contract Nonsense: 10:06–16:28
- Bubble vs. Adoption—MIT Study & ROI (or lack thereof): 14:28–18:49
- Agentic AI Hype & Market Panic: 16:28–19:04
- Comparing AI Exploitation to Uber’s Tactics: 25:14–30:05
- AI Harms & Regulatory Indifference: 30:05–39:14
- Coding Use, Compute Crisis, and AI’s Negative Business Logic: 41:48–46:16
- The “AI Movie” & Labor Reality: 47:17–55:08
- VC Exit Myths and the Coming Collapse: 55:08–64:18
Conclusion
Ed Zitron, Allison Morrow, and Edward Ongweso Jr. offer a deep, sometimes bleak, and often darkly funny take on the current state of AI in tech—where PR outpaces real progress, investments spiral upward, and genuine solutions to social and technical risks are absent. Echoing themes from the show’s description (“interrogating the growth-at-all-costs future”), they make clear that the current AI story is not about technological transformation as much as it is about financial fantasy, hype cycles, exploitation, and media credulity. AI, they argue, may be the biggest tech scam yet: not just a pointless bubble, but potentially a harmful one with little public benefit and massive, lingering costs.
Where to find the hosts:
- Allison Morrow: @blueskymarrow, CNN Nightcap newsletter
- Edward Ongweso Jr.: @bigblackjacobin (Twitter), The Tech Bubble Newsletter
- Ed Zitron: “Google me: prabhakar raghavan” (as a joke), betteroffline.com
For more from Better Offline, visit betteroffline.com, subscribe to the newsletter, or join the show’s subreddit.
