Hard Fork Podcast Summary
Episode: “OpenAI Calls a ‘Code Red’ + Which Model Should I Use? + The Hard Fork Review of Slop”
Hosts: Kevin Roose (The New York Times), Casey Newton (Platformer)
Release Date: December 5, 2025
Overview
This episode explores OpenAI’s internal “Code Red” panic in response to intensifying AI competition, debates which frontier AI model is best for users right now, and reviews the latest “AI slop”—viral generative content, both delightful and disastrous, overtaking culture and the internet.
1. OpenAI’s “Code Red”: What's Going On at the AI Frontier?
[02:15–14:07]
Main Theme
OpenAI, the company behind ChatGPT, has reportedly issued a “Code Red” alert—a sign of internal crisis—citing stagnating growth and competitive pressure in the AI landscape. Kevin and Casey break down what it means for OpenAI, why the urgency now, what rival models are doing, and the possible risks to OpenAI’s future.
Key Discussion Points
- Explanation of “Code Red”
- An urgent internal memo from Sam Altman, CEO of OpenAI, signaled “Code Red” status, meaning a heightened corporate emergency. (“Code Red is the second most dire state of emergency a company can declare, with number one, of course, being a Baja Blast.” — Casey Newton [03:34])
- OpenAI had previously declared a “Code Orange”; the escalation reflects increasing urgency ([04:12]).
- Immediate Actions and Company Focus
- OpenAI is diverting more resources into improving ChatGPT, delaying projects like ads, Agents, and Pulse.
- Focus on personalization, improved model behavior (i.e., fewer refusals), and speed—OpenAI is shifting toward a Facebook/Meta-style playbook of maximizing engagement ([08:08]).
- Competitive Threats: Gemini 3 and Opus 4.5
- Google’s Gemini 3 and Anthropic’s Opus 4.5 are seen as leapfrogging or catching up to OpenAI’s best models.
“The belief was that Gemini 3 was going to be so good that it was going to cut into OpenAI's growth both on the user side and the revenue side.” — Casey Newton [05:13]
- Google can subsidize its models, threatening OpenAI’s margins and user base ([07:02]).
- OpenAI’s organizational and financial concerns—heavily leveraged, product focus scattered, huge spending commitments vs. uncertain revenues ([09:42]).
Notable Quotes
- On urgency:
“OpenAI realizes that it has a problem with pre-training specifically and that is harder to fix than post-training. It’s expensive… But that is, I think, where they are going to be focusing their research energy.” — Kevin Roose [10:49]
- On company strategy:
“This is a company that has brought on a lot of people who used to work at Meta. And what kinds of things do they do over at Meta? Well, they try to create a perfectly personalized custom feed to you… So this seems, in other words, like they are going for engagement first and foremost.” — Casey Newton [08:08]
- On falling behind:
“The mere fact that OpenAI’s current focus is just kind of clawing its way back to parity with its biggest rivals is a big part of the problem here.” — Casey Newton [12:59]
- On the risk of losing their lead:
“They are not going to win by tying for first place.” — Kevin Roose [14:03]
2. Which AI Model Should I Use?
[14:07–42:56]
Review and Comparison: Gemini 3 (Google), Claude Opus 4.5 (Anthropic), ChatGPT (OpenAI)
Gemini 3 (Google) [14:30–18:11]
- Strengths: Speed, helpful for quick tasks and fact-checking, good “workhorse” for research ([14:30], [15:30]).
- Scale: Now at 650 million monthly users, chasing OpenAI’s 800 million weekly users ([16:37]).
- Distribution advantage: Integrated across Google products, easy for users to access ([17:17]).
Claude Opus 4.5 (Anthropic) [18:11–25:40]
- Strengths: Empathetic, “warmer” model; best at style transfer and writing text in a user’s voice ([18:19]).
- Use cases: Useful for research, writing assistance, general conversation—especially “life” questions ([19:51], [22:34]).
- Unique features: Known for “soul” and empathy; not as focused on engagement, ads or commerce—in contrast to Google/OpenAI ([23:03]).
- Enterprise focus: Anthropic winning in the business/enterprise API market while OpenAI fights for consumer dominance ([24:19], [29:24]).
ChatGPT (OpenAI) [35:33–35:58 and general context]
- Current status: Still the most recognized and used, particularly among power users, but feeling pressure from rivals.
- Key differentiator: Ubiquity among AI power users and integration with many tools.
Choosing the Right Model
- General users: “You can use either ChatGPT, Gemini or Claude for many things and probably be fine… a vast set of use cases for which all three are roughly equivalent.” — Casey Newton [35:58]
- Power users: “If you really care about this stuff, you’re just going to try new things… the answer to [which model?] is just going to be changing consistently over the next six months to a year.” — Casey Newton [38:09]
- Metaphor:
“We are in a moment where the AI is getting higher resolution. That was the feeling that I had when Claude was able to just create something that was writing sentences that … felt like me.” — Casey Newton [37:06]
Notable Quotes
- On accelerating progress:
“It is amazing how quickly this stuff is moving. … These tools have probably saved me a year of my life.” — Kevin Roose [39:14]
- On perspectives:
“There is what I call the California view of AI, which is what can it do? And then there’s … the New York view … what can’t it do?” — Casey Newton [39:14]
- On first-hand experience:
“I’m not going to listen to opinions about AI from people who do not use AI. … you’re actually talking about something that no longer exists.” — Kevin Roose [40:18]
Strategic Takes
- On enterprise market:
Anthropic’s gains are at OpenAI’s expense; Anthropic expects $9B in annualized revenue, mostly from enterprise customers ([29:24]).
- On shifting product strategies:
OpenAI’s push for engagement and personalization mirrors Facebook/Meta’s playbook ([08:08]).
- On industry impact:
Both hosts reflect on how AI models are transforming professional and creative workflows, and the unique strengths—and trade-offs—of each leading lab/model.
3. News Roundup: AI World Updates
[31:21–33:12]
- Yann LeCun leaves Meta for startup focused on “world models”:
LeCun, a Turing Award winner and LLM skeptic, may reshape research directions with his move ([31:26]).
- AI leadership shakeups at Apple:
John Giannandrea exits, signaling either “giving up” or rebooting AI strategy; Apple reportedly adopting Google’s Gemini for AI services ([31:38], [32:39]).
4. The Hard Fork Review of Slop – AI-Generated Content
[43:02–61:04]
Introduction to “Slop”
A recurring segment where the hosts examine the best (and worst) viral generative AI content—“slop”—now permeating the internet and culture.
Segment Highlights & Notable Examples
a) AI-Generated Holiday Market at Buckingham Palace
[45:22–48:34]
- Story: Fake AI images advertising a Christmas market at Buckingham Palace tricked tourists into visiting a non-existent event, leading to real disappointment and surreal confusion.
- Hosts’ reaction:
“There are, you know, I'm sad to say, sorry to be a buzzkill. There are going to be much worse outcomes from this exact dynamic.” — Casey Newton [47:41]
b) AI Slop Recipes
[48:38–51:13]
- Story: Food bloggers lament traffic devastation as AI-generated recipe slop takes over, but these recipes are frequently nonsensical and untested.
- Casey’s take:
“I want people like Yvette Marquez Sharpnack … who posted photos of two different tamale recipes that people were making using AI tools that were just completely bogus. … I want her to be able to make a living. And instead, all the AI companies … they've replaced it with what so far is worse. So I hate that.” [49:09]
c) Educational AI Music Slop (“Learning With Lyrics”)
[51:28–54:43]
- Story: An Instagram account posts AI-generated explanatory songs (e.g., “how instant cold packs work”). Used by students as study aids.
- Hosts’ take: Generally positive, distinguishing between “harmless slop” and slop replacing real creators:
“If you are out there and you want to make a song about why giant steel coils are transported … fine… you have that lane to yourself.” — Casey Newton [53:00]
d) AI-Deepfaked Voice in Whirlpool Ad
[54:47–57:16]
- Story: A Brazilian Whirlpool commercial used the voice of U.S. state senator D’Andrea Salvador, cloned by AI from a TED Talk, without her permission. The ad initially won advertising awards, which were returned after public backlash.
- Hosts’ judgment:
“We’ve had one thing that I think was very bad. We’ve had one thing that I think was basically good. And now we have this, which I just think is so incredibly stupid. I can’t believe it. Don’t do this. Don’t do it.” — Casey Newton [57:16]
e) Bird Game 3: Viral Gaming Slop
[57:52–60:02]
- Story: A fake “Bird Game 3” generated via video AI gained millions of TikTok views; people want to play a game that doesn't exist.
- Hosts’ take: Appreciative of slop as satire here; it “acknowledges its dumbness… for that reason, I’m giving this one a thumbs up.” — Casey Newton [59:10]
Slop Segment Closeout
- On slop as a “medium”:
“By the end of 2025, I think slop is becoming a medium like any other. Where there is good slop, there's bad slop.” — Casey Newton [60:14]
- Guidance for slop creators:
“I would say if I have one parting message … slop in the name of love.” — Kevin Roose [60:55]
5. Final Reflections & Industry Outlook
[39:14–42:56]
- The AI “blurry jpeg” metaphor (from Ted Chiang)—models getting sharper and more capable with each cycle.
- Two philosophical AI approaches: “What can it do?” vs “What can’t it do?”—innovation mindsets vs. skepticism ([39:14]).
- Caution about critics who don’t use AI, and open recognition that transformative utility and breakthrough moments are not evenly distributed across all professions or use cases.
Key Timestamps for Important Segments
- OpenAI Code Red & Competitive Panic: [02:15–14:07]
- Model Reviews/comparison: [14:07–42:56]
- News Roundup: [31:21–33:12]
- Which Model Should I Use? Discussion: [35:33–42:56]
- AI Slop Review: [43:02–61:04]
Memorable Quotes
- “They are not going to win by tying for first place.” — Kevin Roose [14:03]
- “You are just going to want to experiment with these models all of the time… the game just shifted again.” — Casey Newton [35:58]
- “The blurry JPEG is getting a touch less blurry.” — Casey Newton [37:06]
- “There are fundamentally two different views of AI… what can it do? … what can’t it do?” — Casey Newton [39:14]
- “I'm not going to listen to opinions about AI from people who do not use AI… you’re actually talking about something that no longer exists.” — Kevin Roose [40:18]
- “Slop in the name of love.” — Kevin Roose [60:55]
Conclusion
This episode provides a candid, industry-insider look at the real pressures inside OpenAI as AI competition intensifies, a practical state-of-the-art review for anyone choosing which AI model to use, and a witty dose of media criticism as slop becomes a “medium” in the age of generative content. Rich in anecdotes, skeptical humor, and sharp commentary, it is valuable for both AI professionals and general tech observers trying to make sense of a wild, rapidly evolving era.
