Podcast Summary: Hard Fork – "The Future of Addictive Design + Going Deep at DeepMind + HatGPT"
Publisher: The New York Times
Episode Date: April 3, 2026
Hosts: Kevin Roose & Casey Newton
Special Guest: Sebastian Mallaby (author of The Infinity Machine)
Episode Overview
This episode dives into three major themes:
- Addictive Design and Legal Reckoning for Social Media. A discussion of landmark jury verdicts against Meta and other social platforms regarding addictive product design, which could change the legal foundation of the internet.
- DeepMind’s Pursuit of Superintelligence. Sebastian Mallaby joins to reflect on his deep reporting on Demis Hassabis, the inner workings and culture of DeepMind, and the race for AGI (Artificial General Intelligence).
- HatGPT: Tech News Roundup. The hosts play their recurring "HatGPT" game, drawing recent oddball and significant tech stories from a hat for rapid-fire commentary.
1. Addictive Social Media Design & Legal Risks
Overview
Timestamps: 02:16 – 26:27
Kevin and Casey analyze two recent jury verdicts:
- LA: Meta and YouTube found negligent for harmful design features ($6 million payout)
- New Mexico: Meta found to have violated the state’s Unfair Practices Act by misleading consumers about safety, especially regarding children ($375 million penalty)
Both cases hinge on legal theories focusing on platform design as defective and harmful—sidestepping traditional Section 230 immunities.
Key Discussion Points
- Legal Theory & Section 230 "Crack"
  - These are "bellwether" cases, potentially setting precedent for future lawsuits
  - Section 230 historically shielded platforms from liability for user posts, not platform design
  - “This is not about...content. This is about the design of the whole platform. The design feels defective.” – Casey (06:01)
- What Features Are on Trial?
  - In LA: beauty filters, infinite scroll, autoplay videos, push notifications, and recommendation algorithms
  - In NM: child safety failures, end-to-end encrypted messaging, and the gap between the platform’s safety claims and reality
- Public Health Framing
  - Platforms are likened to tobacco companies that knowingly released addictive features
  - Internal Meta research and leaks (e.g., from whistleblower Frances Haugen) were crucial evidence in these trials
- Addictiveness as a Product Flaw
  - Comparison to nicotine in cigarettes; debate over whether internet mechanics are inherently addictive or only harmful at scale
  - “If it followed the same principle as nicotine, then every product that has those would become way more popular. And one example I've been thinking about on this is Sora... The app did not succeed.” – Kevin (11:56)
  - Platforms’ scale and algorithmic curation play a key role in addictiveness
- Content vs. Container Debate
  - Can design elements be separated from content for regulatory purposes?
  - “There are some people who are saying that, no, you cannot make that distinction and that effectively all design is content.” – Casey (10:47)
- Implications if Appeals Fail
  - Platforms may need to strip features (e.g., infinite scroll for minors), but no clear legal benchmarks exist yet
  - Defensive corporate behavior is predicted (e.g., internal chat about "addictiveness" will become taboo)
  - Possible slippery slope: TV cliffhangers vs. TikTok autoplay; are both manipulative?
  - “If you start treating the design and mechanical decisions of these media platforms as harmful under the law...the blurrier the lines in my mind get between the content and the mechanics.” – Kevin (17:55)
- Encryption & Privacy Concerns
  - The NM case criticized end-to-end encryption, which Meta has preemptively disabled on Instagram
  - “Encryption is a necessary part of privacy in a world where people are mostly communicating online.” – Casey (20:02)
- Children as a Special Case
  - Both hosts agree that stricter age gating is necessary; the legal theory hinges on harm to children
Memorable Quotes & Moments
- “Bellwether cases...if successful, are going to open the floodgates for lots of other people to sue under the same theory.” – Casey (03:35)
- “This is kind of a side door that these lawyers have found around litigating on Section 230.” – Kevin (06:24)
- “The juries have said your product is defective. What juries have not said is, here's what an okay product looks like.” – Casey (15:41)
- “Before any of this existed, there were cliffhangers on TV shows... Those were arguably addictive features... Is that illegal? I would say probably it shouldn't be, and it's not.” – Kevin (17:27)
- “...societies across the world have been begging these companies for a decade, please do something to make these platforms safer... And instead, what we've mostly seen is a series of engagement hacks.” – Casey (21:22)
- Humorous exchange: “Do you really think that, like, messaging apps are, like, as addictive...as, like, TikTok or Instagram?” – Casey (23:40)
  “Oh, my God, take me back to 1999. Put me on AOL Instant Messenger. I could not tear myself away from that thing.” – Kevin (23:45)
2. Interview: Sebastian Mallaby on DeepMind and the Race for Superintelligence
Overview
Timestamps: 28:38 – 52:44
Sebastian Mallaby, author of The Infinity Machine, joins to discuss Demis Hassabis’ intellectual and spiritual motivations, DeepMind’s internal culture, its strained relationship with Google, the nature of AI safety, and the industry’s personal dynamics.
Key Insights & Discussion Points
- Portrait of Demis Hassabis
  - Deeply driven by curiosity and a quasi-spiritual reverence for nature and reality (Spinoza’s God)
  - Noted for embracing Ender’s Game as a personal model (“Does he really see himself as saving humanity by doing what he's doing with AI?... It turns out, yes, he does.” – Sebastian, 33:55)
  - Extremely competitive; views AI development as “war” post-ChatGPT
- DeepMind’s Corporate Drama
  - Attempted to spin out from Google (Project Mario) to secure independence on safety and ethics
  - Revealed internal tension with Google leadership, especially Sundar Pichai and Sergey Brin
  - “The single most important business buddy act in all of capitalism today is the one between Sundar Pichai and Demis Hassabis.” – Sebastian (44:42)
- AI Paradigms: Competition & Safety
  - DeepMind was originally imagined as a singular “lab for humanity,” but competition (OpenAI, Anthropic) made that impossible
  - Shifted its stance on military partnerships and now collaborates with the Pentagon, a move necessitated by industry rivalry
  - “My best shot at making the world better and making AI safer is...government intervention, forcing safety rules on all the labs at once.” – Sebastian (47:10)
- The "Bunker Scenario"
  - Job candidates were warned to be ready for an "end game": disappearing into a bunker when AGI is near, owing both to outside threats and to the need for total focus
- Comparing Hedge Fund and AI Founders
  - Hedge fund managers: “not rethinking everything about society.”
  - AI leaders: “playing with something that could destroy humanity. What does it feel like...? Can you sleep?”
- Writing About an Unwritten Future
  - “I'm trying to do a portrait of somebody who has his hands on the 21st century version of nuclear material.” – Sebastian (52:04)
Notable Quotes & Moments
- “When I'm up at 2 in the morning at my desk by myself thinking about science...I feel reality is screaming at me, staring me in the face, waiting for me to explain it.” – Mallaby quoting Hassabis (30:21)
- “This is war. These guys at OpenAI, they've parked the tanks in my front yard.” – Hassabis via Mallaby (35:31)
- “Should he do what Dario did, standing up to the Pentagon about red lines on military usage? ...I don't think he's going to do that.” – Sebastian (47:10)
- “If Demis had told me, anytime when I was working at DeepMind, that I had to take the next flight to Morocco and hide, I would have said I'd been given fair warning.” – Former DeepMind staffer via Sebastian (49:58)
3. HatGPT: Tech News Lightning Round
Overview
Timestamps: 54:51 – 71:59
The hosts play their signature rapid-fire segment, "HatGPT," commenting on recent tech stories—ranging from the serious (security hacks) to the surreal (AI-generated fruit reality shows).
Highlights & Reactions (Chronological Segments)
- AI Agent Banned from Wikipedia (56:17)
  - AI "agents" are overwhelming community-moderated web platforms
  - Prediction: “This is going to be the year that every system...built on human contribution and review is going to break.” – Kevin (56:17)
- Disney’s Olaf Robot Meltdown (57:40)
  - A hilarious mishap with an animatronic snowman; discussion of real vs. simulated characters and children’s trauma
  - “20 children just got lasting trauma. They're going to be talking about this in therapy.” – Kevin (58:36)
- Claude Code Leak (59:42)
  - The code harness for Anthropic’s agentic coding system was leaked and cloned within hours
  - Accelerates the "open sourcing" of agentic tools
- Fruit Love Island (61:21)
  - An AI-generated fruit "reality" show on TikTok, highlighting absurd new forms of digital content
  - “I just watched a banana kiss a pineapple, and that's not in the Bible.” – Casey (62:18)
- Webinar TV: AI Podcasts from Zoom Meetings (63:00)
  - A company is scraping unsuspecting Zoom calls and turning them into AI podcasts
  - “If we ever get overtaken on the charts by an AI-generated Webinar TV podcast...I am leaving this industry.” – Kevin (63:51)
- Axios (Software) Major Cybersecurity Breach (64:36)
  - The open-source tool was compromised by North Korea, putting the software supply chain at risk
  - Reflection on AI’s threat to code security: “These AI tools have gotten better than almost any human hacker...at finding vulnerabilities.” – Kevin (65:23)
- OpenAI Sora Shutdown & No Erotic Mode (67:36)
  - Sora (short-form AI video) was quietly discontinued; OpenAI paused plans for an adult conversation mode
  - Both hosts see this as pragmatic and overdue, citing resource allocation and reputational risks
- Kalshi "Safe Prediction Market" Campaign (69:55)
  - Ads touting their ban on “death markets” and insider trading spark amusement and skepticism
  - “Rule number three: we'll always shoot you in the front, never in the back. Who are these people?” – Casey (70:23)
Timestamps for Major Segments
- Jury verdicts & social media design: 02:16 – 26:27
- AI and addictiveness crossover discussion: 25:56 – 27:06
- DeepMind & Sebastian Mallaby interview: 28:38 – 52:44
- HatGPT game (tech news round-up): 54:51 – 71:59
Tone & Noteworthy Dynamics
- The episode blends sober analysis (legal, technical, and regulatory themes) with sharp wit, playful asides, and frequent meta-humor about tech journalism.
- The “HatGPT” segment distills complex, often absurd news into punchy riffs reminiscent of late-night banter; the chemistry between hosts is as much a feature as the content.
Essential Quotes for Understanding Episode’s Impact
- On social media litigation: “The juries have said your product is defective. What juries have not said is, here's what an okay product looks like.” – Casey (15:41)
- On AI leadership: “He is somebody who really wants to be good... But can you be a strong, consequential actor in the world and still be good?” – Sebastian (47:10, paraphrased)
- On cybersecurity anxieties: “These AI tools have gotten better than almost any human hacker... Every piece of code that exists is going to need to be rewritten.” – Kevin (65:23)
- On the future of tech content: “I just watched a banana kiss a pineapple, and that's not in the Bible.” – Casey (62:18)
Summary for New Listeners
For those who missed the episode:
- You get a front-row seat to the legal and ethical battles redefining what counts as a “defective” tech product, particularly for youth online.
- The DeepMind interview is an unusually intimate look behind the scenes of the world’s most important AI lab, revealing the messy reality behind “responsible” AGI development.
- Finally, you’ll catch up on the week’s most surreal and significant stories in tech, all while enjoying the hosts’ signature blend of humor, skepticism, and truth-telling.
What to Watch For Next:
- Will the new legal pathway for “addictive design” claims fundamentally reshape the internet?
- How will AI labs respond to growing regulatory and business pressures?
- Stay tuned: if the world ends, it might start in a bunker in Morocco—or on TikTok, with AI-generated fruit.
