Podcast Summary: Offline with Jon Favreau
Episode: "Big Tech’s Big Tobacco Moment"
Date: April 4, 2026
Guests: Casey Newton (Tech Journalist), Raul Torres (New Mexico Attorney General)
Overview:
This episode explores a watershed moment in tech accountability, drawing parallels between Big Tech and Big Tobacco. Jon Favreau examines how recent lawsuits against Meta (Facebook/Instagram) and YouTube could reshape the legal and regulatory landscape for the design and safety of social media platforms—especially for children. Joined by Casey Newton and AG Raul Torres, the discussion digs into the legal reasoning, potential impact, and tensions between user safety, free speech, privacy, and Big Tech’s business models.
Key Discussion Points and Insights
1. Tech on Trial: The "Big Tobacco" Parallel
- The episode opens with Jon Favreau framing recent jury verdicts as a historic turning point for tech industry accountability, not unlike Big Tobacco’s reckoning over decades of concealed harms ([03:01]).
- Favreau points out that despite years of government hearings and regulatory fines, meaningful accountability for tech leaders—especially Mark Zuckerberg—has largely not materialized ([05:07]):
"Mark Zuckerberg has basically escaped any kind of meaningful accountability for this or anything else." ([04:41], Jon Favreau)
2. The Landmark Verdicts: What Happened and Why They Matter
- In Los Angeles, a jury found Meta and YouTube negligent for designing platforms that harmed a teenage girl ("Kaylee"), citing internal company research showing awareness of addictive and dangerous features, particularly for kids ([06:30]).
- Another New Mexico jury found Meta violated state consumer protection laws, following an undercover sting where a fake 13-year-old's Instagram was instantly targeted by predators ([07:35]).
Notable Quote:
"Freedom of expression does not include the freedom to design an addictive product that you know to be harmful, especially to children."
— Jon Favreau ([09:37])
- These cases were not about specific content, but "defective design"—how features like infinite scroll, autoplay, notifications, and beauty filters are engineered for addictiveness ([08:32]).
- Over 2,000 similar lawsuits are pending, including a major federal case with 1,600 plaintiffs ([08:45]).
Notable Quote from Tech Journalist Casey Newton:
"Once a juror understands that [Meta] has been researching this... the worse stuff they found... then that research kind of gets canceled or the researchers get moved... it kind of does start to feel like a big tobacco moment."
— Casey Newton ([02:46])
3. The New Mexico AG's Perspective: Building a Legal Blueprint
What the Sting Revealed
- Raul Torres recounts the undercover Instagram operation: a fake 13-year-old profile quickly drew predator attention, while Meta recommended ways to "monetize" her following instead of protecting her ([12:36]).
- Internal communications confirmed the company knew about the harms and scale of predatory behavior but did not act ([13:37]).
The Lawsuit’s Ambition
- The case is about design, not content. Injunctive relief sought in court includes:
- Real age verification,
- Limitations on algorithms targeting kids,
- Ending infinite scroll and autoplay for children,
- An independent court-appointed monitor to enforce changes ([16:12]).
- The AG acknowledges that statutory fines are outdated—the $5,000 per-child penalty, set decades ago, would be worth roughly $40,000 today if adjusted for inflation ([14:45]).
Notable Quote:
"If we can do this [in NM], we can actually establish a blueprint for what can happen around the rest of the country and around the world."
— Raul Torres ([16:54])
4. Section 230, Design vs. Content, and Legal Fears
- Both Meta and critics argued the lawsuits were "content cases in disguise," trying to dodge Section 230, which protects platforms from liability for user content ([18:36]).
- Torres insists these cases target platform design and corporate misrepresentation, not user or third-party content ([19:09]):
"When you build a product…that creates known harms and then you lie to people about those harms, that is outside of…Section 230."
— Raul Torres ([19:09])
- Torres calls for iterative, tech-savvy legislative reform, not rules frozen in the '90s AOL era ([26:28]).
- Both he and Newton express frustration with Congressional inaction and see litigation as a catalyst for broader policy engagement.
5. Privacy, Encryption, and the Risk of Overreach
- The episode grapples with the tension between protecting kids and protecting privacy, especially around encryption in messaging ([21:01]).
- Meta rolled back plans for encrypted DMs on Instagram during the trial—an unprecedented reversal and a flashpoint in the privacy vs. safety debate ([21:33], [44:44]).
"We shouldn’t have to give up our basic right to privacy so cops can make fewer phone calls."
— Casey Newton ([45:21])
- AG Torres claims the timing of Meta’s encryption moves was legally motivated to shield the company, not to genuinely protect privacy ([22:07]).
Notable Moment:
- Both guests draw a distinction between private encrypted communication among known contacts and recommendations/DMs between minors and strangers—suggesting nuanced, age-based approaches ([24:52], [25:07]).
6. Existential Threat to Tech’s Core Model?
- With over 40 state AGs and thousands of plaintiffs, legal expert Eric Goldman warns that platforms may soon have to "reconfigure their core offerings" ([50:39]).
- Casey Newton distinguishes today's addictive, individualized engagement—driven by sophisticated algorithms and behavioral science—from previous forms of media ([52:28]):
"We just have to kind of account for the growing technological sophistication of these platforms and how good they've gotten at hacking our brains."
— Casey Newton ([53:33])
- Without legislative standards, future jury-driven design changes may force platforms to guess what constitutes a "safe" product, potentially stifling innovation or causing over-cautious moderation ([51:13]).
7. The Next Tech Reckoning: Artificial Intelligence
- Newton discusses Meta’s failed Metaverse pivot, struggles in AI, and the dystopian vision of personalized, AI-generated “slop” designed to keep users hooked ([53:33], [55:00]).
- Both panelists express skepticism that the coming "AI internet" will be less addictive or harmful—if anything, it may make current problems worse, especially for children and teens ([55:34], [55:55]).
- The gap between Silicon Valley’s AI evangelism and public skepticism is growing, partly due to the industry's anti-democratic posture and lack of public input ([58:41], [59:51]).
Memorable Quotes
- On Big Tech’s Accountability:
"Most of us don’t want these tech companies to keep stealing more and more of our attention just so they can make another billion."
— Jon Favreau ([09:17])
- On Design vs. Content:
"Let's try to find those design things that we can develop a consensus around...particularly when they seem to serve no real social purpose. I would argue that, like, autoplay, video, infinite scroll are probably in that category."
— Casey Newton ([42:10])
- On Algorithmic Persuasion:
"Maybe the platforms could just say, hey, stop that. Knock it off. Let's maybe roll back the last 15 things we did in that regard. Maybe they would be a little bit less hypnotic."
— Casey Newton ([52:11])
- On Policy:
"We created Section 230 in 1997 and we walked away...We haven't changed the regulatory or legislative framework to keep pace with technology."
— Raul Torres ([26:28])
Timestamps for Major Segments
| Timestamp | Segment |
|-----------|---------|
| 03:01 | Favreau introduction and framing the Big Tobacco analogy |
| 06:30 | Details of the LA and New Mexico verdicts |
| 12:11 | Interview with AG Raul Torres begins |
| 13:37 | Blunt findings from the undercover Instagram sting |
| 16:12–18:36 | Injunctive relief, potential design changes, and limits of state action |
| 18:58–21:33 | Section 230 debate and concerns over content vs. design liability |
| 21:33–24:35 | Encryption reversal and the privacy–child safety dilemma |
| 34:27 | Interview with Casey Newton begins |
| 35:04–38:11 | Newton on damning Meta research and jury impact |
| 39:33–44:44 | Section 230, consensus around design harms, and the tricky regulation of algorithms |
| 44:44–48:02 | Encryption, privacy rights, and the risk of overreach |
| 50:39–53:33 | Existential legal threat to platforms and design changes |
| 53:33–56:50 | Meta’s struggles with AI and the future of algorithmic addiction |
| 58:41–60:10 | The "AI gap" and public skepticism vs. industry projections |
Tone and Language
- The episode is candid, impassioned, and urgent. Favreau expresses personal concern as a parent about the manipulative design of social platforms. Newton delivers dry humor ("AG Torres needs to mind his business"), skepticism toward corporate motives, and clear-eyed technical analysis. Torres speaks as both prosecutor and reformer, emphasizing the "blueprint" potential of the New Mexico case.
Takeaways for Listeners
- Legal Momentum: A new legal theory—focusing on design harms, not content—may end Big Tech’s reliance on Section 230 as an impenetrable shield and start to force transformative changes to platform features, especially those most addictive to children.
- Complex Tradeoffs: The regulatory dilemma isn’t just kids’ safety versus profits; it also involves privacy, free speech, and the risk of government overreach or excessive censorship if reforms are too broad or ham-fisted.
- The Road Ahead: With thousands of lawsuits pending, a stalled Congress, and the AI era ramping up, how society balances accountability, privacy, and public health in tech will define the next chapter—not just for Silicon Valley, but for everyone online.
Hosts & Guests:
- Jon Favreau — Host
- Raul Torres — New Mexico Attorney General
- Casey Newton — Tech Journalist, "Hard Fork" co-host