So to Speak: The Free Speech Podcast — Ep. 267: Social media = cigarettes?
Host: Nico Perrino | Guest: Mike Masnick (CEO & Founder of Techdirt and Copia Institute)
Date: April 1, 2026
Overview
In this episode, host Nico Perrino sits down with Mike Masnick to unpack two landmark jury verdicts holding Meta and YouTube liable for alleged harms to minors from social media platform design. They dive deep into the legal doctrines of free speech and Section 230, the popular “social media = cigarettes” analogy, and what these rulings could mean for both big tech and the future of open online expression. The discussion critically examines the separation of platform “design” from “content,” the role of internal company documents in litigation, and whether these cases pose an existential threat to the current Internet ecosystem.
Key Discussion Points & Insights
1. Background of the Verdicts (00:18–02:07)
- Case Summaries:
- California: Jury awards $6 million to “KGM,” a minor who argued design features—not content—of Meta’s platform fueled her addiction.
- New Mexico: State wins nearly $400 million against Meta for failing to protect children from predators.
- Both verdicts focus on platform design as a source of harm, deliberately sidestepping free speech (First Amendment) and Section 230 defenses.
2. Problems with Meta & Big Tech (03:25–05:49)
- Masnick’s Critique:
- “Meta is a terrible company that has spent years making terrible decisions and being terrible at explaining the challenges of social media trust and safety.” (03:25, Masnick)
- Meta undermines the open Internet, pursues growth at all costs, and avoids building better, more competitive, user-empowering alternatives.
- Criticizes lack of firm, principled stances, especially on free speech.
3. Principles and Consistency in Content Moderation (05:00–07:21)
- Example of Meta’s shifting stances:
- Mark Zuckerberg apologized for content policies on the Joe Rogan podcast and later shifted blame to government pressure—only to offer censorship help to Elon Musk days later.
- “The idea that he has a principled stance I think is seriously undermined by just that one example.” (07:02, Masnick)
4. Why the Verdicts Frighten Free Speech Defenders (08:35–10:32)
- Implications for All Platforms:
- The precedent allows lawsuits over “design” decisions if harm can be loosely tied to feature use, opening the door for liability to anyone who hosts, displays, or recommends third-party content.
- Masnick warns:
- “These two verdicts should scare the hell out of you… what they will enable will impact every website or every situation where users might figure out how they want to display someone else's content.” (08:35, Masnick)
5. Separating Design from Content (10:32–16:10)
- Typical arguments:
- Plaintiffs claim features like infinite scroll, autoplay, notifications, and algorithmic feeds are addictive “by design,” distinct from the content itself.
- Masnick counters:
- Those features only “work” because of engaging content. “If all the videos on Instagram were paint drying, do we think that infinite scroll would matter?” (12:21, Masnick)
- Draws analogy to bookstores, TV, and newspapers—every content medium employs “addictive” design elements to engage users.
- Warns that this logic could be used to regulate everything from grammar and brevity tools to layout and formatting.
6. Addiction vs. Bad Habits: What the Science Shows (12:21–16:10)
- Recent studies: While many think they’re addicted to social media, only a tiny minority meet clinical criteria for addiction—most have manageable bad habits.
- Key point: “If you call something that is just a bad habit an addiction, it actually makes it harder [to address].” (14:55, Masnick)
7. Content Recommendation Algorithms & User Experience (16:10–19:54)
- All content platforms (online and offline) aim to keep users engaged; this is neither unique to social media nor newly dangerous.
- Writing styles, newspaper layouts, and even app UX are all optimized for retention—which is only possible because users enjoy the content.
8. The Impact of Internal Documents in Litigation (21:44–28:53)
- Internal concern memos at Meta are used as “smoking gun” evidence, but Masnick argues these internal debates are essential to building safer products, anticipating both the upsides and trade-offs.
- Danger: Using such documents against companies discourages open discussion and product improvement.
- Encryption example: Meta’s roll-out of end-to-end encryption is cited as harmful in lawsuits, even though it protects huge numbers of people (e.g., from domestic violence).
9. The Free Speech and Section 230 Angle (29:40–35:59)
- First Amendment context:
- U.S. law doesn’t create broad new “harmful to minors” exceptions outside obscenity or established categories (defamation, incitement, etc.).
- The courts in these cases rejected First Amendment and Section 230 defenses at the earliest stage, letting the claims go to juries solely as design-defect matters.
- Section 230 primer:
- Shields platforms and users from liability for third-party content. Designed to nurture vibrant online speech communities, not just for “neutral” platforms.
- Preemption: State laws can’t override Section 230.
- Without early dismissal, even defending a suit is ruinously expensive for anyone but the largest companies.
10. Competition & Small Players (35:59–37:24)
- Removing or blunting Section 230 disproportionately advantages incumbents (Meta, Google), which can absorb litigation costs, while making it impossible for startups to operate.
- Masnick: “If we don’t have full protections of Section 230 working, we have the opposite of [competition].” (36:28, Masnick)
11. Addressing Common Criticisms of Section 230 (37:24–39:33)
- Section 230 isn’t a “special carveout for big tech”—it equally protects users, small websites, and the retweeting of content.
- Newspapers and similar outlets already benefit from these protections for third-party content they host (e.g., comments sections).
12. What If Section 230 Didn’t Exist? (39:33–40:59)
- Masnick argues we’d revert to a heavily moderated, TV-like environment. Without such protection, open, user-driven community platforms would be rare.
13. Contradictory Policies and Legislative Rhetoric (40:59–42:51)
- Many conservatives rail against “censorship” but simultaneously seek to remove Section 230, misunderstanding (or willfully ignoring) its operational effect.
14. The "Social Media = Cigarettes" Analogy Debunked (43:10–46:11)
- Cigarettes cause direct, chemically proven physical harm; speech does not.
- Comparing the regulation of speech-based platforms to the regulation of toxic substances is “ridiculous… an emotional comparison.” (43:31, Masnick)
- Cigarette marketing restrictions arose largely from regulatory settlements, not from clear First Amendment exceptions.
15. Platform Incentives, Public Opinion, and Long-Term Outcomes (46:11–48:10)
- Legal incentives are just one force; user demand and advertiser pressure also guide platform behavior.
- “Enrage to engage” content is bad business long-term, and companies have already retooled algorithms to avoid such pitfalls.
16. State of Social Media Harm Research (48:10–51:11)
- Multiple massive studies (UK, Australia) show no strong evidence that social media causes inherent harm to teens; it is likely beneficial for many.
- Some high-usage individuals do suffer, but causation likely runs from pre-existing trauma toward heavy social media use, not the reverse.
- The many users who benefit from these platforms have no voice in such lawsuits.
17. Implications for Adults, Privacy, & Age-Gating (53:00–54:43)
- The legal theory applied to minors could easily extend to regulating all users, including adults—particularly endangering privacy (by undermining encryption) and requiring intrusive age/ID verification.
18. Parenting, Technology, and Responsibility (55:08–57:59)
- Parental education and gradual introduction to technology offer a sensible risk-mitigation pathway.
- “You don’t send a child who is not prepared to go downtown… You teach them how to cross the street.” (56:37, Masnick)
19. Is This an Existential Crisis for Social Media? (57:59–58:58)
- Masnick’s conclusion:
- “It will change social media in very dramatic ways, in ways that we can't quite predict, but ways I don't think we will be happy with. I think it really is an existential moment for the Internet, for the idea of an open Internet where free speech is enabled and encouraged.” (58:31, Masnick)
Notable Quotes & Memorable Moments
- “Meta is a terrible company… But that doesn’t mean that I think any lawsuit that goes against them is therefore a good result.” (04:48, Masnick)
- “If you care about free speech online, these two verdicts should scare the hell out of you.” (08:35, Masnick)
- “If all the videos on Instagram were paint drying, do we think that infinite scroll would matter?” (12:21, Masnick)
- “Addiction is... overpowering for you. Whereas just having a bad habit, there are lots of things you can do to fix those habits.” (15:05, Masnick)
- “Cigarettes are not speech. That’s the simplest version of it.” (43:31, Masnick)
- “Section 230 is not a special carve-out for big platforms. If it is a carve-out, it is a carve-out for users of the Internet.” (39:11, Masnick)
Suggested Timestamps for Key Segments
- Case overview and design vs. speech — (00:18–10:32)
- Section 230 and First Amendment deep dive — (29:40–35:59)
- "Social media = cigarettes?" analogy — (43:10–46:11)
- State of harm research & the missing context of positive uses — (48:10–51:11)
- Existential threat & concluding thoughts — (57:59–END)
Tone & Language
The conversation is candid, nuanced, and analytical, often mixing legal and philosophical argument with plainspoken skepticism about both big tech and regulatory overreach. Both speakers remain committed to open Internet values, while also recognizing the real-world challenges and public anxieties that fuel cases like these.
For listeners seeking to understand what’s at stake in the new legal push against social media “addiction,” this episode offers a comprehensive, jargon-free exploration—balancing law, research, and the enduring messiness of free speech in the digital era.
