The Lawfare Podcast: AI Regulation and Free Speech — Navigating the Government’s Tightrope
Archive Episode Summary (Originally aired Nov. 25, 2024; released Nov. 27, 2025)
Episode Overview
This archived episode of The Lawfare Podcast brings together Paul Ohm (moderator), Eugene Volokh (UCLA Law, Hoover Institution), Alan Rozenshtein (University of Minnesota Law School, Lawfare), and Chinny Sharma (Fordham Law) at a Georgetown Law & Tech conference. They discuss pressing questions at the intersection of generative AI and the First Amendment, the doctrine of “cheap speech,” government regulation, liability, federalism, international spillover, and privacy. The conversation is candid, nuanced, and at times delightfully irreverent — rich with debate on how foundational legal concepts are challenged by rapidly evolving AI technologies.
Key Discussion Points and Insights
1. Cheap Speech, Generative AI, and the First Amendment
- Eugene Volokh reflects on his landmark “Cheap Speech” paper:
The internet and now AI substantially lower barriers to public speech; while the internet enabled more people to become speakers, AI complicates things — outputs are not always fully anticipated by software authors or prompters.
- Quote:
“Even if the technology was nascent, you could really see the writing on the wall... But it's not so clear with AI... this does raise, I think, new issues of speech protection in a way that the internet did not really raise.” (04:36)
- First Amendment Status of AI Outputs:
Volokh contends that AI-generated content is protected speech, though subject to traditional exceptions (libel, incitement, etc.).
- Listener Rights Approach:
Both Volokh and Rozenshtein argue First Amendment protection often hinges more on the rights of listeners (and of speakers “using” the AI as a tool), allowing courts to sidestep tough metaphysical questions about AI “authorship.”
- Quote:
“I suspect that this listener theory... is going to very quickly become the sort of conventional wisdom among both academics and especially judges when this gets to the courts…” (15:54)
2. Is AI’s Inflection Point Different From Previous Tech?
- Panelists debate the novelty and disruptive force of generative AI:
Some technological features (like the unpredictability of output) make AI a distinctly more complex regulatory and legal challenge.
- Chinny Sharma:
“...I get frustrated when people call like, you know, 2021 or 2022... an inflection point... this has been happening behind the scenes... But... AI is no different than any other technology... that also feels wrong... it’s more complicated and it's like a both and situation.” (08:43)
3. First Amendment, Tools, Content Neutrality, and Liability
- Tools for Speech Analogy:
Government can regulate tools (like bullhorns, printing presses, or LLMs) for content-neutral reasons (e.g., noise at night), but content-based restrictions face strict scrutiny.
- Quote (Volokh):
“The law... distinguishes... between restrictions that are justified by the non-content features of the tool or... by the content.” (18:01)
- Safety Audits and Content Neutrality:
Most LLM “safety” concerns revolve around content, not the tool itself. Some content-neutral concerns (e.g., addiction) are possible but rare.
- Quote:
“Almost invariably when people talk about LLM safety, it's a worry that the LLM will output material that is harmful... that flows from the content of the output.” (21:01)
- Defamation Cases and Legal Standing:
AI outputs are taken seriously — real libel cases are arising, e.g., Bing merging two people's biographies and ChatGPT hallucinating an embezzlement claim.
- Battle v. Microsoft (Bing) and Walters v. OpenAI discussed as ongoing cases.
- Quote (Volokh):
“If this had been a newspaper that did it, it would be a very solid libel case... Reality, people do pay attention to such things, and thus liability follows reality.” (28:22)
4. The Regulatory State: Agencies, Courts, and Innovation
- Pros and Cons of Agency Regulation:
Agencies bring expertise and speed but face resource constraints, politics, legal limits, and the risk of overreach.
- Dynamic of “Letting Things Cook” vs. Early Regulation:
Debate over the dangers of premature regulation (risk to innovation) versus regulatory inertia (public harms).
- Quote (Rozenshtein):
“Premature regulation... I'm concerned about it whether or not it’s from the courts... It's from the agencies. Right. I do think. I think you got to let them cook a little bit.” (38:41)
- Analogies: Cars, Nuclear, AI
Historic regulatory examples suggest both over- and under-regulation risks (e.g., too slow on cars, too fast on nuclear).
- Quote (Rozenshtein):
“If you just look at the scaling law graphs. Even if we just stopped today... Just the effect of that ramifying through the industry and society and economy over the next 10 years would be massive.” (36:41)
5. Federalism: State vs. National Regulation
- California as De Facto National Regulator:
Large state actions (e.g., SB 1047) have national effects. There is debate about whether federal preemption is warranted for issues with broad spillover.
- Dormant Commerce Clause:
States may be constitutionally constrained from regulating in ways that excessively burden interstate commerce, especially when the focus is national or international harms.
- Quote (Rozenshtein):
“Congress should seriously consider federal law preempting state safety legislation... not primarily a California based concern. That is a concern for the nation as a whole.” (47:09)
- Laboratories of Democracy vs. Uniform Federal Standards:
Some areas (e.g., copyright, broadcasting) are federally preempted for good reason — panelists suggest AI safety could merit similar treatment.
- Quote (Volokh):
“We believe in laboratories of democracy... and we believe in uniformity.... I think it's a mistake to assume that everything needs to be... available for state action.” (74:54)
6. International Dimension
- Brussels Effect & Balkanization:
Discussion about Europe’s regulatory impact, Chinese/Thai/Turkish restrictions, and the prospect of a fragmented (“balkanized”) AI ecosystem, possibly with local, censored models and restricted global interoperability.
- Quote (Volokh):
“If the Thai government says, look, you...better make sure that [the AI] doesn't say anything to insult the king... AI companies might deal with that... they could be more aggressive… exporting their rules to us.” (64:05)
- TikTok Precedent & Foreign Platform Influence:
Pending cases (e.g., TikTok law) could set precedents for when and how the US can (or should) restrict foreign-run AI and platforms for reasons of speech sovereignty.
7. Privacy, Personalization, and the Role of Agencies
- FTC as Potential AI Privacy Regulator:
Panelists agree the FTC is best suited to address privacy and personalization issues.
- Quote:
“The scariest thing I could imagine is a department of Artificial Intelligence Intelligence. ...I'm much more comfortable with the FTC taking a bite at that apple than... FTC doing AI law generally.” (54:05)
- Commercial Speech and Personalized Ads:
Personalized (but non-misleading) ads are protected, so courts would scrutinize any “pro-privacy” ban — annoyance is not enough to justify a legal restriction.
- Quote (Volokh):
“It's not enough just to say they're icky or they're bad, or they might get you maybe too effective at getting you things to buy, that just means they're persuasive.” (55:16)
- Liability vs. Privacy:
Pressure to use behavioral profiling for liability purposes can itself threaten privacy.
8. AI Agents, Code-as-Speech, and Criminal Acts
- AI-generated code that acts in the world (e.g., hacks) is generally not covered by the First Amendment:
Code is only “speech” when meant for human consumption or communication.
- Quote (Volokh):
“If it's not communicated to a human... that's action, that's not speech.” (81:14)
- Sorrell v. IMS Health and Limits of ‘Data Is Speech’
Data as speech is protected only if it is, in essence, communication from human to human, not pure machine action.
Notable Quotes & Memorable Moments (Timestamps Included)
- On Listener Rights and AI Output:
- “The rationale would apply here as well in two ways... [1] I want to read this racist argument... [2] listeners as speakers... I asked it to compose a Facebook post... I’m entitled to get that free from government.” – Eugene Volokh (10:54)
- On Labs of Democracy vs. Federal Preemption:
- “Laboratory democracies are great until those laboratories create externalities whose negative effects are greater than the benefits.” — Alan Rozenshtein (76:57)
- On Regulatory Uncertainty:
- “If you were to ask what should the law do with railroads in 1860, you'd be like, I don't know, we'll have to wait and see.” — Alan Rozenshtein (36:58)
- On International Pressure and “Balkanized” AI:
- “If it just says no about Tiananmen Square, what other more subtle things there might be would there be? Should there be a different First Amendment rule for that...?” — Eugene Volokh (68:36)
Important Timestamps for Major Segments
- Cheap Speech & Origin of Issue: 04:36 – 07:20
- AI as an Unprecedented Moment: 07:21 – 10:27
- Listener vs. Speaker Rights and Legal Doctrine: 10:54 – 16:37
- Content Neutrality in AI Regulation: 20:38 – 25:23
- Defamation/Libel Cases and Legal Standing: 25:48 – 32:06
- Agencies vs. Courts in Regulating AI: 33:36 – 41:50
- Federalism and State vs. Federal AI Laws: 43:34 – 51:34
- Privacy, Personalized Ads, and Liability: 51:34 – 60:14
- International Spillovers, Foreign Influence, and Balkanization: 61:06 – 72:49
- AI Agents, Code-as-Speech, and Criminal Acts: 81:14 – 84:57
Tone & Style
The panel is lively, intellectually rigorous, and at times wry. The speakers examine each other’s arguments in detail, field practical and philosophical questions, reference real cases, and are not afraid to disagree or reveal uncertainty — all in the service of clarity. They also include humor and asides that make the complicated material welcoming even as it remains serious.
Useful for the Uninitiated
Anyone interested in the legal thicket surrounding generative AI, liability, free speech, national vs. state regulation, privacy, and geopolitics will walk away with a strong grounding in the key issues being debated today — and a sense of the uncertainty and stakes ahead.
