So to Speak: The Free Speech Podcast Episode 228: Does Artificial Intelligence Have Free Speech Rights? Release Date: November 1, 2024
Introduction
In Episode 228 of So to Speak: The Free Speech Podcast, hosted by Nico Perrino of FIRE (Foundation for Individual Rights and Expression), the discussion centers on the intersection of artificial intelligence (AI) and free speech rights under the First Amendment. Recorded live at the First Amendment Lawyers Association's fall meeting in FIRE's D.C. office, the episode features an esteemed panel comprising Samir Jain, Andy Thorps, and Benjamin Wittes. The conversation delves into the legal, ethical, and societal implications of AI technologies like ChatGPT and their relationship with free speech.
I. AI and First Amendment Rights: Theoretical Perspectives
Benjamin Wittes opens the panel with a provocative assertion from his March article, suggesting that AI models like ChatGPT possess First Amendment rights akin to those of a major media company.
Benjamin Wittes (05:07): "You have functionally, though not doctrinally, created a machine that operates within the space and protection of the First Amendment."
Wittes explains that while courts do not currently recognize AI systems as holders of First Amendment rights, the way companies deploy AI to generate content introduces complexities. He likens AI models to traditional media outlets, suggesting that the freedom to produce and disseminate speech could extend to the AI tools facilitating it.
II. Liability under Section 230 and Generative AI
The discussion shifts to Section 230 of the Communications Decency Act, which provides that interactive computer services are not to be treated as the publisher or speaker of content provided by their users. The panel examines whether this protection extends to AI-generated outputs.
Samir Jain articulates that the applicability of Section 230 to AI is fact-specific, particularly regarding whether AI systems contribute substantively to content creation.
Samir Jain (12:29): "If a generative AI system creates a hallucination... it can play at least some role in the creation of that content."
Andy Thorps concurs, emphasizing that tools like ChatGPT are more creators than mere hosts of content, potentially placing them outside the traditional protections of Section 230.
Andy Thorps (14:21): "ChatGPT... seems to me like it's more of the latter [creating content] rather than... just hosting."
Benjamin Wittes adds another layer by questioning the nature of content generation, especially when AI regurgitates training data without original user input.
Benjamin Wittes (15:45): "If they go out and take everything I've ever written and then regurgitate that, that is not in the domain of 'I submitted that content to them.'"
III. Intellectual Property and AI Training Data
The panel addresses ongoing litigation concerning AI training on unlicensed data sets, such as lawsuits by The New York Times and claims against OpenAI for using copyrighted materials without consent.
Benjamin Wittes categorizes these concerns under intellectual property law rather than First Amendment issues, expressing skepticism about the legality of using copyrighted, non-public domain content for AI training without permission.
Benjamin Wittes (16:31): "Anything that's not human generated is not copyrightable using the text of the act... it's a remarkable proposition... it strikes me as [questionable]."
Samir Jain draws parallels with search engines and the use of "robots.txt" to manage website crawling, suggesting similar technical solutions might evolve for AI training data permissions.
Samir Jain (18:08): "A similar kind of technical solution... allows sites to sort of give permission or not for their content to be used as training data for AI models."
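For readers unfamiliar with the mechanism Jain references: robots.txt is a plain-text file served from a site's root that tells crawlers which paths they may fetch, and the same convention is already being extended to AI training crawlers. A minimal sketch follows; the GPTBot and CCBot user-agent names come from OpenAI's and Common Crawl's public documentation, and compliance is voluntary, since the protocol has no enforcement mechanism.

```
# robots.txt — served from the site root, e.g., https://example.com/robots.txt

# Permit conventional search indexing:
User-agent: Googlebot
Allow: /

# Opt out of AI training crawlers (names per each operator's public docs):
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Whether a given crawler honors these directives, and whether opting out of training can be separated from opting out of search indexing, remains exactly the kind of open question the panel identifies.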
IV. Deepfakes, Misinformation, and the First Amendment
A significant portion of the discussion revolves around AI-generated deepfakes and misinformation, their potential harm, and how existing legal frameworks address them.
Samir Jain highlights the recent enjoining of California's deepfake laws on First Amendment grounds, pointing out the challenges in regulating AI-generated falsehoods that fall outside traditional defamation statutes.
Samir Jain (20:08): "There's no question that disinformation and misinformation is a real problem... a lot of it's going to be just lies that are, notwithstanding the harm they caused, protected by the First Amendment."
Andy Thorps shares insights from ongoing litigation in Georgia, where a lawsuit against OpenAI alleges defamatory outputs from ChatGPT. He underscores the complexity of holding AI companies liable and anticipates that existing laws on defamation and false light may apply more straightforwardly to human perpetrators than to machines.
Andy Thorps (24:45): "If someone ... spits out something that is in some way false and defamatory... that person could be a defendant."
Benjamin Wittes raises concerns about the scale and autonomy of AI-generated misinformation, particularly in scenarios where AI acts as an independent broadcaster of false information.
Benjamin Wittes (21:17): "In the political space... it can't be right."
V. Mandatory Disclosure and Watermarking
The conversation moves to the topic of mandatory disclosure of AI-generated content, including recent legislative efforts like California's watermarking law and FCC considerations for election-related disclosures.
Andy Thorps supports mandatory disclosure as a sensible measure with minimal opposition.
Andy Thorps (26:07): "I think it's a fantastic idea... pretty hard to come up with an argument against it, personally."
Benjamin Wittes contends that such mandates may be unenforceable due to the proliferation of open-source AI models, likening them to the ineffective Video Privacy Protection Act, the law governing video rental records.
Benjamin Wittes (26:29): "It's completely unenforceable... somebody will replicate that set of systems without the watermarking requirement."
Samir Jain differentiates between government-mandated disclosures and industry standards, emphasizing the potential First Amendment challenges of compelled disclosure.
Samir Jain (26:31): "There's a real question whether that kind of mandatory compelled speech or labeling survives First Amendment scrutiny."
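To make the disclosure debate concrete, here is a minimal, purely illustrative Python sketch of machine-readable labeling, assuming a hypothetical JSON "sidecar" scheme rather than any specific standard (industry efforts such as C2PA's Content Credentials instead embed cryptographically signed manifests in the file itself):

```python
import hashlib
import json
from pathlib import Path

def write_disclosure_sidecar(media_path: str, generator: str) -> Path:
    """Write a JSON sidecar labeling a media file as AI-generated.

    Hypothetical scheme for illustration only; not any real standard.
    """
    media = Path(media_path)
    # Hash the file so the label is bound to this exact set of bytes.
    digest = hashlib.sha256(media.read_bytes()).hexdigest()
    label = {
        "subject_sha256": digest,
        "ai_generated": True,
        "generator": generator,  # e.g., a model or tool name
    }
    sidecar = media.parent / (media.name + ".disclosure.json")
    sidecar.write_text(json.dumps(label, indent=2))
    return sidecar

# Usage (hypothetical file and model name):
# write_disclosure_sidecar("campaign_ad.png", "example-model-v1")
```

The sketch also illustrates Wittes's enforceability objection: a sidecar or metadata tag vanishes the moment content is re-encoded, screenshotted, or re-hosted, so any labeling regime that depends on the generator's cooperation is only as strong as that cooperation.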
VI. Comparisons with Past Technological Revolutions
When asked whether AI represents a fundamentally different technological revolution requiring new First Amendment approaches, the panel reflects on historical precedents like the internet.
Andy Thorps draws parallels with the internet's initial bewildering impact on existing laws, arguing that the legal system tends to adapt without overhauling foundational principles.
Andy Thorps (31:48): "The law will find a way... to tackle these issues without a real sea change in the underlying law."
Benjamin Wittes warns of a future where AI serves as autonomous content creators, complicating liability and regulation beyond traditional frameworks.
Benjamin Wittes (34:38): "If that becomes a predominant use case, that ends up injuring a lot of people... profoundly and just profoundly different."
VII. Audience Q&A: Truth, Accuracy, and Disclosure in AI Outputs
During the audience segment, several questions emerge regarding the reliability of AI-generated content, its potential for misinformation, and the necessity of disclosure, especially in interactive contexts like adult entertainment chatbots.
Benjamin Wittes discusses experiments demonstrating the ease with which AI can be manipulated to produce harmful content, stressing the difficulty in assigning liability when users disseminate such outputs in good faith.
Benjamin Wittes (39:46): "The user is acting in entirely good faith... it's difficult to figure out who, if anybody, is the bad faith actor."
Andy Thorps elaborates on ongoing litigation and potential negligence claims against users for uncritically republishing defamatory AI outputs.
Andy Thorps (45:59): "... someone who republish[es] it to the world... could make a strong argument that doing so is negligent."
Kylie Work raises concerns about truth and accuracy in AI-generated political content, questioning where the line lies between misinformation and false advertising.
Kylie Work (52:49): "What if AI answered... how does truth and accuracy come into play with AI online searching and AI-generated images?"
Samir Jain contemplates whether AI necessitates a reevaluation of existing legal standards concerning truth and falsity, given the scale at which AI can generate content.
Samir Jain (50:56): "... capacity of AI to change things in a way that we need to adjust where we say, actually this false content can't be lawful anymore."
VIII. Conclusions and Final Thoughts
In wrapping up, the panel emphasizes the need for thoughtful regulation that protects democratic processes and individuals without stifling technological innovation. Benjamin Wittes advocates for proactive regulation to manage AI's impact on democracy and individual rights, while Samir Jain underscores the importance of adapting legal frameworks based on the specific harms caused by AI technologies.
Benjamin Wittes (66:52): "The problem here is that it is going to develop in a wholly unregulated and way too libertarian environment."
Samir Jain (68:22): "How people are going to be able to differentiate between what's real and what's authentic and what's actually a deep fake is a second-order question..."
Notable Quotes
- Benjamin Wittes (05:07): "You have functionally, though not doctrinally, created a machine that operates within the space and protection of the First Amendment."
- Samir Jain (12:29): "If a generative AI system creates a hallucination... it can play at least some role in the creation of that content."
- Andy Thorps (26:07): "I think it's a fantastic idea... pretty hard to come up with an argument against it, personally."
- Benjamin Wittes (34:38): "Profoundly and just profoundly different."
Key Takeaways
- First Amendment Implications: While AI systems like ChatGPT are not currently recognized as First Amendment rights holders, their role in content creation challenges existing legal doctrines.
- Liability Under Section 230: Whether AI-generated content is protected under Section 230 depends on whether AI plays a substantive role in content creation, a question that remains largely unsettled.
- Intellectual Property Concerns: The use of unlicensed data for AI training raises significant IP issues, with ongoing litigation highlighting the contentious nature of this practice.
- Deepfakes and Misinformation: AI's ability to create realistic deepfakes poses new challenges for regulating misinformation, especially in politically sensitive contexts.
- Mandatory Disclosure Challenges: While disclosure of AI-generated content is widely supported, enforcing such mandates is problematic due to the decentralized and open-source nature of many AI models.
- Historical Context and Future Regulation: Comparing AI to past technological advancements suggests that while AI is a powerful tool, legal systems are likely to adapt existing frameworks rather than overhaul foundational principles.
- Regulatory Path Forward: The consensus among panelists leans toward the necessity of thoughtful, nuanced regulation that addresses specific harms without unnecessarily hindering technological progress.
Conclusion
Episode 228 of So to Speak: The Free Speech Podcast provides a comprehensive exploration of the complex relationship between artificial intelligence and free speech rights. Through expert insights and rigorous debate, the panel underscores the urgent need for legal and regulatory frameworks that address the unique challenges posed by AI technologies, ensuring that free expression is protected while mitigating the potential harms of misinformation and unauthorized content generation.
