
Isabella Royo
I'm Isabella Royo, intern at Lawfare, with an episode from the Lawfare archive for October 5, 2025. This week, OpenAI launched Sora 2, an artificial intelligence video and audio generation model that creates realistic videos based on text prompts. In the days since its launch, Sora 2 has lent new force to concerns about AI-generated videos that infringe copyright, defame individuals, generate obscenity or child sexual abuse material, or otherwise violate the law. Whether and how OpenAI will be held responsible for different kinds of unlawful generated content is unclear. For today's archive, I picked an episode from May 2, 2023, in which Quinta Jurecic and Matt Perault sat down with Chris Cox and Ron Wyden, the authors of Section 230 of the Communications Decency Act, which shields online sites from liability for tortious content posted by third-party users. They discussed the applicability of Section 230 to AI-generated output, the purpose of Section 230, First Amendment implications, and more.
Quinta Jurecic
I'm Quinta Jurecic, senior editor at Lawfare, and this is The Lawfare Podcast, May 2, 2023. Today we're bringing you an episode of Arbiters of Truth, our series on the information ecosystem. Generative AI products have been tearing up the headlines recently. Among the many issues these products raise is whether or not their outputs are protected by Section 230, the foundational statute that shields websites from liability for third-party content. Who better to talk through this question with than the people who wrote Section 230 in the first place? Together with Matt Perault, director of the Center on Technology Policy at UNC Chapel Hill, I sat down with Senator Ron Wyden and Chris Cox, formerly a US congressman and SEC chairman. Cox and Wyden drafted the statute together in 1996, and they're skeptical that its protections apply to generative AI. Two notes before we begin. First, Senator Wyden was only able to join us for the first part of this conversation, so you'll hear a discussion with him and Representative Cox before we continue the discussion just with Representative Cox. And second, a disclosure: Matt consults on tech policy issues, including with platforms that work on generative artificial intelligence products and have an interest in the issues discussed. It's The Lawfare Podcast, May 2: Cox and Wyden on Section 230 and Generative AI.
Interviewer/Host
The two of you drafted Section 230, and you have repeatedly spoken since the legislation was passed in 1996 about the original intent behind the bill. You spoke recently to the Washington Post about how you think Section 230 would map onto generative AI tools. Do you think Section 230 is going to protect generative AI platforms?
Senator Ron Wyden
Both of us have said Section 230 does not protect generative AI like ChatGPT or Google's Bard. And let me just give you the quick reason why AI tools like ChatGPT, Bard, Stable Diffusion, and others being rapidly integrated into popular digital services wouldn't be protected. And it's not a particularly close call. What Chris and I were working on all these years was protecting users and sites for hosting and organizing user speech. It shouldn't protect companies from the consequences of their own actions and products. Now, systems are different, but generally Bard, ChatGPT and the like are something else. They are creating content. And the consensus from courts today is that when a company creates or contributes to creating content, the company is also responsible for it. That is exactly what Chris and I wrote.
Chris Cox
Yeah, I'll just add to that that, you know, I've often been asked since we talked to the Washington Post, how does Section 230 apply to AI? As if it's a yes or no question. But what Section 230 is very clear about is that if you are a content creator, you are not protected. So the real question is, in any particular case, what has been the role of AI? AI itself defies clear description. It's a very broad field. The simple answer to that imperfect question is that when AI is the acknowledged creator of unique content that's illegal, Section 230 will not be a defense. But that's only one way of thousands that AI is already being used and can be used. Even ChatGPT, which is so often discussed, is just one example of generative AI, which is itself a subset of all of artificial intelligence. And even GPT-4, the latest version of ChatGPT, is itself a fancy version of a neural network, in other words, a mathematical system that learns skills by analyzing data. That's what powers the digital assistants on our phones and our devices, like Siri and Alexa. So with the broader universe of AI, because it covers so many existing and possible applications, it's necessary to know first what kind of application we're talking about before we can get to the Section 230 analysis. Is it facial recognition, or is it self-driving cars? Is it something as straightforward as enhanced Internet searches? Or is it the creation of a novel or a Broadway play or a memo giving tax or medical advice? The answers to all those questions are going to be different in each case, depending on the facts. So Ron is right. We agree that when AI is creating content, then by the terms of Section 230 itself, there's no Section 230 protection. The real question, though, is what are the facts? What's really going on?
Senator Ron Wyden
Hey folks, let me offer one other thought, because for 25 years, my friend Chris has always been giving such thoughtful and detailed answers to these questions. And let me try to boil it down. You know, on my end, what I've always felt is that Section 230 is about protecting users. It's about expanding speech; it's about democratizing the Internet. So that is fundamentally different than ChatGPT. ChatGPT does not do those things. So I very much share Chris's much more analytical and detailed kind of answer. I wanted you to know that what I just described to you is what I'm telling members of Congress who are asking me. They come up and say, Ron, I know you've been involved in this stuff for a long time. And I tell them the stories about how Chris and I got started, like in the CASA book, where the story was Chris and Ron do lunch, and we started talking about this. But I just wanted you to know that in terms of what members are picking up on, it's the difference between 230 and ChatGPT that I just gave you.
Chris Cox
Yeah. And as always, I'm completely in accord with what Ron is saying about this as well. You know, there are some easy answers to this question. I did the hard part, but the easy part is copyright. And a lot of what is being stirred up right now about ChatGPT and about generative AI across the board is, what is the role of copyright? Depending on the extent of an AI product's use of other creators' content, it's easy to imagine a legitimate claim for copyright infringement. I'm a member of the Authors Guild, and the Authors Guild has been very active in this space. Section 230 expressly does not provide any protection against claims of copyright infringement. In fact, it doesn't have any effect on the application of any law pertaining to intellectual property. So if the question concerns whether AI-created works are themselves copyrightable, or who exactly is legally responsible when AI infringes copyrighted works, those questions can all be answered in the courts based on the facts of each case, and entirely without regard to Section 230.
Senator Ron Wyden
And I'm an author's kid, so I second that response as well.
Narrator/Host
Excellent. So I want to put on the table a different view of Section 230's relationship with generative AI, just to kind of get you to respond and spell out your thinking a little more. So Jess Miers, who is a lawyer at the Chamber of Progress, has made the argument that Section 230 would protect generative AI outputs on the grounds that, in her words, the systems are, and I quote, entirely driven by third-party input and do not invent, create, or develop outputs absent any prompting from an information content provider, i.e., a user. And of course, importantly, Jess also notes that these systems do not, quote, expressly or impliedly encourage users to submit unlawful queries. So I'm curious how you both would respond to that, since it's such a contrast with your own views of the statute.
Senator Ron Wyden
Let me tell you why I don't share that view. I mean, search engines and chatbots, and there's a lot of confusion about what's what, are very different. Search engines, which were our focus, of course, provide access to information and respond to user prompts with potential sources of information from other websites. That is what Chris and I conceived of as 230-protected content. Chatbots are something else. They respond to user commands, but the answers they generate aren't merely pointing to somebody else's website or video. They are generating content. And chatbots are often advertised or promoted, in fact, as providing complete answers. So if some search engines at some point down the road, and this is what Chris was touching on, begin to look more like chatbots, the lines may begin to blur, and we'll have to think about that. But we want people to understand that search engines and chatbots are very different.
Chris Cox
Yeah, I've read perhaps not all, but I hope most, of what Jess has written about this, and I'd just refer to our earlier discussion: the facts matter a great deal. When we talk about AI, or we talk about ChatGPT, it depends entirely on how you're using that AI and what you're asking ChatGPT in specific to do. If you ask ChatGPT to write a novel, which is entirely possible, there isn't any question at that point that ChatGPT is creating content.
Interviewer/Host
So could we move a little bit from describing the state of the law to thinking about what the law should be? So let's assume that your view is correct: judges agree with you that Section 230 does not apply to generative AI tools, or at least, taking Congressman Cox's point, to certain iterations of those tools. What's your thought on whether that's a good thing or a bad thing? Should generative AI tools have some form of liability protection similar to what Section 230 provides to content hosts?
Chris Cox
If I can, Ron, let me jump in on this one. This goes really to the heart of what Ron and I have thought from the beginning, which is that with content creation, there needs to be responsibility. The law needs to place responsibility on people who do bad things. And if you are the content creator and there's something illegal about your content, you are responsible. And so if ChatGPT, or if some future form of generative AI, is, by our general agreement, the content creator in a specific situation, then yes, absolutely, normatively, that creator, the AI that created it, needs to be responsible. Which of course then raises questions like, you know, who exactly is going to be responsible in that situation? That's going to be a little tougher, and possibly could be the source of new legislation. But I also have reason to think that we have so much accumulated law on the books that we could probably sort that one through on the basis of existing legal principles.
Senator Ron Wyden
Yeah, I think Chris makes another important point. You know, people ask, so generative AI, it's got to be different from early Internet startups. The fact was, what we were interested in doing is making sure that we could have innovation that helps the typical American: the users who want to connect with each other, the small businesses and the like. Now all of a sudden the question is, what about these big corporations, multinationals in some instances, calling for innovation, you know, putting untested technology into use without figuring out whether it's harmful or whether it's going to benefit our society. And I've got a big stack of letters documenting my concerns with various technologies that weren't used responsibly. So as Chris said, I think we ought to be cautious about how we proceed here in terms of reopening Section 230. And just for what it's worth, to take this into a slightly different area, you know, there are scores of proposals now to change 230, and I've always had a two-part test. First, what does it mean for speech? Because what Chris and I said from the very beginning is that we wanted to encourage as much discussion as possible, particularly among typical users. And second, what does it do for moderation? And I haven't seen any bills, frankly, that come close to passing that two-part test. And in fact, one other reason I think legislators are being more cautious (and I heard you were going to ask about projections for bills going to the floor) is that they're seeing that SESTA/FOSTA, which was ballyhooed and much proclaimed as the big answer to sex trafficking, has not worked out very well. You basically pushed the bad guys into the dark web, and vulnerable people seem to suffer. And there are not a lot of people holding rallies today for SESTA/FOSTA.
Interviewer/Host
For some people in the tech industry like me, this is really the first time it's been possible to see the landscape that you confronted in 1996, when all the potential benefits and all the potential harms of the Internet were in front of you and you needed to decide the right regulatory roadmap for the future. I think even many critics of 230 would concede this has been an impressive framework, given how it's withstood the test of time, even if there are many people who would want to shift various components of it. I'm curious, given your work then, and given where you stood at that moment trying to think about what the right regulatory framework for the Internet would be, how do you think lawmakers should approach this issue now?
Senator Ron Wyden
Well, I think the practical step now is to focus on what we call the Algorithmic Accountability Act. And the fact is, there are very real harms that people, the typical person, can suffer as a result. I mean, these algorithms can control, you know, who's hired, who's fired, people buying insurance, people buying prescription drugs. I haven't gotten a chance really to talk to Chris much about this, so just know I'll be interested in his response. But I think the next step, and I'll be introducing legislation on this, is, you know, again, to go to the kind of framework that makes sense to ensure innovation but also to protect consumers. And I think we ought to have information about audits and things of this nature. And in the parlance of the United States Senate, I will tell everybody following this discussion that I am happy to yield the discussion to my friend of more than a quarter century, Chris Cox. You will not find many differences between us. I know that when we were asked about ChatGPT, you know, we basically started finishing each other's sentences with respect to whether there ought to be Section 230 protection. So I think you'll enjoy listening to Chris, and you can pretty much operate under the assumption that we're in agreement. So I apologize for the bad manners and look forward to continuing this discussion. Chris, you'll prosecute the case for both of us. Okay.
Chris Cox
Well, it's an excellent question, and my heart goes out to people in Congress right now who have to grapple with this and from whom answers are being demanded while they are standing at the front end of the great unknown. We don't know where this technology is headed, and we have to use our best judgment. But there are a couple of things that should serve to calm things down. One is that the law, particularly in its written form, statutory form, positive law, needs to be not so much forward-looking as persistent. It needs to be good, to the extent that human beings can make it so, for all time, or if not all time, then for the indefinite future. It shouldn't be right or wrong depending on the next change in technology. There have got to be enduring principles. And that's what Ron and I were thinking of when we wrote Section 230. This was really the first law written about the Internet. And so Legislative Counsel in the House and in the Senate wanted to define a whole bunch of technical terms and so on. And we pushed back on that very hard, because we knew technology was rapidly changing. And if the law specifies with some technical particularity that you need to have a rammer frammer, you need a gizmo, or we're going to regulate these gizmos in a certain way, then practicing lawyers, which I've been one of for 20 years of my career, have to give advice to their clients that, you know, we're not sure what will happen if you stray from these specified technologies and do something new that the law and the courts haven't addressed yet. That's all going to slow down technology. So what you want to do is stick with, if you will, plain English and concepts that will endure and be applicable, with the assistance as needed of courts and judges, to the specific facts of cases as the technology goes forward. So that's thing one. 
The other thing that should make us a little bit less concerned about all that could happen here without regulation is thinking about what can go wrong with regulation, because there is a lot that can go wrong with regulation, and that should give us some humility. If you don't know what it is that you're regulating, and most people don't yet, there's a great deal of damage and harm that you can do. So you want to have a little bit of humility. And so I think those two things together would be my modest suggestions to the people who have the very difficult task of dealing with these challenges today.
Narrator/Host
A lot of the arguments against substantially reforming or rolling back 230 often point to reliance interests: that, you know, the Internet as we know it has essentially been built on the bedrock of Section 230, such that altering that could drastically change how platforms behave and potentially lead to really negative consequences. And I would certainly agree with Senator Wyden's point that SESTA/FOSTA is an unfortunately good example of that. When it comes to generative AI, though, I've been wondering whether, you know, that same argument might not hold, precisely because the technology is very new; it simply hasn't been around long enough for that reliance to have been built up. So I'm curious whether you think that should change or shape in any way how we think about potential regulation of generative AI.
Chris Cox
Well, I don't know that there are deep reliance interests right now with things that are so brand new. I mean, ChatGPT has been available for widespread commercial use for less than a year, it seems. So rather than worry too much about the reliance interests, I would just try and focus on the potential benefits and how we can maximize those, while protecting ourselves from the obvious harm that could also come from a runaway computer world that is under nobody's control, or worse yet, under malign control.
Narrator/Host
So the senator talked about the need to think carefully about algorithms and thinking about tech regulation, and that's a really nice lead in to the case that's currently before the Supreme Court, Gonzalez versus Google, which of course has to do with the extent to which platforms can be held liable under 230 for algorithmic amplification systems that they build. When the justices were hearing this case during oral argument, they seemed quite hesitant to kind of take the plunge and draw a line there. I'm curious what your views are on the case and what you think the outcome should be.
Chris Cox
Well, Ron and I filed an amicus brief in that case, so I'm very familiar with it. I listened to the oral argument, and as you say, it was apparent in the oral argument that the justices were having trouble with the idea that YouTube's Up Next video feed was any different than search recommendations on Google. The justices were questioning how video recommendations were different than text recommendations, since prioritizing material on the Internet is really what platforms do. Every publisher on the web has to organize and present third-party content in some way, and that entails editorial judgment, which is protected by both the First Amendment and Section 230.
Narrator/Host
So let's then talk about the current reform landscape for 230. Matt and I have been working with a great team to keep track of the different legislative proposals to change 230 on a tracker on Slate. At least in my view, and I think I'm also speaking for Matt here, our impression is that there's not as much activity now as there has been in previous Congresses. First off, you know, are we right about that? And second, if we are, I'm curious if you have any sense of why that might be.
Chris Cox
Well, I think you're right, and I think the reason is that Congress has been, figuratively speaking, beating its head against the wall on this question for several years now, with Democrats coming at the question from one direction and Republicans from the other, you know, based particularly on social media issues and their view of whether there is too much or too little content moderation. And what they're finding is that it's very difficult to draw these lines, and in particular it's difficult when the solution is built around putting the government in charge, because ultimately these are questions about speech, and you run into significant First Amendment issues when the government takes control. So I think there's a little bit of learning, maybe a lot of learning, that's gone on over the last several years, and that has caused people to pull back from some of the early rhetoric. Remember the Trump and Biden campaign slogans of, you know, repeal Section 230 and that'll solve everything, and so on. And as people have gotten more sophisticated in their analysis, they are coming up with, certainly in Congress, more targeted solutions and things that don't facially offend the First Amendment. The states, for their part, have been, I'd say, a little bit more aggressive and a little more risk-taking in their willingness to confront the First Amendment head on.
Narrator/Host
So I know we definitely want to talk about the state-level proposals. Before we do that, though, I'm curious for your sense of the increased sophistication, as you put it, in addressing 230. I will say that's certainly my impression as well. It seems like in some ways we've moved a long way from the sort of initial calls to repeal 230 to more targeted approaches. I guess my question to you is whether you would expect to see that kind of increased nuance and sophistication continue to grow, or whether we've reached kind of a plateau insofar as, as you put it, Congress is kind of beating its head against the wall. You know, is there a world in which, two or three years from now, policymakers have had enough time to kind of sit with this and really think it through, such that we could potentially get reform, if folks still do want reform, that would make sense? Or have we kind of hit the ceiling of what we can expect?
Chris Cox
Well, rather than thinking of this as a plateau, potentially, or thinking that someday we will reach the promised land where everyone is enlightened on these topics, I'd suggest that it's going to look more like a sine wave, and that the sine wave is going to be determined by the calendar. We have elections every two years for Congress and every four years for president. And so the peaks on those sine waves are going to be in November of the even-numbered years. And those peaks are going to represent not sophistication, but an utter lack of it, because people are going to be beating their chests and taking more extreme positions in order to curry favor with their particular base of support. The sophistication will be at the troughs of the curve, and that'll be in between the elections. And that's kind of where we are right now. So that's a good thing. But I think there's going to be an ebb and flow.
Interviewer/Host
Obviously, the activity on this issue is not limited to what's going on in Congress or in the executive branch in Washington. It's also been an incredibly active area at the state level, with legislation in Texas and Florida pushed by the right that's focused on limiting platforms' ability to moderate content, and then legislation in New York pushed by the left which sort of takes the opposite view, encouraging platforms to do more moderation. You have, as a member of the board of directors of NetChoice, which has taken a leadership role in fighting back against some of these legislative efforts, obviously been involved in some form in challenging some of these laws. I'm curious if you can speak a little bit to how you see the state developments. What's your view of where things currently stand, and where do you see it going?
Chris Cox
Yeah, so you mentioned the NetChoice litigation. There are two leading cases, one in the Fifth Circuit, NetChoice v. Paxton, and one in the Eleventh Circuit, NetChoice v. Moody. Those are both First Amendment cases, although Section 230 appears in both of them as a peripheral issue. But the basic question is whether the state can insert itself as second-guesser over the content moderation decisions of social media platforms, not to mention all manner of Internet websites of other kinds. The editorial discretion inherent in content moderation is protected by the First Amendment. And that protection is much broader than Section 230, which, as we were discussing earlier, has a number of exceptions in areas where it doesn't apply at all, such as copyright. Those cases are pending grants of certiorari at the Supreme Court. So in the meantime, Texas and Florida, the two states involved, are enjoined from enforcing those laws.
Interviewer/Host
Congressman Cox, thank you so much for your time. It was great chatting with you.
Chris Cox
All right, happy to join you and do it again.
Quinta Jurecic
You've been listening to Arbiters of Truth, a Lawfare Podcast series on the information ecosystem. The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter at patreon.com/lawfare, where you'll also get access to special events and other content available only to our supporters. The podcast is edited by Jen Patja Howell, and your audio engineer for this episode was Noam Osband of Goat Rodeo. Our music is performed by Sophia Yan. As always, thanks for listening.
Podcast: The Lawfare Podcast
Episode: Lawfare Archive: Cox and Wyden on Section 230 and Generative AI
Air Date: October 5, 2025 (original interview: May 2, 2023)
Host: Quinta Jurecic (Lawfare), joined by Matt Perault
Guests: Senator Ron Wyden, Congressman Chris Cox (Authors of Section 230)
This episode revisits a critical Lawfare discussion with Senator Ron Wyden and former Congressman Chris Cox—co-authors of Section 230 of the Communications Decency Act—about the applicability of Section 230 to generative AI platforms, like ChatGPT and Google's Bard. The conversation, set against the explosive growth and challenges of generative AI, explores the statute's original intent, whether its liability shield should (or does) extend to AI-generated content, and the broader implications for First Amendment rights, innovation, and regulatory approaches.
[03:37–07:10]
“It shouldn't protect companies from the consequences of their own actions and products… Bard, ChatGPT and the like are something else. They are creating content.” (Wyden, 03:56)
“When AI is the acknowledged creator of unique content that’s illegal, Section 230 will not be a defense.” (Cox, 04:55)
[10:31–11:33]
Wyden: Draws a sharp distinction between search engines, which aggregate existing content, and chatbots, which create new content in response to user input.
“Search engines... provide access to information… That is what Chris and I conceived of as 230-protected content. Chatbots are something else. They… are generating content.” (Wyden, 10:31)
Cox: The facts of each AI’s use are crucial—if an AI writes a novel, it’s the creator.
[08:20–09:44]
“Section 230 expressly does not provide any protection against claims of copyright infringement.” (Cox, 08:20)
[12:12–13:55]
“If you are the content creator and there’s something illegal… you are responsible. And so if ChatGPT… is… the content creator… then yes, absolutely… that creator… needs to be responsible.” (Cox, 12:42)
[13:55–16:16]
“SESTA/FOSTA… was ballyhooed as the big answer to sex trafficking, has not worked out very well. You basically pushed the bad guys into the dark web and vulnerable people seem to suffer.” (Wyden, 15:31)
[17:34–19:52]
Wyden: Calls for targeted legislation to ensure algorithm accountability, focusing on transparency and consumer protection—e.g., audits—without stifling innovation.
“I think the practical step now is to focus on… the Algorithmic Accountability Act.” (Wyden, 18:14)
He proposes moving toward frameworks that balance innovation with fair and safe outcomes for users.
Cox: Stresses the need for durable, technology-neutral statutes that can adapt to unforeseen innovations:
“The law… needs to be… persistent. It needs to be good… for all time, or… the indefinite future. It shouldn't be right or wrong depending on the next change in technology.” (Cox, 19:52)
Lawmakers should be humble in the face of technological uncertainty to avoid unintended consequences.
[26:20–29:57]
“As people have gotten more sophisticated... they are coming up with...more targeted solutions and things that don't facially offend the First Amendment.” (Cox, 26:20)
“The peaks… are going to represent not sophistication, but an utter lack of it… The sophistication will be at the troughs… between elections.” (Cox, 28:50)
[29:57–31:51]
“The editorial discretion inherent in content moderation is protected by the First Amendment. And that protection is much broader than Section 230.” (Cox, 30:47)
Ron Wyden:
“Section 230 is about protecting users. It's about expanding speech; it's about democratizing the Internet. So that is fundamentally different than ChatGPT.” (07:10)
Chris Cox:
“If you ask ChatGPT to write a novel… there isn’t any question at that point that ChatGPT is creating content.” (11:33)
Wyden on AI Accountability:
“We want people to understand that search engines and chatbots are very different.” (10:31)
Cox on Lawmaking:
“If the law specifies… you need to have a rammer frammer… then practicing lawyers... have to give advice... we're not sure what will happen if you stray… That's all going to slow down technology. So… stick with plain English and concepts that will endure.” (19:52)
Cox on Political Cycles:
“The sophistication will be at the troughs of the curve and that'll be in between the elections… there's going to be an ebb and flow.” (28:50)
This episode provides a rare, direct look from the architects of Section 230 at how the law applies—if at all—to new generative AI technologies. Both Cox and Wyden agree that Section 230’s protections do not (and should not) extend to platforms that generate their own content, drawing bright lines around the scope of statutory immunity. The conversation also delivers critical context for policymakers on what effective, durable regulation should look like in the face of fast-moving technology, and why humility and caution remain essential virtues in lawmaking for the digital age.