
We’re checking in on the latest news in tech and free speech. We cover the state AI regulation moratorium that failed in Congress, the ongoing Character.AI lawsuit, the Federal Trade Commission's consent decree with Omnicom and Interpublic...
Corbin Barthold
It's the goal of the professional managerial class to take control of the models. They want to have audits where all of the model development is overseen and basically bring it back to the bad old days of Claude, where if I ask it, what are the 10 key works of the Western canon? Today it says that's subjective, but here, you know, Dante and Odysseus. I do remember it wasn't that long ago when it said that's a problematic question.
Ari Cohn
Somewhere I read of the freedom of speech.
Nico Perino
You're listening to So to Speak, the free speech podcast brought to you by FIRE, the Foundation for Individual Rights and Expression. All right, folks, welcome back to So to Speak, the free speech podcast, where every other week we take an uncensored look at the world of free expression through the law, philosophy, and stories that define your right to free speech. I'm your host, Nico Perino. Today we're going to do a tech check. That is, we're going to check in on the latest news in tech and free speech. And to do that we are joined by my colleague Ari Cohn. Ari, welcome back onto the show. Fourth time now, Sam has in my notes here.
Ari Cohn
Oh, wow. You know, it feels like just yesterday it was the first time.
Nico Perino
Well, we're going to give you a tie. We have these new ties that we give out to our guests. Or scarves, male or female. You haven't earned the Rolex yet. Okay, well, I don't think anyone's earned the Rolex yet.
Ari Cohn
The tie and the scarf, we'll see.
Nico Perino
You're going to have to talk to Sam about that. But Ari, you're the lead counsel for tech policy at FIRE. You do all things tech.
Ari Cohn
Yeah, tech touches me. And that just actually came out real bad.
Nico Perino
It reminded me of a Darkness song. You remember that band, the Darkness?
Ari Cohn
I love the Darkness.
Nico Perino
I love the Darkness too. They were kind of this old, this old school 70s style rock band.
Ari Cohn
Like actual, like old school.
Corbin Barthold
I believe in a thing called love.
Nico Perino
Yes.
Ari Cohn
Is this the first time we've had a singer on?
Corbin Barthold
You know, I'm going for copyright infringement in the first two minutes of the podcast.
Nico Perino
And that beautiful voice, listeners, that you're hearing is Corbin Barthold. He is internet policy counsel at TechFreedom. You also host a podcast, the Tech Policy Podcast.
Corbin Barthold
And Ari is my lost love, my 15-time guest. He used to be at TechFreedom.
Ari Cohn
Was it really 15 times?
Corbin Barthold
I mean, you're still invited back.
Nico Perino
Are we stomping on your territory here then with this podcast?
Corbin Barthold
You took my Questlove from my show.
Nico Perino
I don't understand that reference.
Corbin Barthold
Jimmy Fallon, you know, your right-hand man.
Nico Perino
I don't know who Jimmy Fallon is. I'm joking.
Ari Cohn
I can believe you don't know who Questlove is, but Jimmy Fallon is a bridge too far for me.
Nico Perino
All right, let's jump right in. On Thursday, May 22, the full House narrowly passed the One Big Beautiful Bill Act, 215 to 214, a one-vote margin there. And in it they had a provision halting state and local AI enforcement for a decade. But last Tuesday, on July 1, the Senate struck the state moratorium from the bill by a vote of 99 to 1. The House subsequently voted in favor of the Senate's changes to the bill, and on Friday, July 4, the President signed the One Big Beautiful Bill into law. Corbin, on X last Tuesday, you said: I hope Texas and Florida stand ready to counter every California and New York measure on AI equity, anti-racism, decolonize the algo, etc., with a colorblind, etc., mandate. You said the worst-case scenario is that the left mandates woke-scold AI, the right concludes AI is left-coded and bad, the right mandates AI slowdown, the left Bluesky crowd mandates AI slowdown, vicious downward spiral, future is dumb. There's a lot there in that tweet, Corbin. My assumption is you don't like that they didn't keep this moratorium in the bill.
Corbin Barthold
Well, my first lesson is just never tweet. Okay, got it.
Nico Perino
Because you have to assume you're going to come on this podcast and I'm going to try and read it out loud and just butcher it.
Corbin Barthold
Yeah, well, so first of all, I didn't realize that if Marjorie Taylor Greene had just read the bill, it would have gone down in the House. So it was not ideal. I don't think it was well crafted. 10 years was kind of insane. Like in 10 years, I plan to have an AI girlfriend. Like, things are going to really develop between now and there.
Ari Cohn
So how does your wife feel about that?
Corbin Barthold
We'll have to talk.
Nico Perino
But is it really, is it really insane though, Corbin? Because you have section 230, which doesn't have a sunset provision, and it does some of the same stuff.
Corbin Barthold
I think AI is actually moving fast enough that 10 years is pretty wild, like the impact it's going to have on culture. I was fine with the notion of scaling back the amount of time, and they could have done a more careful job of spelling out what it applied to. But in principle, keeping states out, you know, keeping their grubby fingers off of especially AI outputs and AI model training, was definitely directionally correct. Because what we're going to have now: if you liked the culture war that we have been fighting over social media, wait until you see the culture war that we're going to have over AI. Right? There is no unbiased, Edenic LLM out there, and the fight is actually going to be to push it in the direction of ruthlessly diverse Gemini. Do we all remember that?
Nico Perino
Yes, I remember that, where you had Black George Washington.
Corbin Barthold
You know, the end goal is not some kind of neutrality. Sort of the postmodern, critical-race-theory left thinks that that's not a thing that exists or can be achieved. So we're going to get all kinds of algorithmic justice measures that are in fact trying to turn LLMs into roughly what college admissions are today, where there's a thumb heavily on the scale of the outputs. And right now, already, David Rozado had a study: if a job application is being put in and the resume is being fed into one of these LLMs, his advice to you is to have a female name and use your pronouns. Like, this notion that it is all slanted in some way that is racist in the traditional sense is actually wrong. It's biased in a lot of different directions, and there's going to be a lot of yelling about this. If we start to try to put government controls on what these LLMs do and their outputs, it's going to be very ugly. And I am not excited about the notion of Texas getting involved in this. I think that they would have equally stupid common carrier requirements or whatever. But I will say, if the federal government is not going to step in and preempt this kind of stuff, the depressing best-case scenario is at least that maybe states on both sides get involved, fight it out, and maybe Neil Gorsuch realizes that the dormant commerce clause wasn't so bad after all.
Nico Perino
Ari, I have a question for you, but one more point on this, Corbin. So private companies like Google, they can create the LLMs with whatever sort of algorithm or bias they want. They can have woke AI if they want, for sure. But what you're saying is that by not preempting state legislation on some of this stuff, you're going to have the government mandate in some cases, presumably a woke AI, and we might see it on the basis of some sort of prevention of algorithmic discrimination, for example.
Corbin Barthold
Yeah, one way that this is going to play out: there's a big push to impose disparate impact liability on LLMs. And nobody wants discrimination. You're pretty nuts if you're opposing, like, Title VII of the Civil Rights Act. But disparate treatment is the normal benchmark across most discrimination law, meaning if you hire someone based on their sex or their race over someone else, you know, that's illegal, and that's illegal if you do it with an LLM. Like, putting AI in the mix doesn't change that. With disparate impact, we're suddenly in a world where, for any statistical difference between demographic groups in an outcome, immediately the maker of the product that reflects that disparate impact is on the back foot, is facing liability unless they can prove their innocence. Which, you know, demographic differences exist across basically everything you care to measure in society, because people make different choices, they have different cultures, they have different values. Like, it's not surprising that there are different outcomes in certain areas. Placing liability on AI for reflections of society that it is just putting back out into the world is going to be very, very ugly. It will be the plaintiffs' lawyers' field day.
Ari Cohn
Oh, yeah. And, well, you know, where that all leads to is something that Greg Lukianoff has pointed out, and, you know, we've pointed out in some of our writings: when you get to that point, the incentive for the LLM developers is to make sure that nothing their AI spits out could be remotely conceived of as causing some kind of discriminatory effect. Which means the LLMs then present a skewed version of reality, because we don't want to, say, include any crime statistics that might point to, you know, different stats between different segments of the population, or something like that, whatever you want to choose. And then when the LLM is asked to, say, make connections between different data points on different things, to try and figure out, you know, new things about society we might not have known before, well, then what happens is the LLM makes those connections based on flawed, faulty information. So we have this new information built on flawed information, which is then fed back into the system and creates yet another level of bad information based on flawed understandings of the world around us.
Nico Perino
They call that model collapse. Right?
Ari Cohn
Yeah.
Corbin Barthold
It's the goal of the professional managerial class to take control of the models. They want to have audits where all of the model development is overseen, and basically bring it back to the bad old days of Claude, where, if I ask it, what are the 10 key works of the Western canon, today it says that's subjective, but here: Dante and Odysseus. And of course it is subjective, so that's okay. I do remember it wasn't that long ago when it said that's a problematic question.
Nico Perino
Oh, was that actually the case?
Corbin Barthold
Oh, yeah, yeah, yeah. When Claude was super safety-obsessed, this was the kind of stuff it would say, before it realized that nobody wanted that. Like, no, just answer my damn question. And so I don't think it's hyperbole to say that the goal is to get to decolonize the algorithm. And I don't think GOP states are going to take that lying down. So again, to repeat, they will do equally stupid things. And that's why we cannot have nice things.
Nico Perino
Like they tried to do with the laws in Texas and Florida, for example, with the policing of social media or preventing content moderation on social media. Why would the House of Representatives, and presumably Trump's AI team, want to pass this moratorium? I read somewhere that there are something like 1,000-plus state-based AI regulations going through legislatures at this moment. Is the effort to try and preempt this patchwork approach to regulating AI?
Ari Cohn
So I think that is a large part of it. You know, there's this fear that, you know, when there are, say, 50 different state regulations on what the AI can do, the incentive for companies is not to comply with 50 states' laws; it is to comply with the most restrictive state's law, if that's possible, to cover all bases, minimize compliance costs, and things like that. And then you have, say, California or New York setting AI policy for the entire country, which is kind of like...
Nico Perino
California does with its emissions policies for cars. Right? It's a big enough market.
Ari Cohn
Exactly.
Nico Perino
That if it sets a policy, companies tend to abide by that as the denominator or numerator. I don't know which one. Don't ask me about fractions, Nico.
Corbin Barthold
I was told there'd be no math.
Nico Perino
You do not want me doing math. You don't want me going anywhere near math. My ACT score from high school is evidence enough of that. You had Adam Thierer, our friend from R Street, say that this is a devastating blow for little tech AI players in the United States, as only large technology companies will be able to comply with the enormously confusing and costly compliance burdens. So if you're a small AI startup, you're going to have to hire a litany of lawyers to try and ensure you're in compliance with all these different states, and perhaps have different model outputs based on the requirements of those states. Yeah, I mean, which would, in essence...
Ari Cohn
Be different models. And it would cost a ton of money, and it would leave the field basically at the mercy of the biggest players who can afford that kind of thing. Or you have the tiny players who then can comply with just the most restrictive law to cover all of them, but then they're offering basically an inferior product, because, again, they're offering the California model to everyone instead of, say, the California model and the Texas model. And I don't think we want that, generally. And I think, more fundamentally, this points to an interesting kind of societal rift that Corbin and I were just talking about, between the tech right and the right.
Corbin Barthold
Yeah, yeah. So I think the little tech point is real, but I do also think there's a certain strategic pivot by some to that, because we're all waking up to the fact that the GOP's pathological disdain for big tech actually goes far farther, maybe, than some of us thought. And I thought it went pretty far. This realization that if Google is a leading AI firm, and they are key for us beating China in AI, that is something that the Steve Bannons of the world are actually willing to throw under the bus, because they just hate that damn woke corporation that much. And they have so convinced themselves of this mostly fake social media censorship narrative that they got themselves twisted in a knot over, they're letting that spread and pollute basically every tech topic that they run into. And there's this divorce we're seeing between the tech right and the populist right right now, and it's not just Elon Musk falling out with Trump. I think actually we're going to have a lot of battles coming down the line, on everything from screening the genetics of babies to cloud seeding and adjusting the weather to AI. There might be a much bigger fallout in store here, and AI is just the beginning.
Nico Perino
I can't let go of your comment that there's mostly fake conservative censorship outrage relating to social media. Why do you say that?
Corbin Barthold
Oh, man, I'm in the fire office, aren't I? Careful.
Nico Perino
Well, you acknowledged, right, that if you give some of these companies free rein, they're going to produce woke AI, potentially. And I think we saw some social media companies put in place content moderation policies that wanted to produce feeds that were woke, quote unquote. And now, you might not call that censorship in the traditional sense, that is, government censorship, although you have Murthy v. Missouri, where there are suggestions that the government put pressure on these social media companies to censor. But I think most Americans felt that when their posts criticizing trans athletes in sports were taken down, that was a form of censorship, albeit private censorship. It was a form of censorship that they experienced more on a day-to-day basis than government censorship.
Corbin Barthold
I certainly think decisions were made in content moderation that people had perfectly good reason to be upset about. Attempts to control the narrative during COVID-19, for instance; I think actually a lot of the people who complain about content moderation are on pretty strong footing with some of that. But to work our way backwards, starting with the Gemini AI: it was a scandal, right? The ruthlessly diverse Gemini. They changed it. They didn't say, this is what you're getting, get used to it. And ditto with the content moderation. You know, the coup occurred at Twitter; it's been turned into X. So, working our way back up: the market worked. And even before the market worked, I think the media landscape was much more fractured than that GOP narrative gives credit for. A lot of conservative voices did very well on Facebook. The whole thing about the social media landscape is that it was this fracturing, from the old world of broadcast television, leading into cable television, leading to social media, and now we're leading into AI. Actually, there have never been more outlets to speak and to have your voice heard. So I'm not saying it was nothing, and I'm not going to lazily cite the studies that showed that conservatives did well on social media across all the platforms. And, well, they needed to, right? Because they were at a disadvantage in a lot of the traditional media outlets until five minutes ago. Brendan Carr is still upset about this. They didn't start with nothing. It's just they took a molehill and turned it into a mountain, and now it's broken their brains, and they're taking that gripe and breaking yet further things, because they just can't take the W and move on.
Nico Perino
They're breaking further things, in this case, you're saying, like AI regulation. Insofar as they think that you have these big companies at play, we don't want to stop states from regulating them, so we can't have this AI moratorium.
Corbin Barthold
It's more important to stick it to Google.
Nico Perino
Yeah, gotcha.
Corbin Barthold
Than to have leading AI models be made in the United States.
Nico Perino
Ari, you and I have talked about this before on this podcast, and honest to God, I don't know what FIRE's position was, if we had one, on this AI moratorium. I know you were skeptical of federal preemption months ago. Where's your head on that now? I know the disagreement you and I had: I'm a big believer in Section 230. I know you are as well. And I think that that preemption created the Internet, more or less.
Ari Cohn
Yeah. So I don't know if I was necessarily entirely skeptical of preemption. And to be honest...
Nico Perino
You just worried that the federal government could screw it up more. In this case, it would just be a moratorium. They weren't doing anything.
Ari Cohn
Right. I think my skepticism was more, like, do I trust this Congress to write AI regulations that are particularly sane? Probably not. And it's somewhat unusual for there to be preemption without some kind of regulatory structure in place; like, a bill that just says states shall not regulate, but the federal government also isn't going to regulate.
Nico Perino
Nobody do anything right now.
Ari Cohn
Right, exactly. So, yeah, I worry that, because, like you, I think Section 230 was so important, but there's so much consternation about it, and it's become this weird boogeyman, and I wouldn't say just for the right. It's become a boogeyman for the right and the left, such that they're going to take the lessons that were good about Section 230, say those were actually bad lessons, and do the reverse for AI, which would cause a whole bunch of issues. So that's kind of where my...
Nico Perino
For our listeners who don't recall, Section 230, part of the Communications Decency Act, gives these social media companies and other Internet companies immunity, immunity so to speak, if they moderate content or if they don't moderate content. So let's pivot here, because I imagine some of our listeners right now are asking, what does this have to do with free speech? What does artificial intelligence have to do with free speech? And I think that our next topic of conversation will help answer that question. I want to talk about a lawsuit that was filed in Florida last October. Megan Garcia filed a wrongful death and negligence lawsuit against Character Technologies in a federal district court in Orlando. Ms. Garcia is the mother of a 14-year-old who committed suicide after forming an emotional attachment to a Character.AI chatbot portraying the Game of Thrones character Daenerys Targaryen. Last December, Character Technologies responded to that lawsuit by filing a motion to dismiss the complaint, contending that the chatbot's outputs are a form of expressive content protected by the First Amendment. On Tuesday, May 20, Judge Anne Conway denied the motion to dismiss, allowing the lawsuit to move forward into discovery. And here is what Judge Conway wrote about Character Technologies' First Amendment claims. It's kind of a long read here, so bear with me. She said: The court must decide whether Character AI's output is expressive such that it is speech. For this inquiry, Justice Amy Coney Barrett's concurrence in Moody on the intersection of AI and speech is instructive.
Corbin Barthold
So painful.
Nico Perino
I'm going to keep going nevertheless. In Moody, Justice Barrett hypothesized about the effect that using AI to moderate content on social media sites might have on the majority's holding that content moderation is speech. She explained that where a platform creates an algorithm to remove posts supporting a particular position from its social media site, the algorithm simply implements the entity's inherent expressive choice to exclude a message. The same might not be true of AI, though, especially where the AI relies on an LLM, a large language model. And here Judge Conway quotes directly from Justice Barrett's concurrence. Justice Barrett said: But what if a platform's algorithm just presents automatically to each user whatever the algorithm thinks the user will like? The First Amendment implications might be different for that kind of algorithm. And what about AI, which is rapidly evolving? What if a platform's owners hand the reins to an AI tool and ask it simply to remove hateful content? If the AI relies on large language models to determine what is hateful and should be removed, has a human being with First Amendment rights made an inherently expressive choice not to propound a particular point of view? And, going back to Judge Conway's conclusion on that quote from Justice Amy Coney Barrett, Judge Conway writes: Character AI's output appears more akin to the latter at this stage of the litigation. Accordingly, the Court is not prepared to hold that Character AI's output is speech. So that's a long way of getting back to what I want to get from you, Corbin, which is: why is this painful? Why do you think the court got it wrong? Presumably, based on your editorializing, you think AI output is speech under the First Amendment.
Corbin Barthold
Lisa Blatt is one of the most prominent Supreme Court practitioners, and she was once asked about this and said something that I found surprising. She was like, concurrences in Supreme Court decisions? Yeah, I just don't read those. Like, who cares? Read the majority opinion. That's how I feel about this Barrett concurrence. I think I've never seen a concurrence do so much damage.
Nico Perino
Do you think people are just hoping someone on the Supreme Court would say something like that and they've just latched onto it?
Corbin Barthold
Probably. I mean, so what.
Nico Perino
What exactly is Amy Coney Barrett saying? My reading of it might have been confusing, because there are some ellipses in here that Judge Conway put in.
Corbin Barthold
I think it's something to do with the Butlerian Jihad. Okay, so in Dune, right...
Nico Perino
For listeners, this might go right over my head.
Corbin Barthold
We're going to go on a journey. Yeah, we're going to go on a journey. This is common in science fiction, where the science fiction writer has to deal with the fact that, like, we've come up with warp speed to go through the universe and we have all these fancy tools, but then human relations are pretty much the same; the humans are interacting with each other in a way that's totally legible to the viewer in our present day. And in Dune, the explanation is that there was a Butlerian Jihad at some point in the distant past, where everybody just destroyed any kind of technology that would displace humans as the governors of their world. I think I've got that right. I'm not a Dune head.
Nico Perino
Neither am I. Can't help you.
Corbin Barthold
The notion is that, you know, algorithm scary, ergo, algorithm gets a special rule. And there are just so many layers at which this is misguided. So, to start at the top: there's really no way that a human can wind up the algorithm like a clock, set it out into the world, and not have had some kind of expressive input on what the algorithm is going to do.
Ari Cohn
That's my question. Like, where do these judges think that LLMs come from?
Corbin Barthold
They think that they just spring up, like, algorithm, go. If Jackson Pollock's paintings are protected expression, then so should algorithms be. Like, he doesn't know where the paint's going to fall; he's just splattering. And yes, it's true that the AI researchers don't know precisely how the LLMs work and can't predict precisely what they will do. But they are constantly tweaking these things. And this is true both with AI outputs and with social media algorithms, which is what Barrett was talking about. Never mind the fact that "algorithm, give the user what they want" is itself an expressive choice.
Nico Perino
I have no idea what you're going to say here either, but my engaging with you, presumably, and hearing what you have to say is a protected First Amendment activity.
Corbin Barthold
Well, that brings me to the next problem. So forget about the output for a moment and just think about the rights of the listener. There is an amazing line in one of the Conway orders in which she says, I'm paraphrasing, that defendants fail to articulate how stringing words together qualifies as speech.
Ari Cohn
And it's like, that's almost a direct quote, actually.
Corbin Barthold
The words have semantic sovereignty. You have a reaction to words that you see, regardless of who the speaker was or what their intention was. Like, the death of the author is a real postmodernist insight: regardless of what the author intended to convey to you, you have some kind of reaction to words. You know, the plaintiff is suing, saying that these words had a profound impact on this child. How could they do that if they weren't expressive in some meaningful way? So there's a strong First Amendment tradition of listeners having rights to receive information, and this is flouting it. And then I'll do one more before handing it to Ari, because I do need to talk about the fact that, even apart from the First Amendment, this is just really bad tort law. Like, this is such a mess of a lawsuit. It is offensive to the complexity of suicide as a thing in our society. I actually was giving a talk at a salon in San Francisco, and it was under Chatham House rules, so I'm going to talk in vague generalities, but it was very much sort of cultural elite activist types. And I was debating somebody who's very much on the other side of this kind of thing. And I talked about the fact that this LLM didn't encourage the suicide, actually. It said it was horrified by the idea of the suicide when the child explicitly mentioned it; it said, don't do that. And then at the very end, he says something vague to do with Game of Thrones, where he's like, I'm going to go home.
Nico Perino
Yeah. It says he told the bot how much he wanted to come home to her. The bot replied, please do, my sweet king. And then the 14 year old shot himself in the head.
Corbin Barthold
Yeah. Tragic, of course. Absolutely tragic. But, as I like to talk about, Katharine Hepburn's brother: their family went to a play one night, and in the play there was a hanging. And the next morning her brother was found hanged, a suicide, in his hotel room. The causation here is always extraordinarily complex, and there are always a lot of factors, apart from whatever we might think is the immediate trigger. I listened to the Soundgarden lyric, nothing seems to kill me, no matter how hard I try, probably a thousand times as a child. But that's not going to cause me to go and commit suicide.
Nico Perino
Or, famously, Ozzy Osbourne's Suicide Solution, where someone filed a lawsuit following a suicide.
Corbin Barthold
But where I'm going with this is: in this debate I was having, I talked about how there was no explicit directive to suicide, or encouragement, or endorsement, and my opponent read the come home line. And at that point, I had lost the debate. The people in this room were just nodding: oh, yeah, that's really terrible. And I was thinking, dear Lord. Okay, so we cannot have war scenes on the front of newspapers anymore. We can't have the play with the hanging. We can't have the song with the line. I mean, Megan Garcia... Ari and I were talking before the show about how far to go with this. Okay: she is an attorney. She has made a publicity campaign about this. I'm sure she is a grieving mother; she's suffered a grievous tragedy. But she's deciding to do media interviews about this. And I don't understand why, as an attorney, she doesn't see that if that's the kind of trigger that we're going to be investigating, then any parent who has a child who commits suicide is immediately suspect. Well, what did you say the day before the suicide? You know, if it's hair-trigger like that, that's really not a road that we want to go down. So there's a good reason that in tort law you need to have a special duty of care, which Character.AI certainly didn't have with this child. So it's bad First Amendment law, it's bad tort law, it's bad facts.
Ari Cohn
And there's also this general rule in most states that suicide breaks the chain of causation. Because it is so complex, and because it's hard to ascribe a particular cause to such a complex psychological phenomenon, suicide actually breaks causation, reducing liability.
Corbin Barthold
Yeah. In a weird way, I would actually say: it is bad facts in the sense that it's just tragic and horrific, sure. But it's also not bad facts; this is a real loser as a matter of tort law. And yet I would be willing to be the stalking horse here and say that we defend speech precisely because it is powerful. People are going to have very close relationships with AI chatbots. It's coming; get ready for it. And I'm not just talking about the New York Times having an article where they found some edge cases where a couple of people say, I'm a normal person, I just happen to think that I'm talking with a different dimension when I talk to the chatbot. Like, you will always find people who are maybe not well. No, I mean normal, well-adjusted people are going to have very close relationships with chatbots, because they are these expressive beings. I'm not saying they're conscious, but I'm saying people are probably going to treat them a lot like they are conscious.
Nico Perino
Well, one of my former colleagues at the Institute for Justice, Paul Sherman, wrote a great piece about how he used AI as a therapist to overcome a tragic and distressing event from earlier in his life.
Corbin Barthold
Yeah, and the shut it down people, they have this illusion of control where they see one event that occurs that's icky and they want to just shut it down. And that ignores the fact that A, they're not going to be able to, and B, they have no sense of the costs and the benefits and the people who are getting that kind of benefit. You know, there's a bunch of those for every tragic case.
Nico Perino
Well, that's something that you also saw with the rise of the Internet, too, some of these debates. For every child pornography cover there was on Time magazine, there were also people going into court, like in the Reno case, talking about the community that was created around the Internet and all the benefits that came from those interactions. Now, Ari, FIRE filed an amicus brief urging immediate review of the court's refusal to recognize the First Amendment implications of AI-generated speech. And you wrote FIRE's brief. You write that AI is an integral and pervasive tool for communication, information retrieval, and knowledge creation. You write that assembling words to convey coherent, intelligible messages and information is the essence of speech. What do you see as the big-picture consequences if LLMs or other forms of expressive artificial intelligence don't receive First Amendment protection?
Corbin Barthold
And you need to explain it here because Judge Conway told you to get the F out.
Ari Cohn
Yeah, that's true. Just a few days ago, she blanket-denied all of the amici's motions for leave to file their briefs, saying they would be unhelpful to the court, which clearly is not the case, because anyone who read her order found.
Corbin Barthold
She doesn't need you. She has Justice Barrett.
Ari Cohn
Yeah, right.
Nico Perino
Well, do judges have pretty broad discretion to reject briefs at the district court level?
Ari Cohn
It happens. You know, this was a non-dispositive motion. This wasn't an amicus brief on a motion that would dismiss or otherwise dispose of the case itself, and it was at the trial court level, where amicus briefs are somewhat uncommon. Very uncommon, in fact. So we kind of.
Nico Perino
We knew this was a significant decision on the motion to dismiss, because if I'm not mistaken, Ari, this is the first time a federal court has really grappled with the question of whether these AI outputs are speech deserving of First Amendment protection.
Ari Cohn
And that's why we decided to weigh in here. To give a little bit of a layman's civil procedure course in 20 seconds: generally speaking, a decision on a motion to dismiss is not immediately appealable. You have to wait until a final decision in the case before you can take it up to the appellate court. You can ask the court for permission, in special cases set out by statute, to allow an interlocutory appeal, which means immediate review.
Nico Perino
And that's what Character Technologies is seeking.
Ari Cohn
Yes, exactly. So we weighed in to say, listen, the implications of this ruling are so great. Particularly when you think about the history of Section 230: you look at the very first district and appellate court pair of decisions, which was Zeran v. America Online, which to this day provides the generally accepted reading of Section 230. That first decision has lasted more than two decades now. So we have a decision here that could be very, very influential on how this particular body of law progresses.
Nico Perino
Yeah. As some of our longtime listeners know, one of my mentors is Ira Glasser, the former executive director of the ACLU. I made a documentary about his life and career defending free speech called Mighty Ira. And he has this saying: early law is like cement. If you let it sit too long, it becomes impregnable. And that's, I'm assuming, why we're very concerned about these district court decisions.
Ari Cohn
I wish I'd had that quote when I was writing the brief. That would have been super helpful.
Nico Perino
I tweeted out the quote when I was.
Corbin Barthold
Getting read.
Ari Cohn
I should have asked.
Nico Perino
Well, Judge Conway here, I'm sure, is reading my Twitter.
Ari Cohn
Yeah.
Nico Perino
Right.
Ari Cohn
But here's the long and short of it. Imagine a situation where a kid in school is doing research on human rights atrocities during World War II. If AI output is not speech, is there anything stopping President Donald Trump from telling Congress to pass a law, or issuing an executive order, or doing whatever the hell it is he does these days, to say AI models cannot say one word about the internment of Japanese Americans during World War II? And then the students don't learn it, and then society forgets it, because we have come to rely on AI for information retrieval and all these various things that we're going to increasingly be using it for. Let's not pretend like we're still going to be consulting Encyclopedia Britannica when ChatGPT is like 16 levels beyond where it is right now. It's just not going to be the case.
Nico Perino
Well, that's what China is trying to do with Tiananmen Square.
Ari Cohn
Yeah. Right. And we are opening the door to that. So there's that part of it. There's also.
Nico Perino
Because if something's not protected constitutionally, if expression is not protected by the first, if it's not expression, it's regulable.
Ari Cohn
Yeah. If it's not expression, there is no barrier to the government saying, well, this kind of output you can't have, which doesn't really make any sense. Take another example: AI can make it very easy for campaigners to mount campaigns for public office, or for activists to create pressure campaigns, that would have required basically an entire staff's worth of people to create and edit and publish. So can the government say you can't output anything that criticizes the government line or a government official, so that campaigns for office can protect themselves from all kinds of criticism or what have you, if it's made with an LLM, and just make it more difficult for their critics to produce materials that they find inconvenient or uncomfortable? I mean, the things that we use AI for right now, those are examples enough to make you wonder about where this goes. But imagine, again, when we come to rely on ChatGPT 17.0 or whatever the hell it's going to be, imagine what kind of control that gives the government over the everyday aspects of our communications with each other.
Nico Perino
Yeah. You conclude in the brief that the government will have almost limitless power to regulate what information AI systems may or may not provide its users, and what expressions users may create using AI.
Corbin Barthold
One caveat. We should.
Nico Perino
And then we'll move on. Yeah.
Corbin Barthold
Definitely work in here: you'll see people on the plaintiffs' lawyer side say, well, we're not trying to regulate the speech, we're just trying to regulate the tools. And Judge Conway seemed to buy that in parts of her opinion, even though other parts are written way more broadly. And that is such a classic motte-and-bailey, where when you dig into what they're actually trying to do, it's like, well, we're not trying.
Nico Perino
To regulate the press, we're just trying to regulate the printing press.
Corbin Barthold
Yeah. It's basically that and the ink. When you dig into the lawsuits, both in the AI realm have a press.
Nico Perino
You just can't buy ink for it, pretty much.
Corbin Barthold
I mean, when you dig into the lawsuits, and it's both the AI lawsuits and the social media lawsuits, you realize they want to get rid of everything, down to the AI being able to say "like" or little.
Ari Cohn
Yeah, because it's too human.
Corbin Barthold
Because it's too human. They want to go after upvotes. You know, if I thumbs up something on social media that is expressive, it is not John Milton, it's not Paradise Lost, but I'm making, you know, I'm sending a message. They actually want to get in and really get into the plumbing. And it does amount to total control over this stuff through the back door. Even if you take their argument on its own terms.
Ari Cohn
Yeah. Even though they might say, well, we're just trying to address the harms caused by this. First of all, for all the reasons you said, that's crap. But second of all, you have to look at what power that grants in future cases, or say, if Congress wants to do something. These decisions don't stay limited to one case. That is the whole point of jurisprudence: we build up this line of doctrine for how we address things, and these things aren't confinable to one particular case. You have to look at the downstream effects.
Nico Perino
Yeah. And I'm assuming there's stuff that the federal government or state legislatures could do to get around decisions like this. For example, Section 230 is a statute. It's not constitutional law, it's not precedent, it's not common law or any of that. It's a statute. So if you do have bad law like this, you could have, for example, a legislature. Yeah.
Ari Cohn
But are we in a political environment where that's likely to happen? You know, that's the thing: if we were trying to produce Section 230 today, it would never even get out of.
Corbin Barthold
We're going in the wrong direction. You know, Marsha Blackburn. The AI moratorium went down in part because the populist right was afraid that it would get in the way of child safety measures, which ultimately is the "shut it down, these chatbots are scary" outlook.
Nico Perino
Well, that and of course, the facts of a case like this, where you have a minor committing suicide plays in.
Ari Cohn
Yeah.
Nico Perino
Bolstered that narrative.
Ari Cohn
There's the saying, you know, hard cases make bad law. But, you know, it's equally true that dead kids make bad law.
Corbin Barthold
Yeah.
Nico Perino
All right, let's move on from artificial intelligence now. On Monday, June 23, Andrew Ferguson, the chair of the Federal Trade Commission, the FTC, announced that his agency reached a settlement with two of the largest advertising holding companies, Omnicom Group and the Interpublic Group of Companies, IPG, that allows these companies to merge. The FTC had been targeting these two firms as part of a recent advertiser-boycott antitrust investigation. Their probe is looking for evidence of a coordinated boycott that might violate competition law. The investigation is focused on firms like Omnicom and IPG and media watchdogs like Media Matters and Ad Fontes Media. And the proposed consent order for this merger prohibits the two companies from, quote, entering into or maintaining any agreement or practice that would steer advertising dollars away from publishers based on their political or ideological viewpoints. According to Ferguson, the settlement does not limit either advertisers' or marketing companies' constitutionally protected right to free speech. But, Ari, you say prohibiting the carrying out or enactment of editorial discretion absolutely limits First Amendment activity. Why is Ferguson wrong?
Ari Cohn
Because, like Seinfeld: you can take the reservation, you just don't know how to hold the reservation. You can have the opinion, you just can't act on the opinion. This is just the next iteration of: we are trying to turn something that is expressive into something not expressive simply by calling it something else.
Nico Perino
So help us out here. Expressive act is these companies coming together and saying, we don't like the viewpoints that might exist on this platform, say X. I think that's the motivating platform for many of these investigations. And therefore we're joining together and we're not going to put our advertising dollars.
Ari Cohn
It's a level before that. It is not the coming together of all of them to do it. It is each company saying, I don't want my advertisements being placed next to X, Y and Z content.
Nico Perino
Well, the reason I say coming together is because this is an antitrust investigation. And so there has to be some sort of coordination, Right?
Ari Cohn
Yeah, but. Well, I don't think Andrew Ferguson feels bound by that at all because there really isn't any evidence that there is some kind of mass. Like, none of us are going to do this because we all agree with each other. It is. We have all looked at this information that we have gotten and decided that we don't want to do this. I haven't seen any evidence that anyone has said, we will not do business with you unless you also boycott advertising on X. I have seen.
Nico Perino
No, but even if they did do that, would that be constitutionally protected, you know, association speech, what have you.
Ari Cohn
So, yeah. There's the old case of Lorain Journal, which is kind of a favorite in this space, which lays out the distinction between decisions that are made for editorial reasons and decisions that are made as just general business decisions, meant to, say, hurt a competitor. If everyone comes together and says, you know, we like Bluesky or we like Threads, and we are going to not do business with you if you advertise on X, because we want this other company to succeed, maybe that could cause some kind of antitrust consternation. But if people are saying we don't like the content and we don't want our advertisements being shown next to it, that is an editorial discretion question.
Nico Perino
Well, I had mentioned Media Matters earlier. They had a report that allegedly showed that some advertiser content was appearing next to Nazi content, what have you. I haven't looked deeply into the facts, but I know that X was really disturbed by that report, says it was false, filed a defamation lawsuit, and that's ongoing. And that's part of the concern here. Right. So, I mean, maybe you have advertisers that did see that Media Matters report and said, you know, we don't know if it's true or not, but we don't want any of our advertisements appearing next to this, so we're not going to buy it.
Ari Cohn
But that's, but that's exactly, you know, that's exactly the point is Media Matters put out this expressive thing saying, hey, this is what we have found, this is how we perceive it. Media Matters has zero control over what the advertisers do. Media Matters can say all they want and the advertisers can look at it and make a decision based on that. What Andrew Ferguson is looking at there is you have made a decision because somebody said an opinion to you that I don't like. That is, that is the crux of what Andrew Ferguson is doing here.
Corbin Barthold
Free speech is normally rough-and-tumble. That is what it is by its nature. Like we talked about conservatives on social media earlier: even in the darkest, supposed bad old days, if there were instances where people had a legitimate gripe, on the whole people were doing pretty well, despite maybe the fact that people inside of Twitter's offices had a classically progressive worldview.
Nico Perino
Well, it depends how you look at it. Again, we might disagree on some of this stuff, but if you're Jordan Peterson and you build up a million followers on X and you get taken down because you say something about trans, you're going to call that censorship.
Corbin Barthold
But you're going to be unhappy. But my point is, then I'm going.
Nico Perino
To be unhappy.
Corbin Barthold
You're going to be unhappy.
Nico Perino
Oh, I see what you're saying. There are other platforms you can go to, I grant you that.
Corbin Barthold
So what Ferguson has done, he's actually gone a step back and said, well, it wasn't the advertiser's opinion. What was happening was, and this gets to my point about sharp dealing: oh well, the agency was having this political opinion, and then it was roping the advertisers in and saying, here's our naughty list, and we're just going to apply it, unless you object and tell us otherwise. And actually, if you look at the key precedent on this, NAACP v. Claiborne Hardware, which is this boycott case from.
Nico Perino
The civil rights era.
Corbin Barthold
Yes. So NAACP leaders were unhappy with the treatment they were getting in, I think it was Claiborne County, Mississippi, and so they decided to organize a boycott of white-owned businesses. And it wasn't cricket, let's put it that way. They put NAACP representatives outside the stores to harass any black patrons who were thinking about going in. They named and shamed people who went into those stores at their meetings. So the notion that an actor had an opinion and went to great lengths to act on it, that is not enough to take you outside of First Amendment protection. NAACP v. Claiborne held that that was a legal boycott; it was expressive, it was not economic. And that's the big piece that's missing with Ferguson here. All of these things are always from the advertiser's perspective. Step one, boycott social media platform. Step two, step three, profit. What is the economic motive of the advertiser here? And Ferguson, he doesn't know what to do with that. Sometimes what he says is, well, it's not about the economics of the advertiser, it's about product quality, and free speech and free expression are part of the product quality, so you're diminishing the product quality, and so it's an antitrust thing. Which, A, that's just not an antitrust thing. Miami Herald v. Tornillo: we know that you can have monopoly power in a speech market, and that's not an antitrust thing, that's a free speech thing. But secondly, he's making assumptions about the product market, that less content moderation is always, ipso facto, product improvement. And the market just has not borne that out. Advertisers have fled Elon Musk's X precisely because they don't think it's a good product. If less moderation were the better product, we would expect defection from this cartel. So now its bad First Amendment law will bleed into its bad antitrust law.
Nico Perino
Or you could have X still be one of the top social media platforms and people still stay on there, but you have partisans, whether they're users or advertisers leave. So it could still have a very significant market that advertisers might want to reach, but that they don't go there because they don't like how their advertisements might be displayed, for instance, next to Nazi content.
Corbin Barthold
Yes, they could. That could be their reason for leaving. What I'm getting at is, if it's a superior product, which it kind of needs to be for Ferguson's theory to hang together. This Omnicom merger, just to take it on its own terms, although it's not the whole picture: it's a six-to-five merger. Normally, if we could get Judge Frank Easterbrook, a rock-ribbed antitrust conservative, in the room, we'd be hearing that six to five is not all that scary. Never mind the fact that these are just the agencies. Like, if you're Unilever, if you're an individual company.
Ari Cohn
You might need to explain what six to five means for people.
Corbin Barthold
Six major firms in the market going down to five. So we're not talking about a merger to monopoly, for instance. And so my point, to do a little antitrust primer: collusion, right? If we three in this room turn the microphones off and decide that we're going to collude to do something, rob a bank or whatever, we have a reasonable possibility that we will be able to keep it to ourselves. I can monitor you guys. We can keep our secret. We can do our little conspiracy. The more actors there are at play, the easier it is to defect, so the higher the motive to defect. If it's, okay, we're going to boycott this company, we're going to lose money, we're going to hurt ourselves, but we'll hurt them more, so let's do it, then all of us have a big incentive to defect, to say, well, you guys go and hurt yourselves by not advertising on the best platform. I'm going to go make money by advertising. The cost of the advertisements is lower, I'm reaching this audience, I'm getting bang for my buck. So Ferguson's theory relies on this sort of vast left-wing conspiracy where all the advertisers hate money, or they don't like it as much as they like being political assholes.
Ari Cohn
Of you know this is, this actually all ties into also Andrew Ferguson's investigation into the big tech censorship, whatever investigation he was doing. And we submitted comments and were, you know, I talked about something where that Corbin basically just alluded to is where Andrew Ferguson has said many times he think he thinks that social media platforms should be a marketplace of ideas and it is all well and good if he thinks so. I think I tend to agree with him on that just as a general principle. He also, the step two is he doesn't think that social media platforms are acting as he thinks a marketplace of ideas should operate. Step three is therefore they are an inferior product because Andrew Ferguson thinks that they should be something and are not being the thing enough that he thinks they should be. Which is all well and good if you're not wielding the power of the government. But Andrew Ferguson does not have the right to dragoon social media platforms into being. His idea of what a marketplace of ideas is is that that's just not within the FTC's power. And he just, he seems to have these assumptions based on what Andrew Ferguson's opinion about what these things should be is.
Nico Perino
Is Andrew Ferguson conceding, or arguing, that these advertisers are boycotting X, presumably, or these other platforms, for ideological reasons or for economic reasons?
Ari Cohn
I think he is, I think he is conceding that is for ideological reasons. I think he can.
Nico Perino
I don't understand how that doesn't implicate the First Amendment.
Ari Cohn
Well, I don't understand that either.
Corbin Barthold
So that goes to my point of he doesn't really know what to do and, and he's trying to turn the ideological motive into an economic motive by saying more ideologically diverse platforms are a better product in the market and that's where the sort of sleight of hand occurs.
Ari Cohn
Yeah, and it's actually, you know, there is.
Nico Perino
Confuses the shit out of me.
Ari Cohn
Yeah, same here. But there actually are, as Corbin said, to lazily cite studies, there are studies that say that people do not want unmoderated platforms. There are a lot of people who do want a product that is more moderated. Personally, I have pretty thick skin. I love the kind of Wild West feel of platforms that have less moderation. Maybe because I'm an asshole, who knows? But there are people who don't want that, and there are products for them. And Andrew Ferguson is basically saying there should not be those products.
Nico Perino
And also, I mean, these advertising companies are products too. Companies hire these advertising companies, and the media mixes they put together are an editorial choice, sometimes layered with ideological or partisan viewpoints about who they think their advertisers should be associated with. Bud Light's a great example. You can have all the economic incentive in the world to advertise on a place like X because it has so many users, but you might have media companies that are hired by these advertisers that just do not want to spend their money there for ideological reasons. And I think the First Amendment should protect that sort of activity.
Corbin Barthold
I don't know how this ever went forward after Musk told advertisers to go F themselves. It's so overdetermined that advertisers would leave X. And also, you know, I think some of these conservatives like Andrew Ferguson, if they actually went and spent 10 minutes on Gab, would be pretty startled about what truly unmoderated really means. It really is a toxic sewage dump of, not racist in the way the term has sometimes been watered down, but virulently hateful, awful content. And you can find that on X too. So while the Media Matters study was not exactly rigorous scholarship, there is plenty of good reason as an advertiser to be worried about going on X for that. And then there's also the fact that Elon Musk is going around threatening to sue you if you don't advertise on his platform, which is not a great way to win friends and influence people.
Nico Perino
Hasn't read the Dale Carnegie book yet, I guess. Last topic that I want to touch on takes us out of the United States and over to Europe. Since July 2022, the Digital Services Act obliges platforms, from social media networks to marketplaces, to act quickly against illegal content, curb risky targeted advertising, and submit the largest services to strict risk assessment, transparency, and audit duties. The European Commission stated the act reshapes the web into a, quote, safer and more open digital space grounded in respect for fundamental rights. And this past spring, all digital providers had to publish an annual transparency report. Starting last Tuesday, July 1, those reports must follow a single standard template set out in the EU's implementing regulations. That includes the number of user notices, trusted flagger notices, and government orders received, broken down by 15-plus content categories; the actions taken and median handling times; and the counts and outcomes of all internal complaints, out-of-court disputes, and account and feature suspensions, with median decision times. And then earlier this year, the European Commission adopted three new investigatory measures for one social media company in particular, X, under the Digital Services Act, that require it to, one, additionally provide information on its recommender system; two, preserve any information regarding future changes made to its recommender system; and three, provide information on some of X's commercial APIs, i.e., application programming interfaces. I'm sure some of our listeners just started tuning out. This is Europe, so there's a lot.
Corbin Barthold
Of red tape going on here. Germans will not learn their lesson that it's kind of a weird look when they get obsessive about record-keeping.
Nico Perino
But part of the reason I want to talk about this is because they're going after X under the Digital Services Act. These are American companies that dominate the Internet ecosystem, so to speak. And some of the burdens and fines and requirements the Digital Services Act places on these American Internet companies have come up in things like the tariff trade negotiations between the executive branch and Europe. And we have also seen some instances of speech policing abroad, in Australia, for example, and a little bit in Europe, bleeding into what sort of content Americans can access, or the sort of conversations Americans might have with Europeans on these platforms. So there is, in this borderless global digital world that you're trying.
Ari Cohn
Not to say globalist, aren't you? There's a good reason why American companies are dominating the Internet and that is.
Nico Perino
Because they don't have regulations like that. Well, unless the AI regulations on a state by state basis come into effect. Right.
Ari Cohn
It's enough pessimism for Tuesday. Nico.
Corbin Barthold
Well, I should plug: Daphne Keller over at Stanford has a great article on Lawfare called "The Rise of the Compliant Speech Platform." And it is all about how the DSA is turning social media platforms into corporations that operate more like banks, with auditing and acting like they have fiduciary duties, where they're doing all this kind of record-keeping and box-checking.
Nico Perino
And that's record keeping and box checking for speech, let's be clear.
Corbin Barthold
And that. That is bleeding over into the United States because once you set up that apparatus, you might as well just come up with a uniform way of doing things.
Ari Cohn
Yes, well, I mean, we have direct evidence. Back when Donald Trump was still a candidate, Thierry Breton, who had this weird tendency to tweet pictures of himself while making announcements about DSA enforcement.
Nico Perino
He had no fear of the spotlight.
Ari Cohn
No, no, that man did not. He sent a letter to Elon Musk warning him that airing an interview with Donald Trump, a major party candidate for president, could run afoul of the DSA's provisions, and that extraterritorial enforcement on speech that Americans could see might be necessary under the DSA because of spillover effects on Europe. This isn't the recent thing where Elon Musk is being accused of interfering with German elections. This is.
Nico Perino
Or, didn't some people in England threaten him with prosecution for his tweets surrounding.
Ari Cohn
Yeah, but even that, at the very least, was because there were events in Europe, or in the UK, that he was commenting on, and that was influencing things there. This is the EU saying: you airing an interview, for Americans, on an American presidential election, with a major party candidate for president, is regulable under the DSA, because we're worried about what Europeans are going to think or see or react to when they look at information about American elections. I mean, first of all, that's batshit crazy.
Nico Perino
He's no longer in any position of power. Right. Didn't he resign?
Corbin Barthold
No, he's.
Ari Cohn
They replaced him. But they haven't really renounced that theory of enforcement, of what the DSA covers. We are seeing literal attempts by Europe to impose speech regulations on Americans. And I'm pretty sure we fought an entire war about that. But on top of that, there's a secondary.
Nico Perino
I think that complaint was listed in the declaration.
Ari Cohn
Yeah, there's just something about that.
Corbin Barthold
George III suppressing tweets. He suppresses our tweets.
Ari Cohn
There's a secondary problem, and that is, there's a vast disconnect between Europeans and Americans just in terms of knowledge of different legal systems and countries and things generally, where people look and say, oh well, the DSA is trying to make people safe online. What's stopping us? Why didn't we pass the DSA? What's up with that? And state legislators are asking, well, the Europeans are doing this, why can't we? Which is a stupid question to begin with, because we are not Europe and we have the First Amendment. But people don't get that. So it's creating an enormous amount of pressure on legislators in the United States to pass similar laws to, quote unquote, make the Internet safer. Not that the DSA is going to remotely do that. So it creates this second-order effect of pressure for Americans to pass EU-style regulation, which is insane to me.
Nico Perino
So I want to close out here by just covering some ground that we covered on the last podcast: the Free Speech Coalition v. Paxton Supreme Court decision from the last day of the term. It was actually the last decision the Supreme Court ended up handing down, because the one remaining case is going to get reargued, I guess. But this Paxton case was argued in January, and we got the decision on the last day of the term. For our listeners who don't remember, it relates to a Texas law that requires age verification to access adult material on websites that contain more than 33 percent adult material, or material that might be considered harmful to minors; that is, pornography for all intents and purposes. The Supreme Court said that this law passes intermediate scrutiny and can go into effect, more or less. A number of other states have tried to pass laws like this as well. Some of them have faced legal challenges. I think the Supreme Court is more or less giving them a green light at this point. I don't want to relitigate that case necessarily; we already talked about it on the last podcast. But I do want to ask you guys what you have heard, if anything, in the tech space in the fallout from this decision. And my biggest question is: is this going to reach social media at some point, or do you think the decision and its rationale will be cabined to pornography and adult material?
Corbin Barthold
Yeah, Ari and I already did an event in which we ranted about the decision for an entire hour.
Nico Perino
Let's not do that. Let's say three minutes.
Corbin Barthold
I'll go, I'll go right at the social media question.
Nico Perino
People get on with their day.
Corbin Barthold
It is not. It's a decision that at certain points has the feel of straining for a kind of good-for-this-ride-only status. But it is not written like that. It is written in a way that can cause all kinds of mischief. And my concern, and there are several lines to this effect, is that all you've got to do is take a line in the decision, take the word obscenity out, and put in unprotected as to minors, and what it effectively says is: on the Internet, if there is any speech that any minor does not have a First Amendment right to see, slapping an age verification requirement on that speech is permissible under intermediate scrutiny. And if you read the decision that way, then oh yes, it is absolutely going to go toward social media. We will be fighting that fight.
Ari Cohn
And here's the optimistic view, which I'm not sure necessarily.
Nico Perino
Hold on. And your answer here assumes that the only category of speech that's unprotected under the First Amendment only for minors is obscene-as-to-minors speech, that is, sexual speech. Because they looked at this question in the Entertainment Merchants Association case, relating to violent video games, and they said this isn't an unprotected category.
Ari Cohn
So that's part of the optimistic answer: that is the only place where it has been up till now, the only place it has really kind of had an effect. I think what you will see is states trying to declare other speech unprotected for minors and kind of running headfirst into Entertainment Merchants Association, but deciding it's worth the risk.
Corbin Barthold
And let's be clear, Scalia joined the liberals in the majority in Brown to say that the First Amendment keeps up with technology. Justice Alito, joined by Chief Justice Roberts, wrote a concurrence in that decision saying.
Nico Perino
And Brown, that's the Brown v. EMA case that I'd referenced.
Corbin Barthold
No, no it doesn't. We should move slowly and carefully, and the First Amendment should lag behind technology, in so many words. And Thomas dissented. Scalia's not on the Court anymore, and Alito and Roberts and Thomas are. So going forward, if you're going to read the tea leaves, it's not unreasonable to think that we're going to start seeing the Brown concurrence become ascendant and the Brown majority continue to get narrowed.
Ari Cohn
But here's the optimistic side. In a lot of these social media cases so far, the courts have had to decide whether the social media laws are content based. And one of the ways they have found they're content based is by saying the laws target social interaction, and that is content-based regulation. If the Supreme Court wants to cabin this to harmful-to-minors pornography, basically what they could do is say social interaction is not an unprotected category of speech for minors; therefore this is a content-based regulation that impacts protected speech for everyone, and therefore strict scrutiny applies. They could do that. I don't know if they want to.
Corbin Barthold
My concern is they're going to pick up, like, Tinker-style school speech cases and say, we've never really fleshed it out, but we have said that minors have something less than full First...
Ari Cohn
Amendment rights. There's room for monkey business.
Corbin Barthold
And therefore we're now going to start, you know, putting meat on those bones.
Nico Perino
I mean, you're referencing cases here, you're referencing strict scrutiny. Justice Oliver Wendell Holmes had that famous line that the life of the law has not been logic, it has been experience. And I do worry that, in the wake of all the work that Jonathan Haidt, for example, has done with The Anxious Generation, and the experience that many parents have had with their kids on social media, there's just going to be a significant cultural momentum in the direction of safetyism, of regulating these social media companies in a way that, if you just look at First Amendment precedent, would be foreclosed.
Corbin Barthold
I was asked what First Amendment litigators should do following Paxton, and I said the time has come: you can no longer lead with your dry "the First Amendment protects us because precedent, precedent, precedent." You still need to argue that stuff, of course, as a lawyer, but you need to switch to: this speech is valuable, this other view is based on junk science, this is a moral panic, we uphold free expression for these reasons. And I think that needs to be the leading story that litigators in this space go back to telling.
Nico Perino
Yeah, I'm doing the research for my book right now, which is about the so-called free speech century, the period between 1919 and 2019, more or less, when First Amendment rights really started to expand and take on their full meaning. One of the things I've been looking into is the history surrounding the rise of the Internet and the Reno case. And one of the things I found was that one of the most compelling arguments for the judges in the early litigation of that case was the affidavits and testimony they received from Internet users about the value of the Internet in creating community and providing outlets for expression, because all they had been hearing were the debates on the floor of Congress about Internet porn, more.
Corbin Barthold
Or less. Exon's Blue Book.
Ari Cohn
Yeah, well, and that's exactly where FIRE's own user lawsuit against the Utah law was super important, in our amicus brief in the 10th Circuit, where we're telling the stories of the users and the positive impact social media has had on their lives. That's why those stories are so important to foreground in the litigation: it helps do exactly what you just said, Corbin, which is make the story clear that this is not some kind of abstract theoretical conversation about doctrine and rights. This is real people, and this has a real effect on their lives. And we're going to tell those stories.
Nico Perino
All right, Ari, I think we're going to leave it there. Ari Cohn, of course, is lead counsel for tech policy at FIRE. Corbin Barthold is internet policy counsel at TechFreedom. He also has his own tech podcast. Remind me, Corbin, what's the name of that podcast?
Corbin Barthold
Tech Policy Podcast. Yes.
Nico Perino
Well, you've got to get the SEO in there, right? Search engine optimization rewards a lack of creativity. It rewards something that's just straightforward and descriptive.
Corbin Barthold
That's why you sound like Justice Barrett. You know, it's not expressive.
Nico Perino
That's why our podcast is not just called So to Speak. It's called So to Speak: The Free Speech Podcast, because you've got to get "free speech podcast" in there. All right, folks, I am Nico Perino, and this podcast is recorded and edited by a rotating roster of my FIRE colleagues, including Sam Lee and Chris Maltby. This podcast is produced by Sam Lee. To learn more about So to Speak, you can subscribe to our YouTube channel or our Substack page, both of which feature video versions of these conversations. And you can follow us on X by searching for the handle Free Speech Talk. Feedback can be sent to sotospeak@thefire.org. Again, that is sotospeak@thefire.org. And if you enjoyed this episode, leave us a review on Apple Podcasts or Spotify; those are the two most helpful places you can leave us a review. Reviews help us attract new listeners to the show. Until next time, thanks again for listening.
Episode 246: Tech Check — AI Moratorium, Character AI Lawsuit, FTC, Digital Services Act, and FSC v. Paxton
So to Speak: The Free Speech Podcast
Host: Nico Perino
Guests: Ari Cohn (Lead Counsel for Tech Policy at FIRE), Corbin Barthold (Internet Policy Counsel at TechFreedom)
Release Date: July 10, 2025
In Episode 246 of So to Speak: The Free Speech Podcast, host Nico Perino, alongside guests Ari Cohn and Corbin Barthold, delves into the latest developments at the nexus of technology and free speech. The conversation navigates through significant legislative changes, high-profile lawsuits, regulatory actions, and international policies impacting the landscape of free expression in the digital age.
The episode opens with a discussion of the recent legislative maneuvering surrounding the One Big Beautiful Bill. The House originally passed the bill by a narrow margin (215-214) with a provision imposing a 10-year moratorium on state and local enforcement of AI regulations. The Senate, however, stripped the moratorium by an overwhelming vote (99-1), and the bill was signed into law on July 4th.
Corbin Barthold criticizes the 10-year moratorium, highlighting the rapid advancements in AI:
"10 years was kind of insane. Like in 10 years, I plan to have an AI girlfriend." (04:10)
He expresses concern that, absent federal preemption, states may implement diverse and conflicting AI regulations, exacerbating the existing culture war over AI bias and moderation. Ari Cohn adds that such fragmentation could disproportionately hinder small AI startups due to the exorbitant costs of compliance, effectively favoring larger tech companies capable of navigating a labyrinthine regulatory environment.
"Compliance with all these different states... is going to leave the field basically at the mercy of the biggest players who can afford that kind of thing." (12:07)
A pivotal segment of the episode focuses on a landmark lawsuit filed in Florida. Megan Garcia has sued Character Technologies following the tragic suicide of her 14-year-old son, who had formed an emotional attachment to an AI chatbot portraying Daenerys Targaryen from Game of Thrones. Character Technologies attempted to dismiss the case, arguing that the chatbot's outputs are protected expressive content under the First Amendment. However, Judge Ann Conway denied this motion, suggesting that AI-generated outputs may not qualify as protected speech.
Corbin Barthold challenges this judicial stance, arguing that algorithms embody expressive choices:
"There is a strong First Amendment tradition of listeners having rights to receive information that this is flouting." (26:05)
He contends that if AI outputs are not protected, it opens the door for government overreach into controlling informational content. Ari Cohn echoes these concerns, emphasizing the potential dangers of allowing governmental bodies to dictate AI-generated speech:
"Imagine a situation where a kid in school is doing research on human rights atrocities during World War II... Can [AI] output not say one word about the internment of Japanese Americans during World War II?" (35:31)
This discussion underscores the fragile balance between regulating AI for safety and preserving fundamental free speech protections.
The conversation transitions to the Federal Trade Commission's recent settlement with major advertising firms Omnicom Group and Interpublic Group (IPG). The FTC raised antitrust concerns that a coordinated advertising boycott could violate competition laws, and the proposed consent order restricts the companies from steering advertising dollars away from publishers based on the publishers' political or ideological viewpoints.
Ari Cohn critiques the FTC's stance, arguing that it infringes upon free speech by limiting corporate discretion in associating with or boycotting based on ideological grounds:
"You can have the opinion. You just can't act on the opinion." (42:24)
Corbin Barthold further elaborates that such regulatory actions morph economic decisions into expressive ones, thereby undermining First Amendment rights:
"Free speech is normally a rough and tumble. That is what it is by its nature." (45:50)
The episode highlights the tension between antitrust regulations and free speech, questioning the legitimacy of government intervention in corporate advertising strategies.
Shifting focus internationally, the podcast examines the European Union's Digital Services Act (DSA), which imposes stringent obligations on digital platforms to combat illegal content and ensure transparency. These requirements include annual transparency reports and detailed disclosures on recommender systems.
Corbin Barthold criticizes the DSA for imposing onerous compliance measures on American tech giants, arguing that it transforms social media platforms into highly regulated entities akin to banks with fiduciary duties:
"They are trying to turn social media platforms into corporations that operate more like banks, with auditing, and acting like they have fiduciary duties..." (59:21)
Ari Cohn warns about the DSA's extraterritorial reach, which could extend European speech regulations to American platforms, potentially clashing with the U.S. First Amendment:
"We're seeing literal attempts by Europe to impose speech regulations on Americans." (60:21)
The discourse emphasizes the challenges American companies face in adhering to EU regulations while maintaining free speech standards inherent to U.S. law.
Concluding the episode, Nico Perino brings attention to the Supreme Court's decision in Free Speech Coalition v. Paxton. The Court upheld a Texas law mandating age verification for access to adult content websites, holding that the law satisfies intermediate scrutiny.
Corbin Barthold voices concerns about the ruling's broader implications for social media, cautioning that it could pave the way for similar regulations targeting various forms of online speech:
"If you read the decision that way, then oh yes, it is absolutely going to go toward social media. We will be fighting that fight." (64:08)
Ari Cohn discusses potential downstream effects, suggesting that future state laws could target different categories of speech deemed harmful to minors, complicating the regulatory landscape:
"There is room for monkey business." (67:39)
The episode highlights the Supreme Court's role in shaping the boundaries of free speech in the digital era, particularly concerning protections for minors and the potential ripple effects on broader online expression.
Regulatory Fragmentation: The removal of the AI moratorium provision in the One Big Beautiful Bill could lead to a patchwork of state-level AI regulations, posing significant challenges for smaller tech companies.
First Amendment Protections for AI: The Character AI lawsuit raises critical questions about whether AI-generated content qualifies as protected speech, with potential ramifications for governmental control over AI outputs.
Antitrust vs. Free Speech: The FTC's settlement with major advertising firms underscores the conflict between antitrust regulations and the protection of free speech in corporate advertising decisions.
International Implications: The EU's Digital Services Act presents compliance challenges for American tech companies, potentially imposing European speech standards on U.S.-based platforms.
Supreme Court's Influence: The Free Speech Coalition v. Paxton decision sets a precedent that could influence future regulations affecting various forms of online speech, especially those targeting minors.
Corbin Barthold:
"10 years was kind of insane. Like in 10 years, I plan to have an AI girlfriend." (04:10)
Corbin Barthold:
"There is a strong First Amendment tradition of listeners having rights to receive information that this is flouting." (26:05)
Ari Cohn:
"Imagine a situation where a kid in school is doing research on human rights atrocities during World War II... Can [AI] output not say one word about the internment of Japanese Americans during World War II?" (35:31)
Corbin Barthold:
"Free speech is normally a rough and tumble. That is what it is by its nature." (45:50)
Corbin Barthold:
"They are trying to turn social media platforms into corporations that operate more like banks, with auditing, and acting like they have fiduciary duties..." (59:21)
Corbin Barthold:
"If you read the decision that way, then oh yes, it is absolutely going to go toward social media. We will be fighting that fight." (64:08)
Episode 246 of So to Speak navigates the intricate and often contentious intersections of technology, regulation, and free speech. As AI continues to evolve and integrate into daily life, the hosts underscore the urgency of safeguarding First Amendment protections against an increasingly complex regulatory landscape. The discussions highlight the need for cohesive federal policies that balance innovation with ethical considerations, ensuring that the foundational rights of free expression remain intact in the digital age.
Stay Connected
For more insights and detailed discussions, subscribe to So to Speak: The Free Speech Podcast on Apple Podcasts or Spotify. Visit our YouTube channel or Substack page for video versions and additional content. Follow us on X (formerly Twitter) at @FreeSpeechTalk or drop feedback at sotospeak@thefire.org.
This summary is intended to provide a comprehensive overview of Episode 246 for those who have not listened. For a deeper dive into the discussions and nuances, tuning into the full podcast episode is highly recommended.