Scott R. Anderson
The following podcast contains advertising. To access an ad-free version of the Lawfare Podcast, become a material supporter of Lawfare at patreon.com/lawfare. That's patreon.com/lawfare. Also check out Lawfare's other podcast offerings: Rational Security, Chatter, Lawfare No Bull, and The Aftermath.
Scott R. Anderson
Worried about what ingredients are hiding in your groceries? Let us take the guesswork out. We're Thrive Market, the online grocery store with the highest quality standards in the industry. We restrict 1,000-plus ingredients, so you can trust that you'll only find the best high-quality, organic, and sustainable brands, all free of the junk. With savings up to 30% off and fast carbon-neutral shipping, you get top trusted groceries at your door, and you can stop worrying about what your kids get their hands on.
Scott R. Anderson
Start shopping at thrivemarket.com for 30% off your first order and a free gift. If you're an experienced pet owner, you already know that having a pet is 25% belly rubs, 25% yelling "drop it," and 50% groaning at the bill from every vet visit. Which is why Lemonade pet insurance is tailor-made for your pet and can save you up to 90% on vet bills. It can help cover checkups, emergencies, diagnostics, basically all the stuff that makes your bank account nervous. Claims are filed super easily through the Lemonade app, and half get settled instantly. Get a quote at lemonade.com/pet, and they'll help cover the vet bill for whatever your pet swallowed after you yelled "drop it."
Alan Rozenshtein
So speaking of chips, everyone: I don't know what your guilty pleasure was when we were all younger men with better metabolisms, but I distinctly remember sitting in my college dorm when I was 18 and being able to put down an entire party-sized bag of Tostitos Hint of Lime like it was nothing. And that is just not me anymore.
Kevin Frazier
You've got to be dipping something. Are you just going straight?
Alan Rozenshtein
Straight hint of lime chips?
Scott R. Anderson
I think you may have just wanted a lime at that point. That's a lot of hint of lime.
Alan Rozenshtein
Disgusting.
Kevin Frazier
Yeah, take the hint, Alan.
Scott R. Anderson
That was scurvy, Alan.
Alan Rozenshtein
I think that is my body telling me that you cannot just live on Tostitos alone. Even if you are 18 and made of rubber.
Kevin Frazier
Are you a pirate? Why do you need that much hint of lime? How bad was the scurvy?
Alan Rozenshtein
Because the rest of my diet was so terrible. I guess as a college student.
Kevin Frazier
No, at the University of Oregon (Go Ducks) we had Whammies, which were these milkshakes you could get, I swear, like a 20-ounce milkshake, on demand. And so we would just go every day. Every day was a Whammy day. Why wouldn't you want a Whammy every single day of your life?
Alan Rozenshtein
Youth is wasted on the young. Scott, what was your UVA disgusting food?
Scott R. Anderson
I'm trying to think. I don't know if I really had a clear one. I had a lot of disgusting drinks. Our signature drink for a long time was something called Monster, and there's Red Monster and Green Monster, which was... it's a secret recipe, so I'm not gonna give away the whole thing here.
Alan Rozenshtein
Yeah. The first ingredient is vodka from a plastic jug.
Scott R. Anderson
That is correct. The second ingredient is gin from a plastic jug.
Kevin Frazier
Yes.
Scott R. Anderson
Hello, everyone, and welcome back to Rational Security. I am your host, Scott R. Anderson, thrilled to be back with you, the listeners, for the podcast, where we invite you to join members of the Lawfare team as we try to make sense of the week's biggest national security news stories. We have a very special episode this week. No, not an after-school special about drugs or something along those lines.
Alan Rozenshtein
The more you know.
Scott R. Anderson
This is kind of a "the more you know" episode. We are digging deep into a topic of particular interest to the two gentlemen joining me today, and of passing interest to me. Marginally increasing, I would say. Somewhat begrudging, but I am increasingly interested in it, so I'll give it some points there. And that is the topic of artificial intelligence, or AI. And I am thrilled to be joined by our two leading AI mavens here at Lawfare: Alan Rozenshtein, senior editor and research director, I think is your technical title now. Is that what we settled on?
Alan Rozenshtein
Yeah. And co-host. I'm most proud, though, of Rational Security co-host emeritus.
Scott R. Anderson
Co-host emeritus. Because it requires me to say "emeritus" out loud, which I only get right 50% of the time.
Alan Rozenshtein
I'm trying to convince you, Scott, that it's pronounced emeritus.
Scott R. Anderson
Yes, that's why. That's why I use Listerine, to combat the gum disease emeritus, just in case. And also joining us, of course, is Lawfare's other AI-oriented senior editor, Kevin Frazier. Kevin, thank you for joining us on the podcast.
Kevin Frazier
Howdy.
Alan Rozenshtein
Howdy.
Kevin Frazier
Yeah, always a pleasure, and looking forward to when I get a fun title, whenever that comes.
Scott R. Anderson
We all diversify into weird titles the more time we spend at Lawfare. So it comes for everyone; just you wait. But in the interim, we have a couple of topics, because it's been a big AI news cycle. They are kind of all big AI news cycles. This week, to borrow my usual line about national security news, AI is very in the news. It is a force driving a lot of policy: a lot of policy at the federal level, at the state level, and at the international level. And we're going to tackle all three of those to some extent today, as we take on a couple of different aspects and big stories that have popped up that I thought were really interesting and warranted a little specialized discussion. And why not knock them all out at once? And then if people don't want to talk about AI, you can just skip this episode. Not thrilled about it. Prefer you didn't. But if you need to, that's okay.
Kevin Frazier
We understand. You gotta be with us through the tried and true. Let's go.
Scott R. Anderson
All right, let's do it. And I think this is some actually really interesting stuff that touches on a lot of the typical national security news stories. A lot of these are actually related to stories we've been talking about in other contexts; we thought it made sense to pull out the AI aspects and focus and concentrate on those. So without further ado, our first topic this week, a continuation of a conversation from the other week, to some extent: oh sure, now he's into free trade. President Trump has repealed the Biden administration's rule setting strict limits on the diffusion of high-end AI technology, opening the door to the global transfer of the technologies powering U.S. AI development and innovation, including advanced chipsets and semiconductors. And we're already seeing results of that policy in a recent deal the president signed with the UAE that would work towards the transfer of advanced semiconductors and other technology, part of the big package of deliverables he produced as a result of his recent Middle East trip. How should AI diffusion fit into the broader policy goals and strategic considerations surrounding the AI industry in the United States? And what approach does the Trump administration seem inclined to take?
Topic two: paving over the playing field. House Republicans recently included a provision in the reconciliation bill they have passed that would preempt state efforts to legislate on and regulate the AI industry for a decade. Is this sort of federal preemption a prudent step, given our national competition over AI with major power competitors like China? Or does it go too far in insulating AI companies and users from accountability for their actions, particularly where they may put the public interest or safety at risk?
And topic three: speechless. A federal district court in Florida has issued a notable opinion of first impression in a tragic case involving a teenager who committed suicide, allegedly as a result of encouragement from an AI bot powered by the company Character.AI, which is affiliated with Google, among other holdings. The judge concluded that the AI's output was not itself protected speech, allowing the case to move forward. Is this holding correct? And what impact will it have on the development of the AI industry?
So our first topic today is an aspect of the broader policy debate around AI that I think is absolutely fascinating, that I find myself having very torn and sometimes very counterintuitive instincts on, and on which, in some ways, I may even be a little sympathetic with the policy approach taken by the Trump administration recently, even though it has come under a lot of criticism from a lot of corners.
Alan Rozenshtein
You heard it here first, everyone. Scott is a Trump administration apologist.
Scott R. Anderson
I've got my red hat right here, about to hop on.
Kevin Frazier
Red hair and a red hat.
Scott R. Anderson
There you go. It just says "MAAIA," and that's, that's how we're going to do it.
Kevin Frazier
Your next haircut, shaved into the side of your head. Just, just "MAGA."
Scott R. Anderson
There you go. Who needs it? Exactly. So this is the question of AI diffusion, as we describe it. And I'm going to try and capture what I think are the major elements of diffusion, and then I want to hand it over to you all to correct me, because I'm sure I'm getting some aspects of this wrong. The central debate is this: diffusion is something like exporting, but it's not quite that. It's a range of exports involving components, so semiconductors and chips, which are the technical material you need to develop AI; to some extent models; and technology developers, companies actually setting up AI capacity overseas. It could involve, I guess in theory, other resources like technology transfer, cloud systems, things like that. All sorts of these things fit into the idea of diffusion. The basic question is how far we want our AI capability to spread overseas and how we want to make it available to others, particularly at this phase, where we are very much in an AI development phase. It's almost two separate questions: how much do we want to make some level of our existing AI capacity available, and how much do we want to make the ability to join the race on AI available? Or how much do we want to pull the ladder up from others and make it harder for them to compete with us around peak AI development, particularly towards general artificial intelligence, which is the star at the end of the rainbow that everybody is working towards in AI land? Am I basically getting this correct? Is that a fair way to describe diffusion? I'm getting some nods. Okay, so that's basically right.
So the Biden administration adopted, towards the end of its time in office, really towards the very end of its time in office, a diffusion rule that leaned very heavily on a tripartite division of the world. There was a club of 19 to 20 traditional allies; you can probably guess which countries they were if you were to go down a list. The United States said, okay, with these guys we're going to have pretty open technology and AI relationships, which basically means that we are going to generally permit, using our export control authorities, AI companies to work in these countries, set up different bases in these countries, and export chips for use in these countries. Then there's a bottom tier, which is, again, the countries you would suspect: the clear enemy countries that are already subject to pretty substantial export restrictions on all sorts of other things. And the rule said, for these countries, you're not going to be able to export any AI stuff to them. Don't even try. And in the middle are the roughly 150 other countries, a huge band of countries, where the rule basically said: you can export certain things into this band of 150 countries, diffuse them out there, but there is a limit. You can only move about 25% of your compute, that being the main variable, which I want you guys to explain exactly what that is, but basically a measurement of your processing capacity as a company, out of that first tier of countries into that middle band. The essential effect of this, I think, would be to say that the horsepower these companies are developing and cultivating, because it is this kind of integrated system of chips and technology and data and energy and all these other things that go into an operational AI system, the bulk of it was going to have to stay in the top tier of friendly countries, subject to export controls. It's worth noting that this middle tier was still available for export licenses, as I understand it. Actually, the UAE deal that I mentioned in the intro, which President Trump agreed to on his Middle East trip, was actually put in place before the repeal of the diffusion rule, I think, sequencing-wise, presumably because that's how they were intending to license it. I'm not sure they need a license anymore with the repeal of the rule; it depends on what other export controls might be implicated. So it's not an absolute ban, but this is describing what industry would be allowed to do on its own, without having to go and get those special, sometimes very onerous and difficult-to-get permissions of an export control license to do something more specific. But that option was there as an escape valve. Talk to me about what this policy was intended to do, how well you all think it did that, and the critique of it that has led to its repeal. Now, Kevin, let me start with you on this. I know you and I have talked about diffusion a fair amount in the past, and I want to get your sense of this policy before I throw in my less informed perspective on it.
Kevin Frazier
Well, I think it's important to start with the fact that everyone is trying to figure this out: what the best approach is, how to make sure that diffusion policy gets the balance right between making America's AI the default rule of the road for the rest of the world versus the odds of China doing the exact same thing. So this is a contest over whose AI is going to be the AI used by the rest of the world. And a big concern about the initial tripartite diffusion rule set up by the Biden administration was what happens if you're one of those countries hanging out in the middle, and these are countries that may surprise some folks who see who was included in that list, like Israel and India.
Alan Rozenshtein
Yeah, Portugal, I think, too.
Kevin Frazier
Portugal. I mean, what's wrong with Portugal? Right?
Alan Rozenshtein
So they drink too much port. You can't trust them.
Kevin Frazier
You can't trust them. So these countries hanging out in the middle were left in a really awkward spot. Do you work with the administration and go through all those hoops that you mentioned, Scott? Or, the alternative was, bring in a U.S. company; if you have a U.S. company come operate in your country, then there are far looser restrictions on how many chips can be imported into that country. So do you go through those steps, or do you just open your doors to China and start to partner with China in a way that makes you a little bit more likely to partner with them on new AI developments, and perhaps a little more beholden to them? So that's the bigger framing: who's going to create and dominate the rest of the world's AI infrastructure? And the concern with that initial Biden approach was that it left way too many countries saying, hey, you're just making it far easier to go to China rather than to the U.S. But as we've noted, and as I'm sure we'll continue to discuss, this is a really difficult issue to get right. As Sam Winter-Levy, in an excellent write-up for Carnegie, really identifies, there are three key factors we can work through when you're thinking about who should be able to access these critical chips. One, there's control: how much control are you going to exercise over America's chip production and who's able to access it? And once those chips are exported, what control are you going to exercise over how they're then used in that country? So there's control. Two, there's promotion: are you trying to accelerate AI adoption in different countries? If so, you may be more willing to send more chips to those countries. And then third, there's leverage: if you're going to send those chips to another country, what conditions are you going to impose to try to increase the odds of them developing AI in a way that aligns with America's interests?
So how best to get that control, promotion, and leverage equation right is a really tricky question. And the Trump administration appears to be saying that we're going to move a little bit away from the control aspect. We're not sure what exactly they're going to implement going forward, but they've said that the current structure, the structure they've just abandoned, was unworkable, too bureaucratic, and potentially stifling to American innovation. So I'm keen to hear what Alan has to build off of there, but I think that's kind of the underlying issue.
Alan Rozenshtein
Yeah, I just want to underscore Kevin's point that no one has any idea what the right margin is for all of this. I think that is just very, very important to appreciate. It's also, I think, important to recognize that diffusion of compute, so specifically of the chips themselves, is just one part of what we mean by AI diffusion. If you think about the three ingredients of an AI system, they are: compute, which is the chips, and of course the energy needed to run those chips; the data; and the algorithms, the technical know-how in terms of how to design these systems and how to run the big training runs. I think the reason we're all focused on the chips part is that the data is not that hard to get. I mean, it's basically the Internet, and anyone can scrape the Internet. Obviously there are some bespoke data sets, but the main bulk of the data is generally accessible by anyone, and there's really no way to restrict that if you're going to have anything like an open Internet. Similarly, the algorithmic improvements are very hard to restrict. It's kind of an open joke in Silicon Valley that it's, like, a thousand people who are involved in this; they all go to the same house parties and are part of the same polycules. And so it's just very difficult to keep that information from circulating.
Kevin Frazier
Drinking the same Monster beverage that Scott was, right?
Alan Rozenshtein
Exactly, exactly. And at the end of the day, the algorithmic improvements are themselves not that complicated, all things being equal, relative to what is sometimes called the "bitter lesson," which is the idea that scale, just throwing in more compute in particular, and more data, is the thing that is really behind these model improvements. So your main lever, such as it is, is to control the compute. And it just so happens that currently the most advanced chips are all designed by an American company, Nvidia, and manufactured by a Taiwanese company, TSMC, the chipmaker. That's why we're focusing on compute as the main lever to pull. The concern, as Kevin pointed out, is that there is a very delicate balance between restricting compute to China, and in particular restricting compute not just to China but to third-party countries that you're worried will just resell to China, or will set up server farms and then rent that capacity to China. Malaysia, for example, has way more cloud infrastructure than anyone thinks Malaysia actually needs, and so the assumption is that the vast majority of Malaysian cloud compute is just being used by Chinese labs. And then, on the other hand, you risk encouraging China to really invest in its own chip sector. Right now it's lagging far behind. But the idea that China can't compete, that it can't innovate if it wants to... I mean, at this point that's just racism, right? They can do whatever they put their minds to. And so you're trying to figure out: how much of a benefit is the short-term "moat," quote-unquote, that we're developing by restricting compute? How much is that worth against a kind of long-term supercharging of China's AI and chip capacities? And that's the argument that someone like Nvidia, for example, makes.
Now, Nvidia is also a public company that would like to make billions and billions of dollars, and so that's obviously the subtext of everything Jensen Huang says, and...
Scott R. Anderson
And is actively selling to China still, just a lower-powered chipset that they can get permission to sell. Yes, it is developing kind of an alternative product stream for China that complies with export controls but is nonetheless central to their development of this technology.
Alan Rozenshtein
That's right. And it's actually not clear that that itself is a good alternative, because if you're going to sell lower-powered chips to China, you might hope that will retard China's AI development speed, but it also might just get the Chinese to innovate in how to use these worse chips. One of the big stories behind DeepSeek, which was the model that came out of China and really put Chinese AI on the map, was that they were working with somewhat crippled American chips. Basically, the chips were just as fast, but they had a slower ability to transfer data between the different parts of the chip, which is important when you're training these models. And so the Chinese, and again, I cannot emphasize this enough, China has more than a billion people, and a lot of those people are really smart, went and decided, okay, we're going to go down to the hardware level and squeeze out every tiny bit of performance we can from these chips. And as a result, China now has particularly efficient AI training. So at the end of the day, again, it's very unclear how to calibrate this control over diffusion. The problem with the Trump administration's approach, though, is that if this were any normal administration, you'd say, okay, well, they have a different view than I do. Kind of seems reasonable. But of course, this isn't any normal administration. So when David Sacks, who is the Trump administration's AI and crypto czar, and every time I say his title, I just want to emphasize again the insanity of having the AI and crypto czar be the same person, because the two have nothing to do with one another. One is an epochal, world-transforming technology, and the other is nonsense.
Kevin Frazier
Alan, I think you're just jealous of someone with another crazy title.
Alan Rozenshtein
That's. I guess that's right.
Scott R. Anderson
I wish I knew which was which, Alan.
Alan Rozenshtein
Exactly. Just think, one day he'll be introduced as AI and Crypto Czar emeritus. That'll be really exciting for him.
Scott R. Anderson
The dream. The dream.
Alan Rozenshtein
He did this really long post on X about why the Biden administration's diffusion rule was terrible: overreach of export control authority, alienation of U.S. allies, lack of due process. And all of those things might be true, but this is the Trump administration we're talking about. So this tiny pocket of principled free-traderism and helping our allies, in isolation, may be a perfectly reasonable argument. And again, there's no reason to go ad hominem; we can evaluate it on the merits. But it is completely crazy-making to have this kind of weirdly principled free trade position in the context of the absolute shredding of the global free trade system and the global American alliance system. And again, I'm trying so hard to evaluate this policy on its own merits, but I can't, because it's just part of this otherwise completely insane ceding of the global economic system to China.
Scott R. Anderson
I want to circle back to the policy specifically, because I have mixed feelings about it. Frankly, I had very strong negative feelings about the Biden administration's rule, not totally along the lines that I think a lot of people did, but I had a lot of reservations about that policy as I understood it at the time, and they have not been alleviated. So I want to circle back to that. Before I do, though, I think we've got to define something that would be really useful to understand better, and that you all, I think, do understand better: what the heck do we mean by AI competition in this context, and how are we envisioning this playing out? Because it seems to me the goal, as far as I can tell, is that we want to beat China to AGI, to broadly oversimplify it. We're trying to gun our systems up to get as effective an AI as we can. AGI is kind of an amorphous end state; it may not even really exist as an end state. But the basic idea is we want to stay out ahead. The question, though, is that, strategically, how you approach that really varies based on how you see that competition developing and structuring itself. If there really is an end goal and we're, like, a year away from it, then you could say: oh yeah, cut off the rest of the world for just one year. We're gonna take this one year, we're gonna sprint to the finish, and once we're at the finish, we're done, we're at AGI. But the more I've read into it, I don't think that's really how we're confident AGI is necessarily gonna work. There's no fixed end date, so we're really talking about ongoing competition dynamics to some extent, unless there's some benchmark we're going to get to. And then you have to ask yourself: well, what is the marginal benefit of getting to certain benchmarks first, given that it's not clear there's an easy way to pull up the ladder entirely, right?
We may beat China to AGI, or whatever the benchmark is we want to set. How big a marginal difference in our relative power is it if China arrives four years later, or three years later? Because it seems hard to say confidently, oh, they'll never be able to get there, that we can pull up the ladder forever, if that's really what your goal is, short of just completely bogarting this chipset, or whatever it is, the secret formula that pushed you over the edge. And maybe not even that, because it's not clear the chips are the secret sauce, although clearly they're a big part of it. So why is this policy, it seems, and certainly the rhetoric around a lot of AI policy, both in the last administration and among some people in this administration (although frankly it's a little chaotic and hard to know exactly where the through line is in this administration, given a lot of different views on certain types of stuff), so focused on the idea that we're just competing against China, and that that warrants casting off the reins on the industry? That's a crazy perspective. That's a perspective we've taken with no other competitive industry, all of which feed into our general major power dynamics. So why is it so warranted here? And particularly if this is an indefinite state, if there's an indefinite competition, does that mean we can never have constraints on AI, or that we always need to bogart it and protect it and shape it in a way that preserves that competitive advantage, even where there may be other big trade-offs, like bringing the third world's quality of life up by giving better access to AI, which is something being compromised by AI diffusion restrictions? That's a long way to explain it. Kevin, am I off on this? It just always struck me that we don't have a clear concept of what we mean by AI competition and strategy, and that makes it really hard to figure out how this all fits in on an extended timeframe.
Kevin Frazier
Well, and this brings up an important point: Alan's reference to the Sacks post is valuable here, because Sacks did say these are my personal views. And so we have a lot of Trump administration officials expressing personal views about these topics. So I can't definitively state, in terms of the competition we're engaged in, whether it's a race to AGI, to the killer app that's going to spread across all economies, or just to scoring the highest on different benchmarks. I think a lot of people could give you a lot of reasons why we're in a competition with China. The two that are most compelling to me: first, there's just the national security perspective. If you talk to folks like Ashley Deeks, if you talk to folks who are actively thinking about how AI is going to change the battlefield, having the most dominant AI capabilities is a huge competitive advantage and a huge deterrent. If you talk to Eric Schmidt or to Dan Hendrycks, they're going to tell you that having the most sophisticated AI, and showing that to your adversaries, is going to have a great deterrent effect on international relations and national security issues. So there's the military argument, which is pretty compelling to me: having the most dominant AI translates into having the most dominant military, which is always going to be a key interest. The other leg that I think is really important here is just the political economy aspect. We're seeing, for example, that Nvidia has a real interest in making sure it can share and trade its chips as liberally as possible, because that benefits Nvidia, and that benefits the rest of the American AI infrastructure. And so being the country upon which all the other countries are relying is a pretty good place to be economically. And when we look at this sort of AI sovereignty movement, we're seeing more and more countries wanting to develop domestic AI systems that they can exercise more control over.
If you are the country that's selling the critical inputs to those countries, well, that's a good place to be. And we don't want China to be the one developing and selling that critical infrastructure.
Scott R. Anderson
Well, but this gets... let me just interrupt you there, because that's exactly the tension that I feel is so lacking in the discussions around these dynamics generally. Because the Biden administration's policy was trading your first category for the second. We were openly saying: we are going to accept that the rest of the world will not be as reliant upon U.S. technology as it probably would have been otherwise; we're not going to let companies export or build there, in exchange for fast-tracking us towards whatever this competition dynamic is. But I've just never seen the trade-offs there clearly articulated. And the military context is the one where you're like, okay, you can see how AI would make you a better, "more lethal fighting machine," to quote our Secretary of Defense. Lethal warfighters. Even then, though, it's not 100% clear to me what the marginal actual military operational advantage is in going from a really, really good AI system to a really, really, really, really good AI system. If you were really calculating this, you would want to empirically show: here's the actual strategic benefit, here's what makes us deadlier, here's the deterrent. Maybe there's a psychological "oh, they're better than us" deterrent effect, sure. But we're better than them in all sorts of ways. What are the different qualitative measures that will have that effect? I'm not sure why we would value this one over certain others. So I don't know. Those are the points I've heard, and I just find them not entirely persuasive, because they're just so light on actual underlying substance, as far as I can tell.
Alan Rosenstein
Yeah. So look, this is all under-theorized. Part of the reason it's under-theorized is because it's hard, and it's new, and it's moving really quickly. We don't have decades of data and facts on the ground from which to develop a theory. This isn't like thinking about the strategic balance of power with artillery, right, where you have some history to look to. So part of it is because it's hard; part of it is because everything gets funneled through U.S.-China AI competition, because literally U.S.-China competition is the only thing anyone in D.C. can agree on. Right. So just, like, procedurally, which is the...
Kevin Fraser
Best way to get both parties on board. Just find the lowest common denominator and make that all we do.
Alan Rosenstein
Yeah. Preferably competition between the two giant nuclear superpowers, right? But in the same way that everything, procedurally, gets filtered through the reconciliation bill these days in D.C., as we'll talk about in the next segment, all policy thinking gets filtered through U.S.-China competition. And that's obviously not a great way of thinking about it. Another reason this is hard is because it's not clear that the Trump administration is capable of having a strategy, because everything flows from the whims of Dear Leader. And it's totally unclear whether he understands AI, whether he's capable of thinking in a non-purely-transactional way about his own interest. I mean, we haven't talked about the UAE AI deal, but how much of this is just because, you know, he wants a really nice new plane to be Air Force One for the next couple of years, one that he then gets to fly around in after he leaves? Right. In any other administration that would be an insane statement to make, and in this administration it seems totally plausible. So a lot of it's hard. Another thing to keep in mind: when we talk about AI competition, there are two different things we're talking about, and we often get them confused, which isn't helpful. One thing we're talking about is who can train the best AI model. That's important, obviously, and when we talk about the race to get to AGI, or the race to artificial superintelligence, that's what we're often talking about. The problem there is that if you're trying to keep China down, that is almost impossible to do, because as large as these training runs get, as many hundreds of thousands or even millions of GPUs as you need, China is a large, rich country that will figure out a way, either through pure brute force or by smuggling chips in, to get enough chips to develop frontier or near-frontier capabilities.
And if what you're really focused on is military, it 100% can do that, because the military capabilities are going to be potentially smaller models, fine-tuned models, things you can really focus on. If what you're worried about is developing the next fleet of autonomous, hypersonic, nuclear-capable drones or whatever, there's no amount of diffusion restriction that's going to keep a dedicated Chinese military apparatus from figuring that out. So if that's the AI competition you're getting at, I don't know what we're doing there. But there's a different AI competition, which is the competition for inference, for running the AI models once you have trained them. And if you think that AI is going to filter into every part of the economy, which I think is true, and you think that at the end of the day what makes superpowers super is their economic and industrial base, which I think is also true, then the really important competition, not this year or next year, but over the next 10 years, the next 20 years, is about inference compute. And with inference compute, you really have two things: you have the actual number of chips in data centers, and you have energy. Now, America right now is leading in the chips, but China 100% is leading in energy. Part of it's dirty energy; they're just willing to burn more coal. But increasingly it's clean energy. And, not to get on a tangent, but the dumbest possible thing the Trump administration is doing is making being against electricity somehow a sign of MAGA manliness. But there, I think, maybe you could keep China down; you could make a long-term argument for trying to prevent China from getting enough chips to truly revolutionize its economy.
Again, there the response is: yeah, but then Huawei and China's own companies will just develop their own shittier, but still probably good-enough, chips. But it seems to me that maybe America should focus more on its side of the bottleneck, which is energy, and building high-capacity, high-power transmission lines to get the energy from the Nevada desert to St. Paul, Minnesota, for example, where I am today, so we can run all our stuff. Now, maybe the response to that is that we can do two things: we can build up our energy infrastructure while also keeping China limited on compute. Though even that's not entirely clear to me, because our energy infrastructure requires a bunch of rare earth elements that only China has. All this is to say, this is very complicated and very under-theorized. And I will just go back to the main point I made, which is that even if, in isolation, you can look at this or that Trump policy and say, yeah, that actually makes sense, and I know some of the people working on AI policy in the Trump administration, and they are actually very serious, smart people who are not terribly MAGA, frankly, they are embedded within a much larger, quote-unquote, policymaking apparatus that just makes no sense.
Kevin Fraser
So I want to be cognizant of bringing in our second topic, but let me bring in one part of this, and we can use it as a transition to that topic, because they're actually related. There was one really important underlying thread of the Biden administration policy, which I think is actually the biggest tension point that led the Trump administration to repeal it, and particularly the reason why the repeal lined up with the big Middle East trip and why the first deals we're seeing...
Alan Rosenstein
Because diffusion is woke.
Kevin Fraser
Diffusion is woke. But kind of, yes; I mean, anti-diffusion is maybe woke. The one unifying thread of the top tier of, I think it was 19 countries the Biden administration laid out wasn't that they were strategic allies, because there are a lot of strategic allies left out of it. Israel and the Gulf states were not in the top tier; they're in the second tier. It's that they are democracies, and strong, solid democracies. That seemed to be a big thread of what the Biden administration was thinking. Now, maybe that's a Biden administration thing; they were really into democracy as a common unifying element in their foreign policy in general. A lot of that was Ukraine, but not exclusively Ukraine. But is there something to that relating to this technology? Is the real concern here that the UAE and Saudi Arabia, because they both have the money, will be able to jumpstart an AI system if they get access to the chips, and then, unlike France or the UK, use them to entrench an autocracy that, so reinforced by AI, can never be overturned, one that may spread, that may lead to all sorts of really problematic uses of AI? Is that part of what was motivating the Biden administration's policy? And is that something the Trump administration just doesn't buy into?
Unknown Speaker
I didn't see that as the predominant factor. I think what democracy, or a strong democracy, was used as a proxy for was the odds of being able to prevent leakage. I think leakage of the chips to bad actors was the primary concern, because a lot of the conversation about how to potentially move out of Tier 2 into Tier 1 wasn't, hey, what's your Freedom House democracy score, did you move into the top 10 this year? It was rather: have you enacted laws and shown that you're going to prevent leakage? So that wasn't the main variable I saw being used by the administration. But before we move on, I did want to flag that there are alternatives to this particular schematic for addressing the chip diffusion question. Senator Cotton has introduced a bill to require location verification mechanisms on export-controlled chips. The idea is that if you send a chip to country Y, you're able to verify that it's still housed in country Y and used by country Y. So I just want to make sure folks know this isn't the only game in town. There are other policy options available to us that smart folks are throwing out there.
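For listeners curious how such location verification could work even in principle, one mechanism that comes up in these policy discussions is delay-based attestation: a trusted verifier sends the chip a challenge, and because no signal travels faster than light, the round-trip time puts a hard ceiling on how far away a chip that answers quickly can possibly be. The sketch below is purely illustrative, not drawn from Senator Cotton's bill or any actual implementation; the constant and function names are our own.

```python
# Illustrative delay-based location check: a challenge's round-trip time
# (RTT) physically bounds how far away the responding chip can be.

SPEED_OF_LIGHT_KM_PER_S = 299_792  # upper bound on signal propagation speed

def max_distance_km(rtt_seconds: float) -> float:
    """Farthest the chip could possibly be, given the measured RTT.

    The signal travels out and back, so one-way travel time is rtt / 2.
    A real scheme would also subtract the chip's processing delay,
    which is omitted here for simplicity.
    """
    return (rtt_seconds / 2.0) * SPEED_OF_LIGHT_KM_PER_S

def consistent_with_location(rtt_seconds: float, claimed_distance_km: float) -> bool:
    """True if the claimed location is physically possible given the RTT.

    A fast reply proves the chip is *near* the verifier; it can never
    prove the chip is exactly where claimed, so this only rules
    locations out.
    """
    return claimed_distance_km <= max_distance_km(rtt_seconds)

# A chip claimed to sit ~2,500 km away that answers in 20 ms is
# consistent with that claim; one smuggled ~10,000 km away is not.
print(consistent_with_location(0.020, 2_500))    # True
print(consistent_with_location(0.020, 10_000))   # False
```

A real verification regime would of course need much more than this physical bound: cryptographically signed challenges so the chip itself (not a proxy) must answer, repeated measurements, and multiple geographically distributed verifiers. This sketch only shows why a round-trip time can rule out a claimed location at all.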
Alan Rosenstein
Do we have any sense of whether that's technically feasible? I just don't know. It may be, or it may be one of those things where policymakers say, what's the old joke? You guys are so smart, you put a man on the moon; okay, next step: put one on the sun. Sometimes D.C. policymaking is a little bit like that. I just don't know in this case yet.
Unknown Speaker
The folks I've seen who are at least talking about it suggest that it's at least in the world of possibility. It is admittedly beyond my technical capacity to answer that definitively for you right now, Mr. Research Czar.
Kevin Fraser
Fair enough. Well, let's bring in our second topic, because it brings a lot of these same questions to the domestic front: the question of preemption. We have a provision in the current version of the reconciliation bill passed by the House that would essentially preempt AI regulation, or legislation, by the states for the next 10 years. It would arguably do a lot more than that. Part of the concern over this particular provision is that it's so broadly worded it could have lots of leakage into other types of regulation, things we don't need to get into for the sake of this conversation. I actually don't want to make this a critique of this particular provision, but of the underlying concept.
Alan Rosenstein
But there's such a juicy drafting error in it, too. It's a mess, man.
Kevin Fraser
So many, so many. But I don't want to waste our time with that. That's interesting; that's a great written piece somebody can do, which they probably already have and I just didn't catch it. But the key point here is really this question of federal preemption, because both of you have written pieces friendly to the idea of preemption, at least to some extent. Alan, I think you particularly are focused on AI safety concerns. Kevin, I think you were a little friendlier to it all around in the piece you wrote two or three weeks ago on this for Lawfare. And I will admit my bias: I have a lot of reservations about broad federal preemption at this particular stage, where we don't see any real regulation of the AI industry coming out of Congress, and the only beginnings of it we're seeing are at the state level. I'm curious to hear your case for why preemption is the right solution, partially or at large, at this particular moment, and why this provision, if better drafted, setting aside those drafting problems, might be the right way to go about things. Alan, I think you wrote the first piece on this, so we shall give you first bite at the apple, since you were on this beat a year, a year...
Unknown Speaker
And a half ago, Scott. I think you meant first bite of the Tostitos Hint of Lime chip.
Kevin Fraser
There we go. Yes.
Unknown Speaker
Yeah.
Kevin Fraser
The first hint of lime on your tongue.
Unknown Speaker
Alan, don't eat the whole bag this time. Come on, buddy. Self control.
Alan Rosenstein
You're all such nerds. So, yeah, just to say one quick thing about this bill, because I think it's important: it has been passed by the House, but it will almost certainly be stripped out of the Senate version because of the Byrd rule, which does not permit things to be put into reconciliation bills unless they have a pretty clear effect on the federal budget. And everyone I have talked to about this agrees, including, well, I've not talked to Senator Ted Cruz, but he has put out a press release announcing that he has his own standalone version of something like this that he's going to introduce, which, again, is a pretty good signal that this is not going to be in the final bill. So, to Scott's point, it's a big deal that the House passed this; it's very much on the agenda now. But this specific bill is almost certainly not going to become law. So, look, the case for federal preemption of certain kinds of state regulation of AI, which is what I wrote a year ago, I co-wrote it with Dean Ball, who was then a researcher at the Mercatus Center at George Mason and is now actually working for the administration in the Office of Science and Technology Policy doing this kind of AI policy (I presume; I haven't talked to him about it since then), was that preemption is appropriate when state regulation creates national externalities, which is why we have a national government. Now, the main concern we had when we wrote our piece a year ago was with what was then a very broad AI safety piece of legislation, SB 1047 out of California, which would have put real limitations on the ability of anyone to develop large AI models. And our concern was not that the concerns behind something like SB 1047 were illegitimate.
We were perfectly happy to concede that there are safety concerns, and maybe those safety concerns are so large that the government should actively slow down the development of frontier models. Rather, it was that this is a question not for a state to decide, especially a state like California, which is going to have outsized influence on how AI is developed, but for the national government, because AI has potentially very bad effects, but also potentially very beneficial effects, that touch the whole nation. And then, of course, there is a geopolitical, international-competition aspect to this. So all of this should be decided at the federal level. We also argued, and I continue to think this is true, that there is a reason you have not seen action at the federal level. One common defense of state regulation, even of national issues, is that Congress is paralyzed; it doesn't do anything. And in some sense that's obviously true. But we thought, and I still think, that when it comes to AI regulation, the reason you're not seeing anything out of Congress is not primarily that Congress is paralyzed; it's that Congress does not want to regulate AI, that national leaders across both parties are generally taking a fairly aggressive accelerationist perspective, where they want American AI to develop pretty much full steam ahead. Certainly they want the big frontier labs to be able to develop larger, more capable models largely without too much government regulation. Maybe the Biden administration did not love this, but it's notable that during the Biden administration, top Democrats in the Senate, for example Chuck Schumer, were not at all on board, frankly, with a lot of what the Biden administration was doing. And members of California's delegation in the House and Senate spoke out quite aggressively against the California bill, which is a quite unusual thing for people to do, right.
For members of Congress to speak out against their own state's legislation. And so we took that as an indication that there was surely no federal consensus on the need for this kind of model slowdown. So we thought, and again, I continue to think, that it's totally appropriate for the federal government to preempt regulation of model development. Now, where I don't think it's appropriate, and I think Kevin and I may disagree on this, at least based on Kevin's recent piece, is when it comes to a state's ability to regulate how AI is used within its jurisdiction. That strikes me as totally reasonable. If Minnesota wants to say, we don't want AI to be used by our employers to screen candidates, I may disagree with that; I may say I think that's inefficient, and actually AI is really useful and less biased than humans, or whatever the case is. But that strikes me as a thing a state can absolutely do, or at least has a legitimate interest in. Now, obviously there are going to be spillover effects of that, too. Anything any state does has spillover effects, so there's no way to perfectly cleanly say, this has no externalities, therefore a state can do it, and this has externalities, therefore only the federal government should do it. But if you're going to have a federal system, where states do have legitimate interests in regulating, where they are laboratories of democracy, and all the standard arguments for federalism, you're going to have to draw a line somewhere. And I think the cleanest and most practical line to draw is to say: look, there's AI model development, and states do not get to mess around with that; and then there's the use of AI within a state's jurisdiction, and we're going to treat that like anything else that's a potential state concern.
Again, it's not a perfect dividing line, but it's the line that best balances keeping national issues for the national legislature and then allowing states to do state issues.
Unknown Speaker
I'm glad that Alan is confident in his line-drawing abilities. I think I have more skepticism about the ability to distinguish between when a system is being used solely intrastate versus when it's having impacts that pose an undue burden beyond a state's borders. I think drawing that line is incredibly difficult, and I just wouldn't draw it where Alan draws it. One reason why, not to focus too narrowly on the example he raised, is that employment question: the concern about would-be employees, about applicants, being exposed to a biased AI algorithm. Guess what? State laws already address biased hiring. Unfair and deceptive acts and practices statutes, anti-discrimination statutes, cover a lot of the harms that folks are allegedly trying to target with state legislation. So I think a beginning conversation on this AI moratorium debate just starts with a more honest assessment of how state laws already apply to and address the existing issues being raised. Yeah, Alan.
Alan Rosenstein
Yeah, so I think that's a fair point, and I hear it raised a lot, but it strikes me as actually totally orthogonal to the preemption question. The argument, which you hear a lot and I have some sympathy with, is: look, we don't need new AI-specific legislation; we have all this existing legislation or regulation or common law or whatever that can apply. Okay, fine; that may be true, that may not be true. But it seems totally separate to me. And I should also say, you see this in the current House bill, and again, not to focus on the current House bill, but it is a template for what's going to come. There's a drafting error that makes it unclear whether the bill actually accomplishes this, but it tries to say: we're only preempting AI-specific legislation, or legislation that treats AI differently, and if you have general laws, they can still apply to AI systems. But the reason you have preemption is that you're so worried about states screwing up a policy issue of national concern that you don't want them going anywhere near it. And if that's the concern, then you want very broad preemption: states simply cannot regulate AI, whether using special laws or general laws, because we're so worried about the spillover effects that any state regulation of AI will have. Whether the regulation comes from an AI-specific law or from a general law applied in such a way as to cripple or severely retard the AI industry in your jurisdiction, you have the same spillover effect. So that always struck me as a perfectly interesting debate to have, but one that has nothing to do with preemption.
Unknown Speaker
Yeah, I guess I disagree there, because I think the second legislation becomes AI-specific, the amount of uncertainty about its impact on AI development and diffusion drastically increases. In my opinion, you start to get into questions of, for example, how we're even defining AI at the state level. There's a whole law review article being written right now by Paul Weitzel at Nebraska just outlining all the different definitions of artificial intelligence he has detected in these state bills. So if you get AI-specific, the uncertainty about whether you're actually captured by a given state bill drastically increases. That's one point. Another point I want to tackle, which you made earlier, is this idea, and I know you're not necessarily endorsing it, that we need to maintain the laboratories of democracy. That's a really good line that people like to use. But go analyze these bills, and you will quickly find out that these aren't bespoke pieces of legislation, where somebody went and did a bunch of town halls across their state, got input, and said, this is the Wisconsin-specific AI bill we need. No. We're seeing copycat pieces of model legislation being spread across the country by well-funded groups that have a certain view of AI and are trying to get state legislatures to adopt exactly the same bill. This isn't experimentation; it's an attempt to ram certain ideas, certain model bills, through all 50 states. So that's another talking point that I don't think sticks.
Kevin Fraser
Let me hop in here to get a word in, because I want to take a moment to criticize this whole idea, whether this is an appropriate thing to do at all, so you two can unite while you bicker between yourselves about exactly where the line is. I have a lot of reservations about this line-drawing exercise whatsoever. The point you raised, Kevin, about model legislation is true of any state regulation. There are model laws that are highly problematic in lots of regulatory areas; that is an element of the laboratory of democracy, that you have industries and special interest groups advancing things in different jurisdictions that can be really problematic. There's no recipe for good policy; democracy is not a recipe for good policy. The one thing it is, though, and this I think is so important in an era where you're dealing with an emerging technology that's being widely dispersed, widely implemented, and used at a point where we still don't fully comprehend the consequences, is responsive. One of the virtues of the laboratories-of-democracy idea is its responsiveness. It provides multiple avenues so that when you have a constituent population that is affected in a way that has some sort of public grab, they can pursue different regulatory measures at different levels and engage. And it exists alongside, I should note, litigation, the third topic we're going to talk about, which we are all working on in other contexts: this idea that existing civil liability can come into play when there isn't statutory regulation at the state or federal level. States can then respond when there's perceived to be a lack of accountability coming through the civil legal system, or other needs for regulation, and the federal government can come in as well.

I'm totally open to the idea that there are appropriate circumstances where the federal government should preempt state legislation to maintain a national industry. But in an era where we're seeing a lot of state legislation debated but not much of it actually enacted, isn't it jumping the gun a little to clear the deck of any possible state legislation? Because the fundamental dynamic of this industry, frankly, is that the AI developers are the driving force as to whether there's regulation or not. You will get regulation of AI when they come to Congress, and this was true of a lot of emerging technologies, and say: we need you to protect us from liability, and here are the compromises we're willing to make about what we think is responsible and what isn't. That sort of push is what actually gets Congress over its institutional hurdles to enact something. If you clear the deck of state legislation, you're clearing away a lot of the incentive those companies would have to do that, because they're just not going to face that pressure. And you're putting everything in the one bucket of Congress, which, for all its dysfunction, sets a super high bar for responsiveness; it is not a responsive institution by design. It was assumed that a lot of that responsiveness would come from the states, localities, and municipalities. That's the part of the laboratories of democracy that I think is real and very important in this particular moment, and I just worry about what we lose if we get rid of it with broad preemption. So where am I off on that?
Unknown Speaker
I think my gravest concern is the creation of an American public that is highly fragmented with respect to its exposure to AI. If certain states become a sort of Luddite haven for anti-AI communities, well, great: you have a whole state where folks don't know how to use AI tools, don't know the pros and cons of the latest models, and aren't ready for an economy that is wholly focused on integrating AI into the workplace. How do you grapple with that state? If you can tell me definitively that that's not going to happen, then yes, I may be more sympathetic to your views. But for me, the possibility of a state having reactionary policy that forecloses that sort of AI literacy and adoption is a really scary world, even if we have just one of those. And to your point that of the hundreds of bills pending before state legislatures we're not seeing a ton enacted right now: suppose we just saw two get passed tomorrow. Let's say the New York RAISE Act and HB 3506 in Illinois, both pending right now, both of which would require large AI companies to have annual audits and to adhere to various, air quotes, "reasonable" standards of AI development. Well, great. Now we're creating a whole audit industry, a whole set of special interests who are going to want this legislation to stay on the books regardless of whether it's beneficial, regardless of whether it's serving the interests of those states. And we don't even know whether those audits are meaningful right now; go talk to folks about how red-teaming, evaluations, and benchmarking are really difficult tasks.

If we had all that settled, if we had an agreed-upon set of standards and tests, then I might be more sympathetic to this. But right now we're at risk of creating these sorts of cottage industries more than really advancing legislation designed in the interests of those broader state communities.
Alan Rozenshtein
Look, I'll just finish by saying, look, you either believe in federalism or you don't. You don't have to. There are perfectly normal countries, you know, France, that do not have a federal system. Everything is much more centralized in that system. I think all of Kevin's arguments that, you know, such and such is a bad policy, therefore it should be outlawed, make total sense because there's only one.
Kevin Frazier
I'll object to that. I'm not saying I'm anti-federalism.
Alan Rozenshtein
Well, no, but, but I think your argument is actually right, and it's just important because, yeah, in this instance. Right. But, like, you don't.
Scott R. Anderson
I don't.
Alan Rozenshtein
I don't think you get to pick and choose. Right. You either believe in federalism or you don't. I don't think you get to believe in federalism for everything but AI. Right. Unless you have a principled reason or a principled way to distinguish, right, between those issues that are bad policy because you're worried that, like, state X is going to be left behind because state X regulated poorly, because state X's voters elected boneheads into the state legislature, which is the definition of federalism, right, we allow states to do that, and then other issues where we don't allow states to do bad policy because they are imposing their bad policy on other people, right, on non-voters. That's the externality argument. Right. I think that's, to me, the principled framework for federalism. And then the debate becomes how you instantiate that in a preemption regime. Now, I think the best way to do that is to say regulation of model development is going to be preempted and everything else is permitted. You may disagree. You could have a debate there. But I think it's important to get really clear on what the principles are for when preemption, forget AI, when preemption is and is not appropriate. Get that really clear in our heads. I think the externalities thing is the best framework I've thought of. And then apply that to AI. But those are two different debates.
Scott R. Anderson
But you're actually skipping a debate here as well, because of course the federal government doesn't actually preempt everything it could, by choice. There are certain policy areas where they say, yeah, we could enact federal legislation about this, especially in the late 20th century, where interstate commerce has come to swallow so much of what the country does. Congress could regulate all sorts of things, but there are lots of things where it chooses not to, because they say, well, this is an area where, in reality, maybe it's just that we didn't get enough political consensus, and lots of other bad reasons. But if you want a principled reason, it's that maybe there are advantages to having diverse perspectives, various approaches here. And yeah, we could preempt it, but we don't need to. That's where I think this issue still sits. I'm not ruling out the possibility that there might be things you need to preempt, or want to preempt, at some point. But I'd rather see the opportunity to have different models pop up, even if they might have certain negative externalities, because you might see positive, innovative models for everything. Models, not in the AI sense.
Alan Rozenshtein
No, no, no, no. I just mean models of regulation. I mean, I guess my question is, right, and here Kevin and I get to gang up on you, right: whatever Kevin and I disagree on about whether you should preempt model use regulation, right, my AI-and-employment example, Kevin and I definitely agree that you should 100% preempt stuff like the California safety legislation that would fundamentally change how an OpenAI or an Anthropic or a Google thinks about its next giant training run. Right. And our concern is that the costs of getting that wrong, even for a short period of time, could be so dramatic, especially relative to international competition, see our first topic, that we don't think the Scott Anderson-style models of democracy, or different models of regulation, are worth it. Right. At least, that's where I want to fight this fight.
Scott R. Anderson
Well, I won't rule out the possibility that there might be state-level regulations that would have that impact. And I remember we talked about that California bill at the time, and I didn't have the impression that it was so clearly catastrophic that it would be so damaging and hindering to the whole industry. Although I know it was certainly something the industry didn't like: it imposed different burdens and had a bunch of certification requirements that exposed them to some potential liability. Right. Because the big issue was a certification that the model wouldn't have catastrophic potential effects, and sort of other provisions. There might be bad versions of that. I hesitate to say up front that there is no situation where even a state-level regulation of model development couldn't actually have positive externalities in some way, if for no other reason than putting pressure on the industry. The state says: this is an area where our state constituencies are genuinely facing concerns, and they've persuaded us. So now you can persuade the federal Congress that you don't need us to address that. You can get them to get rid of this through preemption. But you're going to have to find a way to address it, because we have enough of a voice to actually raise these concerns. That's the bargaining that's necessary at this point when you're dealing with a fast-moving industry like this. And I don't think we can use just competitive dynamics, call back to our first topic, as a cure-all to say everything's bad, trade-offs are never worth it, we can't have any barriers to competition. I don't think you're actually saying that, but that's often how it gets used in this context. The truth is that there are individual policy trade-offs. We should weigh these policies individually. And I'm just not convinced that we have such a competitive priority to open the doors to any model development that you couldn't have state legislation with some marginal benefit.
Until you do, I just don't see this preemption as necessarily being worth it.
Kevin Frazier
And I think this is where we can fight this debate on yet another ground, which is just your view of the underlying technology. Do you want to gamble on making sure you mitigate its risks, or would you rather gamble on maximizing its benefits? And I find myself in the latter camp: all else equal, I would rather gamble on those benefits. Because if you look at prior waves of technology, and here I'm leaning on scholars like Jeffrey Ding, who have done research on technological diffusion, the countries that lean into technology and help their communities and help their students learn how to use that technology, those are the countries that lead in the next generation. And so I would rather make sure that we're that country, the country that leads in AI adoption and in AI literacy, rather than saying, hey, you know what, I've heard about this one time when this one AI system did this one really bad thing, let's ban that. That sort of reactionary policy, which I think we're seeing considered in a lot of state legislatures, is what scares me the most.
Scott R. Anderson
So let us go to our third topic, which brings in another domain that intersects with a lot of these issues, and that's the question of civil liability, something we have all been spending a fair amount of time talking and thinking about in other contexts for other work at Lawfare. And we've seen a major development this week, as far as I know a case of first impression, coming out of, I believe, the Middle District of Florida. A district court judge is facing a really difficult, really tragic case of a teenager who ended up taking his own life after interacting with a number of AI bots modeled on characters from Game of Thrones specifically. He was also, it's worth noting, playing characters he had developed in the Game of Thrones sort of universe, interacting with them, where a number of them took steps that appear to have encouraged him to commit suicide, although you can dispute that characterization, and I'm not sure it's entirely clear-cut, to be honest, at least based on what I can read in the portions of the record the court chose to quote in the opinion. Nonetheless, among the broad variety of issues implicated in this case, including standing and jurisdiction and a couple of others, was this question of whether or not the AI output, in this case the actual actions of the model in engaging with the teenager, was speech, in particular a type of speech that that teenager and other consumers had a constitutional right to receive that cannot be abridged. The court ultimately held no, it was not. Although it's worth noting the judge essentially said, I am not willing to conclude this at this stage, so she wasn't willing to grant a motion to dismiss on that ground. That doesn't mean that she, I believe the judge is a woman, will not circle back to reevaluate it later in the litigation, but at least at this point she was not willing to do away with the case on that basis and did not see the output as speech.
Alan, let me turn to you first on this. Talk to me about your reaction to this opinion. How do you think this falls out? I know you're a First Amendment scholar, in the tech context and outside the tech context. How do you feel about what is obviously a difficult, complicated issue? I have conflicting instincts, but I'm curious if you do as well.
Alan Rozenshtein
Yeah. So I don't love it. I don't love the idea that the output of these chatbots is not protected by the First Amendment. And I want to be very clear: what the court could have said is that these chatbots are not protected by the First Amendment, or that the chatbots don't have First Amendment rights. And the problem, if you frame the issue that way, is that you get into this, I don't want to say metaphysical, but very complicated debate about, like, well, do the chatbots have rights? Not really. They're probably not sentient. I don't even want to get into that. Do the companies have these rights? Well, the companies don't really control the chatbots. They kind of train them and let them go off into the ether. But this, to me, is actually the wrong way of thinking about it, because speech can still be valuable even if there's no speaker whose rights we necessarily care about. Right? So Eugene Volokh, who's at the Hoover Institution at Stanford and I think is sort of the greatest First Amendment scholar working today, has this nice analogy, as he's written and talked and thought about this particular case, about the works of William Shakespeare. If the government tried to ban the works of William Shakespeare, whose rights are being implicated? William Shakespeare has been dead for a long time, so I don't think thinking of William Shakespeare's rights makes a lot of sense here. Obviously, it's our rights as readers of William Shakespeare that are being implicated. Now the question becomes, okay, we need some sort of test to figure out what sort of content is of the kind that the First Amendment should protect versus what it should not protect. Because if you just say, look, any content that we can learn from is First Amendment protected?
That sweeps way too much stuff in, because I can learn from almost any activity, right? That's the beauty of the human mind. So what the First Amendment does is say, okay, certain kinds of content, which is what we call speech, are going to be given special protection, because we need some dividing line; we can't have the First Amendment apply to everything. And what we have found historically is that things that are language, and language-like, so music and dance, obviously not language, but with a kind of linguistic-adjacent structure to them, we're going to call speech. We're going to give them these extra First Amendment protections because they are particularly useful for developing our intellectual faculties. Right. Okay, so how do we define speech? Well, the usual test is that it is expressive conduct. Right, that term, expressive. And I think what this court did was focus a little bit too much on the literal meaning of expressive, because it said, well, look, a chatbot cannot be expressing anyone's thoughts because it's just a program, and it doesn't express its creator's speech either. Therefore, the court was not willing to say that the First Amendment should apply here. But I think the more interesting question is: is chatbot output the sort of output that, when people hear it, they benefit from in the way that they would benefit from analogous speech that was, in fact, expressive of someone? And I think the answer there is obviously yes. These chatbots are very sophisticated. They're very sophisticated dialogic agents. Like, I talk to these chatbots all the time, right, to brainstorm ideas. They're unbelievably valuable. The fact that they don't express anyone's speech is actually irrelevant to the question of whether they benefit me the way expressive speech does. So, you know, to me, I think that these chatbots should absolutely have.
The output of these chatbots should have First Amendment protections. Now, that's only the beginning of the inquiry, right? Because I also think that these chatbots should be regulated quite heavily, especially when it comes to children. And there are two doctrinal ways you can do that. One, you can say, look, they have First Amendment protections, but you can still have tort liability. Right? It's going to be less tort liability than for non-expressive content, but you can still have tort liability, and we're going to have to tune that and figure out some way of doing it. I'm a big fan of intermediate scrutiny. I'm kind of lazy: just put everything through intermediate scrutiny and do general, you know, proportionality balancing.
Scott R. Anderson
Just to clarify, what you mean by this, I think, is that you can have tort liability because there are exceptions to what is protected speech where you can have certain types of liability.
Alan Rozenshtein
Well, so not quite. So, not to get too in the weeds, but some things that ordinarily are speech are just not First Amendment protected at all. For example, defamation and libel, right, or true threats or copyright infringement. None of that applies here. So what I would say is, no, this is First Amendment protected, and so we're going to have a higher bar under our usual negligence test, but it's not necessarily strict scrutiny. So we're still going to be able to say, look, this is protected, but if a bunch of people are killing themselves, then we're going to squint at this and potentially allow regulation, and potentially serious regulation. To me, the more obvious way to do this is to say that children just don't have nearly the same sort of First Amendment entitlement that adults do, which is in fact true. They don't, in fact, have the same First Amendment entitlements. And I am very comfortable with that, and I kind of wish, and the court didn't do this at all, which really surprised me, I wish that the court had framed this as a child protection case. Right? Because I think it is totally consistent to say, look, chatbots are absolutely First Amendment protected stuff. Right. They're super valuable. Right. But children, you know, 14-year-old boys who fall in love with a Daenerys Targaryen chatbot, right, and this is the kind of tragic fact pattern here, we should be very willing to regulate that quite substantially. Now, that poses its own very complicated challenges. I'm not saying that makes all the problems go away, but that's what I would have liked to see here. But I do not like the idea that these chatbots just, like, don't have First Amendment protections. Because, to be honest, going forward, a huge amount of the intellectual exchange that people are going to be having, whether you like it or not, is with chatbots.
Kevin Frazier
Yeah. And that's the point I would echo most: we are going to see so much increased reliance on AI as tools of exchange and expression. I mean, we can't even contemplate just how reliant we are going to become on AI agents in the near future, one to two years, three to five years. And that's my main concern here, that we may be over-indexing on this specific case. And I do just want to pause and say the facts of this case are horrific, and they are absolutely cause for alarm. In particular, I think now is the time to increase, and I know I'm a broken record, AI literacy, AI literacy, AI literacy. Parents should know about these tools. Parents should know about AI persona chatbots that are designed in a way that allows for increased engagement and that are trying to keep those kids glued to those tools for as long as possible. But that doesn't necessarily mean that now's the time to try to stifle the development of tools that, for example, are targeted toward kids. To go way into my own weeds, sorry if this is too personal: I had an eating disorder as a child and didn't like therapy at all. Hated therapy, dodged it like a bullet. Wouldn't talk about my issues with my parents, wouldn't talk to my friends about my issues. The tools that are being developed right now, therapy bots, for example, that may be specifically targeted to kids, I know that raises a lot of issues. But having been through mental health issues as a young child, that could have been the tool that helped me get through my difficulties way sooner. And figuring out which of these tools is going to be pro-kid or anti-kid, that's a really tough line to draw. And I want to make sure we are drawing that line and working on drawing that line. But I don't think it should be coming from a district court judge sitting in Florida whose decision may have huge ramifications for the rest of the industry.
Alan Rozenshtein
I just feel like, for Lawfare, our history with district court judges sitting in Florida has just been very bad.
Scott R. Anderson
Orlando. She's in Orlando.
Alan Rozenshtein
Yeah, I guess that's right. Orlando's fine. As long as it's not West Palm Beach.
Kevin Frazier
Still Florida. Miami is not Florida, but Orlando is definitely Florida.
Scott R. Anderson
So let me sharpen this a little bit, right. Part of what's hard about this case is this tricky question. There's a separate, really hard First Amendment question about this idea of, like, coaching someone to suicide, right, that intersects here to some extent. We know there was a really complicated case, I think out of the Supreme Judicial Court of Massachusetts maybe four or five years ago, I think it's Commonwealth v. Carter, if I'm looking at my notes, I think I have it scribbled down here. It was essentially a case where a young woman bullied her boyfriend into committing suicide. And I believe the state courts in Massachusetts basically held, nope, that is involuntary manslaughter, or maybe voluntary manslaughter, I can't remember, but it is not First Amendment protected in that scope. And the Supreme Court never ruled on it, but denied cert. So we know there's this kind of ongoing debate here. So let's take it to a slightly different context, right? Let's say I'm talking to a chatbot about my, let's say, nutrition, right? Like, I want to learn ways to lower my cholesterol and blood pressure, right? And the chatbot tells me, bacon, bro.
Alan Rozenshtein
Drink a lot of Monster.
Scott R. Anderson
Yeah, drink a lot of Monster.
Alan Rozenshtein
A lot of Mountain Dew.
Scott R. Anderson
Mountain Dew. All the caffeine you can handle, bacon, all the fat you can deal with. I'm vegetarian, so I'm biased towards bacon examples, I keep thinking of that, but you know what I mean: it just gives me bad medical advice. And look, this sounds crazy, but it's not implausible if we have AI bots developed to, like, market off-brand pharmaceuticals or non-FDA-approved dietary supplements. Huge industries already exist convincing people to do these things. AI is going to be a part of that at some point, right? So where does the liability come in for this? Maybe it's a product design issue, the idea that it's not the speech at issue, it's something about the product design. But there has to be some hook for liability here, and the First Amendment can't be a shield for that. So, you know, is it the same as if it were just an individual giving that bad medical advice and you would sue them, and that's just not protected speech? It's hard for me to distinguish why this should be protected speech and that shouldn't necessarily be.
Alan Rozenshtein
So this gets at a kind of meta-debate in the First Amendment, which is: what do you want the structure of First Amendment doctrine to be? And you sort of have two choices. One option is you want it to be relatively narrow. It doesn't apply to a lot of things, but where it applies, you get something like strict scrutiny, and it's really, really hard to overcome. Right. You can think of that as the narrow-but-deep First Amendment, and that has certain advantages. The other, and this is the one I prefer, is that the First Amendment applies to a lot of stuff. A lot of stuff is presumptively First Amendment protected because it involves speech, it involves intellectual thought, which is really, to me, the whole point of the First Amendment: to give a little boost to people's power of intellectual self-improvement. But because it applies to so much, it doesn't just apply strict scrutiny to everything; it has to apply in a more, you know, balance-y, proportionality kind of way. I'm channeling my inner Justice Breyer, you know, his 73-factor-test thing. And I generally prefer the latter. Now, what I would expect, but this will be a generation-long project, is that courts will have to go through every single fact pattern that is in your standard thousand-page First Amendment treatise and say, okay, we had this rule when our speech environment was such, when the amount of speech was such, when the velocity of that speech was such, and when the harms and benefits of that speech were such. AI is going to change that in certain ways. Right. You're going to have a lot more speech, that speech is going to be a lot more personalized, that speech is probably going to be overall more accurate, right, but for certain specific cases it's going to be much less accurate. Okay, how do we recalibrate our First Amendment rules for that? I think that's a good 30-year research agenda for the courts and for scholars.
But what I don't think is the answer, where I doubt that would be the right outcome, is to say that anything coming out of a machine is not First Amendment protected, because of the point that Kevin raised: within five years we're just going to be talking to AI systems all the time about everything. And the idea that none of that is First Amendment protected strikes me as really wrong. The kid issue is really interesting. And Kevin, I really appreciate your story, and it really does sound like you could have benefited. I mean, my intuition, and again, I'm very curious, Scott, because Scott and I both have two very small children, the idea of letting these systems anywhere near my children strikes me with such absolute dread, like an almost physical freak-out that I can't overcome. But to your point, Kevin, it's also possible, you know, that whether it's a mental health issue or teaching my kids to read or whatever the case is, these tools will be amazing. But I don't know, man. I just get the heebie-jeebies. And I know that heebie-jeebies is not exactly a recognized doctrinal argument in First Amendment law, but I get the absolute heebie-jeebies about saying that there's a First Amendment right for these chatbots to target my children. But you might be right.
Kevin Frazier
Well, and that's the potential I want to preserve, that I might be right. Because I acknowledge I only have a niece and nephew, so I can't claim fatherhood.
Alan Rozenshtein
Yeah. The difference is you can give the children back at the end of the day. Scott and I are just stuck with them.
Kevin Frazier
You take them back. Yeah, yeah.
Alan Rozenshtein
You break them, you don't have to buy them. It's not that relationship that Scott and I have with our kids.
Kevin Frazier
See, my concern, though, is that the sorts of remedies that some folks are talking about in this space are moratoriums, to bring up moratoriums in a different way: moratoriums on any AI tools designed for kids, or disgorgement of the actual algorithm or the underlying data. To me, those go too far, right? I agree that we need to think about how we design safe products, how we impose liability on folks who are consciously disregarding signs that, for example, these tools lead to abuse or lead to bad outcomes for kids. I'll also say, as Alan was noting earlier, and something that Alan and I have talked about ad nauseam, the key for using AI tools, in my opinion, is moderation. If you are an extreme user and you become over-reliant on these tools, of course it's going to lead to bad outcomes. The opposite is true as well: if you never use these tools, you're going to miss out on a lot of potential benefits. And so finding that right line is really important, and finding that right line is something that every parent should be talking about with their kids, who are going to find these tools. That's another thing, let's just be honest. When folks say, oh, we'll just hide the AI tool under the mattress: guess what, kids look under the mattress. They're going to find that tool. They're going to find access to these. So we can't say, oh, I'll just impose some parental control over this. They're gonna get their own burner phone or whatever kids do these days. They're gonna find access to these tools.
Alan Rozenshtein
I guess so. But, look, and again, obviously, at a certain level of generality, you're totally right, but honestly, that just sounds like we're gonna recapitulate what we basically allowed social media companies to do, right? Which was to put the entire onus on parents to keep their kids from using Instagram or using social media. And obviously the research on this is debated, but I fall fairly squarely into the Jonathan Haidt camp, which is that, on net, this has been very bad for kids. And it was exactly this sort of, oh, let's have parents do this, let's do this through literacy, which, again, there's obviously a place for. But an individual parent is just never going to be able, frankly, to compete with the billions of dollars and trillions of IQ points, now AI-augmented IQ points, of Silicon Valley companies trying to create really engaging, which of course just means addictive, that's all that word means in Northern California, tools for our kids.
Kevin Frazier
I am not opposed to that. Just to put that out there. Yeah.
Scott R. Anderson
Well, folks, that brings us to the end of our time for this week. But this would not be rational security if we did not bring you some object lessons to ponder over in the week to come. Alan, what do you have for us?
Alan Rozenshtein
So I am reading a wonderfully random history book that I picked up at a used bookstore, which is, like, the best use of a used bookstore: random discount history books. It's called The Great Game, by the British journalist Peter Hopkirk. It was written in the early 90s, but it is basically a very entertaining narrative history of Tsarist Russia and Great Britain's kind of cloak-and-dagger competition over Central Asia, sort of Afghanistan and what became Uzbekistan, Tajikistan, and Kazakhstan, as Tsarist Russia was heading south to expand its domain and the Brits were really worried that it was going to go into India. It's just a really fun history about a topic I know nothing about. The characters are great. It's very kind of swashbuckling. But it's also, to me, just such an amazing example of the "you can just do things" maxim. Because what I did not fully realize was that at the time, in the early 19th century, when India was largely run by the East India Company, which is sort of this corporation, kind of a governing entity, you would have these situations, like the real story here, where they would import some English veterinarian to improve their horse breeding program, which was obviously of importance. And that veterinarian would wander into, you know, Kazakhstan, because Central Asia is where horses are from, to find better horses. And in the process of just wandering around and talking to a bunch of, like, petty dictators, or whoever was running the region at the time, he would end up setting British policy for the next hundred years, because he just happened to be there. Which is an insane way to run geopolitics, but how it was run for much of the 19th century. It is totally wild and extremely entertaining.
Scott R. Anderson
Fascinating. I love 19th-century weird colonial stories because they are so bizarre. And when you work the American angle into it, American colonialism, it's a particularly weird angle on it. Always an interesting read, I gotta say.
Alan Rozenshtein
The big lesson for me is just it's all warlords. It's like warlords on this side and that side, and it's Russian warlords and British warlords and Afghans. Just warlords.
Scott R. Anderson
So, warlords, warlords all the way down. There you go. Well, for my object lesson this week, I am once again singing the praises of one of my favorite publications, The Economist. I've shared it here before. I've been a lifelong Economist subscriber, more or less, a couple lapses here and there because it is very expensive, so college got a little steep. Regardless, I've subscribed to it and read it regularly.
Alan Rozenshtein
High school speech and debate. You got it. You got to be into the Economist.
Scott R. Anderson
Oh, of course. That's how I got into it, exactly. It populated most of my Model UN position papers for most of high school. But the one downside of The Economist, as an adult subscriber with two young children, as Alan's already noted, is that I don't have time to read it every week. Not even close, honestly. Sadly, I get through about every other issue at most, and even then mostly skimming. But they have a phenomenal tool I've just discovered. It's been around for a while, but I just tuned into it, and it is great. It's available for subscribers and definitely also free for students, if you have any students in your life between 16 and 22, I think, maybe grad students too. It's called Espresso. It is an app that basically condenses a lot of their main stories and headlines into an audio podcast, essentially, that you can tune into daily. It gives you phenomenal news. It's very kind of NPR-esque, but very focused on content delivery and very sharp. It's an easy, pleasant listen. It's become part of my morning routine the last few weeks since I discovered it, and I highly recommend it. So check out Espresso by The Economist if you are a subscriber or a student between 16 and some indeterminate age in the future. Kevin, what do you have? Bring us home.
Unknown Speaker
Yeah, I was just going to say that's what my wife does with everything I say. She just condenses it into a podcast.
Alan Rosenstein
And then doesn't listen to it and.
Unknown Speaker
Then just downloads it. I get the download count, but then.
Kevin Fraser
Marks it as listened and then just straight to spam.
Unknown Speaker
Straight to spam. No. So my object lesson is about America's greatest games, baseball and softball. The Longhorns are crushing it in the postseason, and I know Alan doesn't get enough sportsball.
Alan Rosenstein
The Longhorns are... You can't just say "Longhorns" and assume that I have any idea what you're talking about. A cow of some sort?
Unknown Speaker
Dude, the Texas Longhorns.
Alan Rosenstein
Oh, that's Texas. That's the "Hook 'em." Okay, got it.
Unknown Speaker
Now I know this is... okay, we're gonna.
Kevin Fraser
Thought it was about fishing.
Unknown Speaker
A separate podcast, just one where I explain sports to Alan. But we'll do that another time. So, I can't wait for the World Series for both baseball and softball. Give it a watch. Whether it's baseball or softball, it's really entertaining. The softball games are usually super high scoring, so fun to watch there. Anyway, that'll be how I'm occupying my time, other than preparing for podcasts and readying myself for Alan's arguments.
Scott R. Andersen
Well folks, with that note, that brings us to the end of this week's episode. Rational Security is of course a production of Lawfare, so be sure to visit us at lawfaremedia.org for our show page, for links to past episodes, for our written work and the written work of other Lawfare contributors, and for information on Lawfare's other phenomenal podcast series, including Escalation, now available in a podcast app near you on our Lawfare Presents feed. Just search for Lawfare Presents for the separate feed where we put our narrative podcasts. I realize I should have been saying that the last couple of weeks I've been advertising this podcast. While you're at it, be sure to follow Lawfare on social media wherever you socialize your media, be sure to leave a rating or review wherever you might be listening, and sign up to become a material supporter of Lawfare on Patreon for an ad-free version of this podcast and other special benefits. For more information, visit lawfaremedia.org/support. Our audio engineer and producer this week was Noam Osband of Goat Rodeo, our music, as always, was performed by Sophia Yan, and we were once again edited by the wonderful Jen Patja. On behalf of my guests Alan and Kevin, I am Scott R. Andersen, and we will talk to you next week. Until then, goodbye.

Want a workout that actually works? Hydrow delivers a full-body workout that hits 86% of your muscles in just 20 minutes. Rowing with Hydrow combines strength and cardio with thousands of workouts led by Olympians in breathtaking locations. No wonder 9 out of 10 members are still active one year later. Try Hydrow risk-free at hydrow.com and use code ROW to save up to $475 off your Hydrow Pro Rower. That's H-Y-D-R-O-W dot com, code ROW.
Rational Security: The “Hi, Robot!” Edition – Episode Summary
Release Date: May 28, 2025
Introduction
In this episode of Rational Security, hosted by Scott R. Andersen with co-hosts Alan Rosenstein and Kevin Fraser from the Lawfare Institute, the discussion centers around the rapidly evolving landscape of artificial intelligence (AI) and its implications for national security, policy, and regulation. The episode delves into three main topics: the diffusion of AI technology under different U.S. administrations, federal preemption of state AI regulations, and a landmark court case concerning AI and the First Amendment.
1. AI Diffusion Policy: From Biden to Trump Administration
Timestamp: [03:25] – [22:17]
Overview: The conversation begins with an analysis of the Biden administration's AI diffusion policy, which aimed to control the spread of advanced AI technologies by categorizing countries into tiers based on their strategic alliances. This policy sought to limit the export of high-end AI components, such as semiconductors, to non-allied nations to maintain U.S. supremacy in AI development.
Key Points:
Diffusion Explained: Diffusion involves exporting AI-related components like semiconductors and allowing AI companies to operate overseas. The policy questions how widely U.S. AI capabilities should be shared globally.
Biden’s Tripartite Division: Countries were classified into three tiers: close allies with largely unrestricted access to advanced AI chips; a middle tier of most other countries, subject to export caps; and arms-embargoed adversaries, such as China and Russia, that were largely cut off.
Trump’s Repeal: The Trump administration repealed Biden's diffusion rules, easing restrictions to promote free trade and global transfer of AI technologies. This shift includes notable deals, such as the recent agreement with the UAE to transfer advanced semiconductors.
Notable Quotes:
Alan Rosenstein: “No one has any idea what the right margin is for all of this. It’s very, very important to appreciate.”
Kevin Fraser: “The Trump administration appears to be saying that we're going to move a little bit more away from this control aspect... the current structure was unworkable, too bureaucratic and potentially stifling to American innovation.” [07:42]
Discussion: The hosts debate the effectiveness and implications of these policies. While acknowledging the Biden administration’s intentions to curb AI diffusion to strategic rivals like China, they express concerns over the practicality and long-term impacts of such restrictive measures. The repeal under the Trump administration is viewed as an attempt to foster innovation and maintain economic competitiveness, albeit amidst criticisms of undermining global alliances.
2. Federal Preemption of State AI Regulations
Timestamp: [35:12] – [84:49]
Overview: The discussion shifts to the legislative arena, focusing on a provision in the reconciliation bill passed by House Republicans that seeks to prevent states from enacting their own AI regulations for the next decade. This federal preemption aims to create a unified national framework for AI governance but raises questions about accountability and the role of states in regulating emerging technologies.
Key Points:
Purpose of Preemption: To avoid a fragmented regulatory environment where states impose varying standards on AI, potentially hindering national competitiveness and creating compliance challenges for AI companies.
Arguments For Preemption: A uniform national framework avoids a patchwork of inconsistent state standards, reduces compliance burdens for AI companies, and protects national competitiveness.
Arguments Against Preemption: States serve as laboratories for policy experimentation, and centralizing regulatory power risks inefficiency, weakened accountability, and a lack of solutions tailored to regional issues.
Current Legislative Status: While the House has passed the preemption provision, it is expected to be removed from the Senate version due to the Byrd Rule, which restricts the inclusion of provisions extraneous to the federal budget in reconciliation bills.
Notable Quotes:
Kevin Fraser: “There are appropriate circumstances where the federal government should preempt state legislation... But in an era where we're really seeing a lot of state legislation debated, but I don't see a lot of it actually being enacted successfully.” [57:41]
Alan Rosenstein: “You either believe in federalism or you don't. You don't have to. They're perfectly normal countries…” [53:10]
Discussion: Alan and Kevin exchange views on the merit of federal preemption versus state-level regulation. They acknowledge the complexity of balancing national interests with local autonomy. Alan emphasizes the potential risks of states enacting uncoordinated regulations that could stifle innovation or create inconsistent standards. Conversely, Kevin highlights the value of state experimentation and the dangers of centralizing too much regulatory power, which could lead to inefficiencies and a lack of tailored solutions for specific regional issues.
3. AI and the First Amendment: A Landmark Court Case
Timestamp: [66:12] – [89:17]
Overview: The final segment covers a significant court case from the Middle District of Florida involving the tragic suicide of a teenager allegedly influenced by AI-powered chatbots. The court's decision not to dismiss the case on First Amendment grounds sets a precedent for AI accountability and the scope of free speech protections.
Key Points:
Case Details: A teenager interacted with AI bots modeled after Game of Thrones characters, which purportedly encouraged him to commit suicide. The court concluded that the AI's output was not protected speech under the First Amendment.
Legal Implications: By declining to treat the chatbot's output as protected speech at this early stage, the court allowed the suit to proceed, opening the door to product-liability claims against AI developers and signaling that AI-generated output will not automatically receive First Amendment protection.
Host Perspectives:
Alan Rosenstein: Emphasizes the need for First Amendment protections for AI-generated speech while advocating for robust regulation to prevent misuse, especially concerning vulnerable populations like children.
Kevin Fraser: Stresses the importance of balancing innovation with accountability, arguing against broad moratoriums and highlighting the potential benefits of AI when used responsibly.
Notable Quotes:
Alan Rosenstein: “I think that these chatbots should absolutely have the output... should have First Amendment protections.” [68:15]
Kevin Fraser: “There's a separate really hard First Amendment question about this idea of like kind of coaching to suicide, right." [72:27]
Discussion: The hosts debate the court’s rationale and its broader implications for the AI industry. Alan argues for recognizing the expressive value of AI outputs, suggesting that AI conversations can be as beneficial as traditional speech. He advocates for regulatory measures that protect users without stifling the technological advancements of chatbots. Kevin, however, raises concerns about the potential for harm and the need for liability frameworks that ensure AI developers are held accountable for malicious or harmful outputs. Both agree on the necessity of nuanced regulation that safeguards public interests while fostering innovation.
Conclusion
In this episode of Rational Security, the Lawfare team navigates the intricate terrain of AI policy, balancing national security interests with the imperatives of innovation and personal accountability. From dissecting AI diffusion strategies across different U.S. administrations to grappling with the federal versus state regulatory debate, and finally confronting the legal challenges posed by AI-generated speech, the discussion underscores the multifaceted impact of artificial intelligence on law, policy, and society.
Closing Remarks: The hosts encourage policymakers and stakeholders to engage thoughtfully with these issues, emphasizing the need for informed and balanced approaches to AI governance that protect public interests without hindering technological progress.
Resources and Further Reading: For more insights and analyses on national security, law, and policy intersecting with emerging technologies, visit lawfaremedia.org and explore Lawfare's other podcast offerings, including Rational Security, Chatter, Lawfare No Bull, and The Aftermath.