Transcript
B (0:12)
Good morning. I'm Justin Hendrix, editor of Tech Policy Press. We publish news, analysis and perspectives on issues at the intersection of tech and democracy. Across the world, governments and other institutions are racing to apply artificial intelligence in countless ways. Woodrow Hartzog and Jessica Silbey, both professors of law at Boston University, argue that the design of these systems, from large language models to predictive and automated decision tools, is fundamentally incompatible with the civic institutions that hold democratic society together, including the rule of law, universities, a free press, and civic life itself. This isn't necessarily because AI is being misused or falling into the wrong hands, but because in most instances, AI is working exactly as intended, and in doing so, eroding the expertise, decision making structures and human connection that give institutions their legitimacy. Let's jump right in.
A (1:04)
I'm Woodrow Hartzog. I'm the Andrew R. Randall Professor of Law at Boston University School of Law.
C (1:11)
I'm Jessica Silbey. I'm the Frank Kenison Professor of Law at Boston University School of Law and the Associate Dean for Intellectual Life.
B (1:20)
I am pleased to speak to you today about this draft, a paper that you have published, "How AI Destroys Institutions," which has already gotten a lot of attention and probably a lot more downloads than you might have anticipated for a draft. We're going to talk a little bit about the details, about the arguments you're making here, and some of the feedback you've already had. But to start, I just want to read the first couple of lines. You state: "If you wanted to create a tool that would enable the destruction of institutions that prop up democratic life, you could not do better than artificial intelligence. Authoritarian leaders and technology oligarchs are deploying AI systems to hollow out public institutions with astonishing alacrity." Why do you come out swinging? This is not couched language. This is not the wishy-washy type of "in some circumstances AI might be a danger to democracy," or "if we go down a certain line, this may emerge." You're saying the threat is here, that it's urgent, and that what's happening is on purpose.
A (2:29)
Yeah, that's right. So I suppose the simple answer to that question is we call it like we see it. But the longer answer is probably that we've been in the game long enough to see that if you don't come out swinging, then history is going to repeat itself. When I first started writing in law and technology, I was relatively equivocal. I would say, oh, there are some goods and bads, and we just have to make sure to put the guardrails up, and then everything will be okay as long as everyone acts responsibly and we have some good, sound, common-wisdom rules, right? And that was all well and good, and then tech companies decided that was just a free pass to do whatever they wanted to maximize their profits. We saw them routinely leverage the uncertainty of the current moment to stall long enough to get people acclimated to these technologies and particular business models, and dependent upon them. Once that happens, there's reduced political accountability and the chance for meaningful rules goes away. Any time a new technology comes out, there is a window where some meaningful rules might get passed, where you've got some potential political accountability. But the longer time goes on, and the more tech companies can just sort of stall and run out the clock, the less time there's going to be to meaningfully create a rule that pushes against the harms. In STS studies, this is called the Collingridge dilemma, where lawmakers are in a bit of a double bind: if you regulate too early, then maybe you squash the potential benefits of a particular technology, but if you wait long enough, people are acclimated to and dependent upon the tool and there's nothing you can do about it. I've heard that called the avocado problem: it's not ripe, not ripe, not ripe, and then suddenly it's too late. And so that's why we came out swinging.
