Transcript
A (0:00)
Mike Horowitz of the University of Pennsylvania, formerly with Biden's DoD. We didn't get enough on Monday on autonomous weapons systems; this whole Iran war thing got in the way. So we both thought it would behoove the audience to do a little bit of a 101 on what these things are, how they kill people, and just how autonomous the world is in 2026 and perhaps beyond. So Mike, take it away: how would you characterize where the fear lies for the well-meaning researcher or head of an AI lab who thinks their technology, used for certain types of autonomy, would be a bad direction to go? And maybe contrast that with how this stuff is used today in Ukraine and Iran.
B (0:58)
I think that the average, maybe Silicon Valley, AI safety researcher who's worried about autonomous war bots is probably worried about AI essentially making the decision about who lives and who dies, and thinks that's some dystopia they don't want any part of. And so they get worried about the incorporation of AI into the pointy end of the spear for militaries, especially when it comes to potentially selecting and engaging targets.

What I think sometimes gets lost in the conversation is the substantial degree of autonomy that already exists in modern weapon systems. The US military, and something like 40 militaries around the world, have deployed autonomous weapon systems since the early 1980s. These are often automated systems using essentially deterministic, good old fashioned AI. Ships carry these enormous Gatling guns called the Phalanx that can operate by algorithm: if too many threats are coming in, say too many missiles about to hit a ship, an operator can basically flip on the algorithm, which can automatically target and hit those incoming threats.

You also have semi-autonomous weapon systems that fall into the category of fire and forget munitions. Think about how a radar-guided missile works. A pilot believes there's an adversary radar that's a legitimate target and presses the launch button. The radar-guided missile fires; after going a certain distance, it turns on a seeker, detects a radar, goes in, and destroys the radar. There's no human supervision or control of any kind after that weapon is launched. And hey, maybe that radar's on top of a school. Maybe that radar is on top of a hospital.

So that's the status quo, in some ways, of autonomy in weapon systems, and those kinds of technologies have been used since the 1980s. We tend to think they're way better than what came before, which was essentially the area bombing of World War II. There's already a lot of autonomy in weapon systems, which makes this conversation about what we don't want AI to do in the weapons space a lot harder, because it can be challenging to talk about it without inadvertently wrapping in all of these existing weapons, which we generally think are good, more or less, in a world where we support military action, because they're both more effective and more accurate, making things like civilian casualties generally less likely.
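To make the distinction concrete, here is a minimal Python sketch of the two autonomy patterns Mike describes: an operator-armed automatic-engagement rule (Phalanx-style) and a fire-and-forget seeker. Everything in it, the Track structure, the function names, and the thresholds, is hypothetical; it models the control flow of a deterministic engagement rule under stated assumptions, not any real weapon system.

```python
"""Illustrative sketch only: a toy, deterministic model of the two kinds of
autonomy discussed above. All names, thresholds, and data structures here are
hypothetical; real systems like the Phalanx CIWS are vastly more complex."""

from dataclasses import dataclass


@dataclass
class Track:
    """A hypothetical sensor track of an incoming object or emitter."""
    track_id: int
    is_radar_emitter: bool   # stand-in for whatever signature a seeker keys on
    distance_km: float


def phalanx_style_auto_engage(tracks: list[Track], auto_mode: bool) -> list[int]:
    """Phalanx-style close-in defense: once a human operator flips auto_mode
    on, a fixed rule engages every track inside a defended radius with no
    further per-target human decision. Purely deterministic 'good old
    fashioned AI': no learning, just hard-coded thresholds."""
    if not auto_mode:
        return []  # the human has not delegated engagement authority
    DEFENDED_RADIUS_KM = 5.0  # hypothetical engagement envelope
    return [t.track_id for t in tracks if t.distance_km <= DEFENDED_RADIUS_KM]


def fire_and_forget_seeker(flight_distance_km: float, seen: list[Track]) -> int | None:
    """Fire-and-forget anti-radar munition: the pilot's only decision is the
    launch itself. After a set fly-out distance the seeker turns on and the
    weapon commits to the first radar emitter it detects; there is no human
    supervision or abort after launch."""
    SEEKER_ACTIVATION_KM = 20.0  # hypothetical fly-out before the seeker arms
    if flight_distance_km < SEEKER_ACTIVATION_KM:
        return None  # still in fly-out; the seeker is cold
    for track in seen:
        if track.is_radar_emitter:
            return track.track_id  # terminal homing: no check on what the radar sits on
    return None
```

The point the sketch makes is that neither function consults a human after the initial decision: in one, the operator's choice is flipping auto_mode on; in the other, it is the launch itself. The final comment in the seeker loop is where the school-or-hospital problem lives.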
