Transcript
A (0:00)
We will not govern AI without AI. That's a weird fact, but it's like also kind of trivially obvious if you think about other general purpose technologies. Imagine trying to govern computers without computers.
B (0:16)
And now The Good Fight with Yascha Mounk. There are so many things going on in the world at the moment that it is hard to keep track. We had a great episode with Francis Fukuyama trying to understand the escalating conflict in the Middle East; I invite you to listen back to that episode if you haven't yet. But the big thing that was going on just before this war started was the extraordinary conflict between Anthropic, the maker of Claude and one of the frontier AI labs, and the organization formerly known as the Department of Defense, now known as the Department of War. There was a very public conflict over whether Anthropic was going to be allowed to limit the way in which the Pentagon uses its technology, and the Trump administration retaliated in a very extreme way against Anthropic when they were not willing to budge on their red lines.

Today on the podcast we have two snippets of a conversation with somebody who really knows a lot about this. Dean Ball is a senior fellow at the Foundation for American Innovation and writes the excellent newsletter Hyperdimensional. He was also the Senior Policy Advisor for Artificial Intelligence and Emerging Technology at the White House, where he was the primary staff drafter of America's AI Action Plan under Donald Trump.

In a lot of this conversation we had a broad discussion about how to think about public policy in the age of AI. This is such a transformational and fast-moving technology that we really don't yet have the categories for what kind of regulation is going to be helpful and what kind is going to be harmful: whether there's a greater danger in overregulating AI, potentially stopping this technology from being useful to people and potentially losing the race against China on AI technology, or whether there's a much bigger risk in underregulating the technology, with it potentially devastating our economy, being very harmful to our public discourse, or developing the capacity to kill humanity. We also talk about the wisdom of some of the different approaches to AI regulation that we've seen so far. As you'll see, it's a really interestingly philosophical discussion about these issues, really trying to apply first principles to thinking through this topic.

We also have, at the beginning of this conversation, an extra 20 or so minutes: Dean, in a very busy week, kindly hopped back onto the recording devices to talk us through the extraordinary events of the last week. And they are particularly interesting because Dean, despite having served in the Trump administration on these very issues, is, as you will see, very critical of how the Trump administration has treated Anthropic in these past weeks. But the question that raises is really a much broader one. I certainly don't want Donald Trump to be in charge of technology that can be used as autonomous weapons without any human in the loop, or that can be used for mass surveillance of American citizens. But nor do I want Sam Altman and Elon Musk to be making decisions about what AI uses are appropriate and what are not. I think there's a deep dilemma here, and we start to tease that out in the beginning of the conversation, which we recorded on Thursday, March 5, before going back to the deeper, more leisurely conversation we had a few weeks earlier. And finally, in the last part of this conversation, we talked about the million-dollar question of AI alignment.
If the technology is about to become superintelligent and it really is technologically impossible to make sure that it is aligned with our interests, is the tragic end of humanity foreordained? Are we basically about to rush headlong towards disaster, or can regulation change that? And why is it that Dean is actually a little bit more optimistic than some others about our ability to make sure that AI systems are aligned? Why is it that the latest model published by Anthropic is a good example of what a wise AI may one day look like? To listen to those parts of the conversation that have a slightly more optimistic angle on existential risk from AI, please go to yaschamounk.substack.com. Please become a paying subscriber.

So, we recorded a really interesting, in-depth conversation about the broader philosophical issues of how governments should or shouldn't regulate AI, and then this amazing news story broke over the course of the last week, with this head-on clash between Anthropic and the Department of War. For listeners who are not in on the details of this, give us a brief summary of what happened there and what the stakes of this fight are.
