A
This is Practical AI, where we break down the real-world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm. Now onto the show.
B
Welcome to another edition of the Practical AI podcast. I am your co-host, Chris Benson, an AI and autonomy engineer at Lockheed Martin, and today with me as guest is Ben Buchanan, who is an assistant professor at the Johns Hopkins University School of Advanced International Studies. He was previously the White House Special Advisor on AI to the Biden administration. He's the author of four books, one of them about to come out, called The Bitter Struggle, and he also authored a recent article for Foreign Affairs magazine called "The AI Grand: What America Needs to Win the Innovation Race." Welcome to the show, Ben.
C
Thanks for having me.
B
Appreciate it. So I'm kind of curious if you could tell us a little bit about that; it's a fantastic set of things you've done professionally. When people have these kinds of amazing backgrounds, we often like them to start off by just telling us how they got to where they are and why they're passionate about the topic, before we dive into policy issues. Just to get a little personal spin on your background there.
C
Well, the cool thing, at least in my view, about a lot of the jobs I've had is that they didn't exist before I had them. There wasn't a White House Special Advisor for AI and the like. So it's been a real adventure, but it's not the kind of thing I tried to plan out. The short answer is: by accident and by luck. I really got into AI in 2013, 2014, 2015, when I was doing my PhD on cyber operations, how nations hack one another and what that means for international affairs. We were just transitioning as a society, or as an AI community, from an older paradigm to a newer machine learning paradigm then, and I noticed it happening. There's a period in a PhD when everyone's kind of sick of their own subject and looking for things to procrastinate with, and for me that was AI, at a time when it was not really a policy subject. Over the last 10 or 15 years it's become a bigger and bigger interest, and now, of course, it's salient to mainstream policy in a way I never could have imagined. But for me it was just a hobby and something that was intellectually fascinating 10 or 15 years ago.
B
So as you kind of arrive at where you've been, I'm kind of curious, as you've gone through that process, what would you say as you've dived farther and farther into the AI world and become an advisor to people and written about these topics, is there a particular kind of through line to your work that you would say is kind of thematic on the topics that you tend to address and the things that are of interest to you?
C
Well, the thing about AI that makes it so interesting from a policy perspective, and this was the through line of all the policies we worked on in the White House, is that this is really the first revolutionary technology in probably the last hundred or so years that comes from the private sector. If you think about the dawn of the nuclear age or the space age, or radar, satellites, GPS, jet aviation, and so much else, all of those technologies in their early days really came from the Department of Defense. They weren't necessarily the ones inventing it, but they were the ones funding it, out of military necessity in many cases. AI was that way too, if you go back to the 1960s, 1970s, and 1980s, when the US government was footing the bill for everything. But the US government basically got out of the business of big-time funding of AI by the '90s, and then we had this period called the AI winter where not a lot happened. It's only in roughly the 2012-to-present era that the technology came roaring back, as I said, in a different paradigm: machine learning as opposed to traditional AI. And that is the kind of thing that comes from the private sector. It's companies like Google and OpenAI and Anthropic that drive this technology forward. That poses a vexing challenge for the US government, because the government doesn't have the built-in knowledge it had for the previous technologies, since it isn't involved in making or funding them, and it doesn't have the kind of control it used to have over those technologies and where they're going. So there was a real, challenging dilemma that we had to confront as policymakers, and it showed up in everything we did.
B
So, as you talk about it: there were those early days with the government driving, which it stepped out of about the time I actually stepped in, in the mid-90s, before the last AI winter. As you look at this modern era you just discussed, and the fact that this is not like those previous technologies for the government, how has that changed the interactions between the government and the private sector, the defense industrial base, the technology sector? Has it substantially changed the relationship between them?
C
Yeah, it does. And I think the first thing is just a question of policymaker education: how do we get folks in the government to understand what this technology is doing? I had my job as the White House Special Advisor for AI not because I knew the most about AI of anyone in the country; many of your podcast listeners probably know more. I think I know a fair amount, but it was my job to explain things and to say, how can we put this in terms that make sense to policymakers? And how can we view this not just as a scientific or technological question, as interesting as those angles are to me, but as a geopolitical question? If you're explaining this to someone like the National Security Advisor, he's not there for the science project; he's there for what it means for the US-China relationship. That, I think, was the overarching theme. And then it was figuring out, well, what is the technological reality? What is coming from the private sector? The insight we had in 2021 or so, which was actually the basis of a previous Foreign Affairs piece I wrote in 2020, was that at the time everyone was saying data is the new oil and all of that, but it was actually computing power that was driving the bus, not data. And, as we can talk about if you'd like, the United States and its democratic allies have a real advantage in the production of computing power, so that was a place where we could disproportionately benefit democracies with some significant action. And that's what we did. But I think it's the combination of being able to explain things to policymakers and being able to understand what's actually going on at a technical level, when the government isn't inventing the technology, that makes the difference.
B
I'm curious. You mentioned that old line that data is the new oil, and there are variations of that. It's very common for people to try to draw analogies between AI and previous emerging technologies that are now robust, the advent of the Internet being chief among those. When you're explaining policy, you're sitting at the juxtaposition of the technology itself and policy, trying to explain how it affects different aspects of national defense policy. Do you see AI as something very similar to the advent of the Internet, just another technology, another normal technology if you will, or do you see it as something distinct and special in its own right? There are always debates about this, and I'm kind of curious where you come down on it.
C
Yeah, I think there are two questions in there, and they're both really important. The first is, how do you explain it to somebody? And my rule here is I don't use analogies; in almost every context, I try to resist analogies. I think it's Susan Sontag, the essayist and writer, who had this line: to resist metaphor is to endure the thing itself. And I always just say we have to endure AI itself. We have to confront this on its own terms and not through the prism of nuclear technology or the Internet or so much else, even though those are so familiar to parts of the US government; there are entire agencies of the US government that just do nuclear policy. But this is something different. I wasn't always successful in that; sometimes you have to use analogies, and they can have value. But my philosophy was always, let's endure the thing itself, let's confront the thing itself. Then there's the question of, okay, but on the merits, how does this stack up to previous technological revolutions? My view is that this is going to be an extraordinary technology, maybe one of the most significant technologies. You very kindly mentioned my upcoming book, The Bitter Struggle. My previous book on AI was called The New Fire, and that was the kind of metaphor, maybe against my own advice in putting an analogy in the title, that I was reaching for in trying to describe a technology with a really broad range of outcomes, something foundational to humanity. And everything I've seen since I wrote that book, and since I left the White House, has only confirmed, I think, that we are on that trajectory, for better or for worse, as a species.
B
As a follow-up to the same point: you mentioned that compute, rather than data, was really central. We're so used to hearing about data being the new oil driving AI, but you mentioned compute. How should listeners think about compute, maybe differently from the way they have been? The framing is constantly evolving because the technology is moving so fast. Could you talk a little bit about how compute relates to data in the way you're positioning it?
C
Yeah. One of the most important papers in the history of AI is a paper that came from OpenAI in, I think, January of 2020. Everyone calls it the scaling laws paper; "Scaling Laws for Neural Language Models" is the title. And it contains a really important insight, which is that the more computing power you use to train an AI system, the more powerful the resulting AI system. Now, it does scale with data too, so we're not saying data doesn't matter at all, but relatively speaking, the limiting factor tends to be computing power. That is a very important insight, because it shifts AI from this ephemeral thing of algorithms and data, stuff that who knows where it lives, to something that is physical and practical: computer chips. In fact, huge numbers of computer chips, and increasingly the power, the electricity, to run those chips. And that physicality of AI creates geopolitical opportunity. Now, making a computer chip, in my view, is the hardest thing we do as a species. We can talk about the process if you'd like, but you and your listeners probably know that something like 97% of the advanced computer chips in the world are made in Taiwan by a company called TSMC, using incredibly advanced machines from a company in the Netherlands called ASML, as well as from companies in the United States and Japan that also make these chipmaking machines. Well, I just mentioned a bunch of democracies. And it is very fortunate for democracies, maybe as a historical accident, maybe as a credit to our innovation culture, that democracies own the computing supply chain. That created an opportunity, in our view, to say: here's this thing that's incredibly important, very physical, and very hard to make. We can control it, and we can stop a nation like China from taking these computer chips and the AI systems they create and modernizing their military, repressing their population, and building their surveillance state.
And that is what the Biden administration did for our four years. There was a lot of conversation about it in '21, then we put the action into place in '22, and then tightened further in '23 and '24. It was really born of a desire to make sure, especially when it comes to military competition, that the United States and its democratic allies and partners have the advantage that AI can give them.
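[Editor's note: the scaling-law relationship Ben describes, more training compute yielding a more capable model, is typically expressed as a power law in which loss falls as compute grows. A minimal sketch of that idea follows; the constants `alpha` and `k` are made-up placeholders for illustration, not the fitted values from the OpenAI paper.]

```python
# Toy illustration of the scaling-law idea: loss falls as a power law
# in training compute. `alpha` and `k` are illustrative placeholders,
# NOT the values fitted in the OpenAI scaling laws paper.

def toy_loss(compute: float, alpha: float = 0.05, k: float = 2.5) -> float:
    """Return a toy loss k * compute**(-alpha) for a given compute budget."""
    return k * compute ** (-alpha)

# More compute yields lower loss, with diminishing returns:
for c in (1.0, 10.0, 100.0, 1000.0):
    print(f"compute={c:>7.1f}  loss={toy_loss(c):.3f}")
```

The key policy-relevant property is visible even in this toy version: improvement never stops as compute grows, but each order of magnitude buys a smaller absolute gain, which is why access to very large quantities of chips matters so much.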
B
I'd like to follow up on that as well, when we talk about Taiwan. In my own personal experience, if you get outside of the kind of government-centric private sector, which is where I work for a living, and the academic and government sector like yours, and just talk to everyday people, they're often aware of competition between the West and China, and they're sort of aware of Taiwan, but they don't always understand it. You were really hitting at the heart of some of the strategic concerns around that. Could you talk a little bit about why people in AI, or even outside of AI, should care about that kind of geopolitical concern? A lot of people go, "that's politics, I don't care about that." Recognizing everything you just said about the policy you were implementing, what does it mean, for somebody who's not familiar with it?
C
Well, let's start with the importance of computing power. As I said, computing power is what drives AI progress; that's the scaling law. It also drives a huge portion of our economy even beyond AI. Think, for example, about the semiconductor shortage from a few years ago and how that was delaying cars and dishwashers and so much else. So the role of computing power is fundamental in the modern economy. If for some reason you didn't have chips coming out of Taiwan, some of the publicly available analysis I've seen has suggested this would mean trillions of dollars in losses to world GDP. So there is the centrality of computing power to the modern economy, and then there's the centrality of Taiwan to making that computing power. It is an incredibly intricate process to make semiconductors, especially advanced AI semiconductors, and really only TSMC has mastered it. TSMC is the Taiwan Semiconductor Manufacturing Company I was mentioning before, and they're the company that has mastered this, using a supply chain from the United States, the Netherlands, and Japan. So again, something like 97% of the advanced chips in the world come from Taiwan. We did try to change this in the Biden administration. We said there's actually a strategic national security weakness here, and on a bipartisan basis, Congress passed the CHIPS and Science Act to bring chipmaking to the United States, which has begun in Arizona. But Taiwan is still way, way ahead. So they're just of fundamental importance in the computing supply chain, and therefore in national security and the global economy.
B
So I appreciate that; it helps frame a little bit of the whys of why Taiwan is a political concern, coming from somebody who understands both sides of the divide on that.
C
I should say, just for completeness, there are other reasons to care about Taiwan. I'm someone who believes very deeply in democracy, and I'm not saying we only care about Taiwan because they make really cool chips. But if you are a pure realpolitik person, chip production is one reason to care about Taiwan, in addition to the other, more moral ones.
B
So, Ben, I'm curious: as a professor and also a government advisor, how do your thinking and your communications change with those two audiences? As you explain the different aspects of how technology and policy interrelate, what do your different audiences care about, and what is your messaging for them, if you're in the White House versus in a classroom at Johns Hopkins?
C
Well, if you're asking whether my graduate students know more than Congress does, the answer is yes. I won't comment on the White House comparison, but they definitely know more than Congress does. I think there's a difference between teaching in the classroom and engaging policymakers, whether in Congress or in the White House. Policymakers usually don't want the theory; they just want to know what's happening now. And policymakers are usually a lot busier than my graduate students, who have to put up with me for a two-and-a-half-hour seminar every single week. I have sat with President Biden for an hour and ten minutes, maybe an hour and twenty, but I've never sat with him for two and a half hours to go through something. So there's definitely a difference in how much space you get with graduate students. But in many respects they're asking the same questions: where is this technology going? What does it mean for humanity? What does it mean for democracy? And what should we do now? President Biden had much more ability to action "what should we do?" than my graduate students. But I do think they're asking the questions that really all of us should be asking about this technology: its pace of progress, what it's going to mean for us, and ultimately what the policy response is for a technology that's coming from the private sector.
B
And I'm definitely not trying to trap you into the politics of it, but one of the things we have observed and talked about on our show over time is that through different administrations, there have been different AI policies put into effect, and the current one, which came after the one you served in, has a different collection. Do you have any thoughts on how AI policy has developed across administrations? Are we on the right track? What are we missing? What are we on target for? And can you talk about the policy apart from the current political aims of any given administration?
C
Well, one of the things I like about AI is that it is not a partisan issue. I mean, it's getting there, but it was not particularly polarized at the time I was in the government, and we had very good conversations with Republicans. The day after the president signed his executive order, he hosted a bipartisan group of senators in the Oval Office. It was a great conversation. It was actually a little bit of a lesson for me, in that conversations are not always that good, and just because you have a good conversation in the Oval Office does not mean Congress is actually going to do anything. But there are good bipartisan people: Senator Todd Young of Indiana and Senator Mike Rounds of South Dakota, who are Republicans and I'm sure don't agree with Joe Biden on a lot of things, but could engage on this issue. So I don't see it as a partisan thing, even now. Another reason I don't see it as partisan is that the first Trump administration, on some of the national security questions, at least philosophically, was in the same camp we were. And they get a lot of credit for this. Matt Pottinger, Trump's deputy national security advisor, gets a lot of credit for this: he leaned on the Dutch in 2018 to make sure they didn't sell advanced chipmaking equipment to the Chinese. That was, I think, a really good decision. Now, we went much further. We did country-wide bans, we banned the chips themselves, and there are like 15 things we did that went further. But I don't think those are particularly partisan things, and in fact a lot of Republicans agreed with them when we were in office. We have seen a reversal, though, and in some sense I think the delta between Trump 2 and Trump 1 is much bigger than the delta between Trump 1 and Joe Biden's administration on these issues. There's a vibe right now of, let's sell the chips to other countries, including China.
President Trump has said he's willing to sell advanced AI chips to China. Of course, we were not willing to do that. And J.D. Vance, when he gave a speech on AI, said, "I'm not here to talk about AI safety; I'm here to talk about AI opportunity," kind of downplaying the risks of this technology. My successor, somebody named David Sacks, has famously said that the Trump administration's policy on AI is to let the private sector cook. Again, back to this theme of the private sector inventing it. So I do think there are differences now, and you can imagine that's not where I would land the policy. But it is not the case that this is a capital-P partisan issue, as even the first Trump administration shows.
B
As you're looking at this relationship, what would you like to see develop between government and the private sector, as it's meandered across these administrations? And to your point in that last answer, what would be an ideal to aim for between the public and private sectors? There's always that tension. We've seen in the news, as we're recording this, some of the concerns between the Department of Defense, or Department of War, depending on how you're choosing to label it, and a particular organization that doesn't want to put certain models into combat scenarios, which is Anthropic; I guess there's no reason not to name them outright. How do you see that relationship developing over time? What would be a healthy way for it to develop that brings the larger good into the picture? And is there anything that jumps out as something we really don't want to do? I know you just named selling to potentially hostile countries, but I'm curious how you see that relationship developing.
C
Well, I think it depends on whether we're talking about the relationship between the government and AI companies or the government and chip companies. When it comes to the chip companies, companies like Nvidia and the like, our posture was that their technology was so important, so fundamental, and also so scarce that we did not want it going to countries like China, because of the ways it would modernize the Chinese military and the like, and because China really struggles, in part due to the controls on chip manufacturing equipment, to produce similarly powerful technology. According to China's own roadmaps, if you believe the Chinese propaganda, China will not make a chip as powerful as the one Trump has agreed to sell them until 2028. So it's just, as I said before, an extraordinary advantage. And we put in place a policy, one I would defend, of export controls, saying: this technology is scarce, every chip that gets made will get sold, and we want it to go to democracies and ideally American companies. That, I think, is the first aspect of the relationship. Then the question is, what's the relationship between the government and the AI companies themselves, the ones developing the systems? I'll leave the news reports aside; obviously it's a timely subject. But I'll tell you, we thought about this a lot in our administration, and the president signed a document called the National Security Memorandum. It's basically an executive order for the Department of Defense and the intelligence community, with an unclassified component and classified components. The unclassified part is pretty straightforward: it directs these government departments and agencies to work with the private sector and to say, let's bring this technology in from the private sector.
A lot of the ways we did things in the past are outmoded or broken, and we need something newer and more capable, able to keep up with changing times and changing threats. I'm very proud the president signed that in 2024, and I'm very proud of what we did in developing it and then, before we left office, getting the ball rolling, saying there's a way we can work very collaboratively. That does not mean it's no holds barred. President Biden said this is a technology that poses significant risks, and we need to have guardrails in place to make sure it is not misused and the like. And we were very alert in trying to craft those guardrails in a way that gave the Department the flexibility it needed to fight and win wars, but also made us worthy of the values we are defending. That is something the Department of Defense and the intelligence community worked with us on and were totally on board with: saying we're going to use this technology in a way that's consistent with our values.
B
As we talk about guardrails and values, and that's a huge topic right there, especially when you talk about specific applications of AI: the reason I brought up the Anthropic matter in the news before is that it's typical of a concern that people across the political spectrum have about the appropriateness of putting AI into specific cases, such as combat and things like that. What are your own personal feelings about the right place to strike a balance there? What's the responsible place to land in terms of how you match AI up with security-critical or safety-critical applications? Do you have any guidance on where you think things should land?
C
I don't claim a lot of expertise on the particulars, and again, I'm not commenting on the Anthropic case specifically. I do think in our administration we recognized that it varied by use case. There are some use cases, and I don't know if you'd call cyber operations combat (I mean, I would, but I can introduce you to some infantrymen and women who wouldn't), where autonomy is really important. Missile defense is another area where autonomy has historically been very important, and there are probably some great Lockheed systems with an autonomous mode going back to the 1990s for this kind of stuff. So I think it depends on the area. The DOD in our time, and I had nothing to do with this, revised a policy called 3000.09, which talks about appropriate levels of human judgment in this. You're probably very familiar with that policy personally; you likely know more about it than I do. But that was something that was important to us. One thing I was a little closer to, and that I think was important, was to say this is not just the United States deciding this on its own. We need to get a group of nations, ideally a group of democracies, but also broader than that, to work on this problem together. And the answer probably is not that we will have no autonomy in military systems; no one's saying that. But we developed a document called the Political Declaration on the Use of Autonomy in Military Systems, I think, and we got something like 58 countries to agree to it and to the set of principles that would guide that work. So I think that is vitally important as well. Wherever we decide as a nation to draw the lines, it's vitally important that we go and try to set the norms and standards in a collaborative way with the rest of the world, and at a minimum with the rest of the democratic world. And that's something I think we're very proud of having gotten the ball rolling on.
B
One more question along the line of guardrails in general and where to land. There's a lot of debate among people who care about policy and how AI relates to it, in terms of what I'll call speed versus caution: how you set guardrails that are appropriate. And interestingly, I'll make this comment myself so I don't put it in your mouth: I've seen the current administration go both directions on that at different times, which is somewhat confusing to interpret. But in general, you're looking at the private sector racing forward in the development of models and capabilities, as you would expect from the technology sector, and then you're dealing with certain areas that are safety-critical and worrying. It's not necessarily just military topics; there's obviously transportation, and there's even psychological impact, a hot topic these days. I have a daughter who's about to go into high school, and there's the impact of AI playing into systems like social media. So there's a wide array of concerns to address. How can people think about that speed-versus-caution paradigm across all of these different aspects that impact their lives, or that they're watching politicians in Washington, D.C. engage in? As you look at this massive array of possibilities, how do you assess them? Do you have a framework in your own mind for saying this is a good place to go, and this is not such a good place to go?
C
I mentioned at the top of our conversation that in some respects this is the first private-sector-invented revolutionary technology in the last hundred years. I think the last one was the railroad. If you go back to the late 1800s, the railroad had many of the characteristics that AI has: private-sector invented, huge capital expenditures, and, if you go back and read some of the literature, this promise that it would transform the economy. They even said it would transform the climate, that it would make everything wonderful; similar kinds of utopian vibes to what you sometimes get from these Silicon Valley companies. And the early days of the railroad were incredibly bloody. There were train derailments and train deaths. There was no standardization of anything: no time zones, no air brakes on a lot of the trains, not even a standardized track gauge width, so you got all sorts of derailments. There was poor coupling between cars, so trains were coming apart. Thousands of deaths in the early years of the railroad. Eventually, in a very halting, imperfect process, some combination of the government and the private sector worked this out. We got time zones from the railroad companies, and standardized track gauge widths, air brakes, coupling between cars, and the Railway Safety Act from the government. All of this stuff emerged over several decades, and the net result was that the trains were safer, but also that the trains went faster. The reason I give this historical analogy is to suggest that too often, I think, speed and safety are put into tension. Again, look at J.D. Vance's comments: I'm not here to talk about AI safety, I'm here to talk about AI opportunity. Well, my view is that we get AI opportunity through AI safety; not through incredibly cumbersome regulations and the like, but through developing technology that is safe, secure, and trustworthy, that people can trust.
And that is still my general principle, and it shows up in all kinds of different ways. We worked on domestic things like kids' online safety, and the principle, I think, applies across the board. But at a big picture level, that is the most important principle. The second important principle that I thought about a lot is that competition is good between companies, and competition is sometimes even good between nations. But you can have a competition that becomes a race to the bottom. And I was worried, and I think a lot of people were worried, that if you had the United States and China in essentially an arms race for this technology, or perceived to be in one, that would create incentives for both sides to cut corners on safety and to race ahead in ways that might be foolhardy. So part of our thinking was that we wanted to build as large a democratic lead as possible for American companies, to essentially make this a democratic problem, and to then say, well, we'll coordinate, we'll figure out whatever coordination and regulation is necessary within our own borders. This is not a thing where you have democracies and autocracies racing to integrate the technology into the military. Developing a big lead so that you can spend it on safety, and more generally on safety and trust, that was a big part of our philosophy. And again, I think that shows up in the policies.
B
So, Ben, just as we went into break, you were talking about how different geopolitical concerns affect how people perceive a potential arms race in AI. And that begs a larger question, especially in today's climate: the structure of the international order has been, I'll gently say, transformed in recent times, and many of the relationships that for many decades, 80-ish years, we've relied on to coordinate are shifting. Even if that hadn't happened, you have AI technologies and chip concerns that aren't just affecting the US. We've had a somewhat US-centric conversation so far, but you have to get allies working on different policies and try to align those so that we have the kind of safety guidelines you were talking about just before the break. Is it getting harder to do that in today's geopolitical climate? If you were able to influence current policymakers in this administration, how would you guide them toward a safer world with AI, where we're able to get many parties internationally to agree on what those safety guidelines should be? What is the best way to do it? It seems harder with some of those relationships faltering at this point.
C
I think that's absolutely correct. And we knew the international dimension of this was really important. This was something that our companies said to us all along. So this is not a bleeding heart liberal thing. This is American companies saying this goes better for us if there's one clear set of standards and if there's interoperability between regulatory structures and the like. So this was a priority for us in every aspect. You can see it in the President's executive order on AI. You can see it in the national security memorandum I mentioned. We had something called the Hiroshima Process that the G7 group of nations did. We had a UN resolution that was unanimously passed; the Chinese even co-sponsored the resolution. Obviously each of these documents was different, but our view was that we had to show up as a country in every international setting to make the case for what our vision of AI looked like, and then to hear from others and work it out. That was international diplomacy. And there's a great team at the State Department that deserves a lot of credit for the work they did. All of that is harder now because the relationships have frayed for other reasons: tariffs, politics, and the like. I think ultimately that will hurt us and will hurt our businesses. The proof is going to be in the pudding on that. So yeah, I think the harm is real. One thing I also want to say is this is not just going to be a question of democracies. I believe very firmly that democracies need to have preeminence in AI. I recall a quote from Kennedy. He gave the speech at Rice University in 1962. Everyone remembers the speech because that's when he says we're going to go to the moon and come back. But later in the speech he talks about space, and he says, we don't know if space is going to be good or ill for humanity, but only if we are first can we help decide.
President Biden had a copy of the speech in his private study outside the Oval Office. And I always thought it was just a great metaphor for this. So I'm all for democratic preeminence in AI, but I also think we need to talk to autocracies. And we had quiet conversations with the Chinese in Geneva. I mentioned the UN resolution. And I do think there's an aspect of this that is being able to engage with all nations about technology that affects all of us. And I'm worried we're also losing the ability to do that, just as we're losing the ability to talk to our friends about it.
B
We've talked about where the West is and our capabilities, and a few minutes ago about chips specifically versus Chinese capabilities and where they anticipate their chips coming along. One of the things we've seen, particularly over the past year or so, is a surge of scientific papers released by Chinese researchers. If you go to Hugging Face, there are a lot of Chinese open models there at this point. If you're just doing a numbers count, they are definitely catching up and, by that way of measuring, even passing. As we're at some kind of inflection point, and I'm not sure I know exactly what the inflection is, it's a fairly complicated thing to discuss in its intricacies, what might this mean for the relationship between ourselves and China? In terms of maybe steering away from an antagonistic relationship in the future, finding our way back to that kind of Nixon-opening-up-China moment, whether it could be this administration or a future administration that does that. And how does this surge of open models matter in the sense of: should people be using them? Should they stay away from them? How should they think about them? I get a surprising number of questions about that topic: do I want to use a Chinese model? Am I putting myself at risk? So there are several questions, you might say, packed into one. There's how individuals should assess those models, and there's how this affects the relationship between these national bodies and how they develop that relationship in the future. I definitely am hoping that we don't get into a worse situation in the Taiwan Strait. It would be nice to find an opportunity to back away from some of the risks developing there on an ongoing basis.
C
Sure. Well, we can start there. Obviously, no one wants a war over Taiwan. I think it's a very important piece of the global puzzle, but I'm with you on that. Now that I've left the government, I advise American AI and cybersecurity companies, so obviously I have a preference for American systems. And I think some part of me still has the old national security policymaker in me, too, that of course has a preference for American systems. But I think the Chinese developers are very talented. If you look at the team at DeepSeek and look at what they have done, they are very talented algorithmic engineers. We worked very hard in the AI executive order on high-skill immigration, on bringing AI scientists to the United States. I'd love to have all of them move to the United States and start their companies here. So I have no ill will towards them or the Chinese people generally. That said, I do think sometimes DeepSeek, or companies like that, can be presented as rebutting the thesis I outlined earlier about the centrality of computing power, or the dominance of the United States and democracies in computing power. And I don't think it does that. In fact, if you look at the DeepSeek systems, they're all trained on either smuggled or stockpiled American chips, and they're constrained in their performance. In many respects they lag US companies because of their inability to get US chips. And if you look at the DeepSeek v3.2 paper that came out in December of '25, for example, they acknowledge this. In the paper they say, essentially, we are constrained by a lack of computing power. And the DeepSeek CEO, when I was in office in 2024, said: my issue is not talent, it's not money, it's computing power. And he's right. He's absolutely right. So I think several things are true here. The DeepSeek people can be very talented, which they are.
And it can also be the case that computing power remains incredibly important, in fact probably the most important US advantage. So none of this changes my mind on the policy side and what we did or didn't do. Again, I think we should go further than we did, and certainly further than President Trump has gone. But it is a reminder, I think, of the Chinese ability in this space, and it makes me wonder what they would do if they could get even more of our chips. And the answer is: they would be really good.
B
I'm curious, as they continue to work toward developing this, and you've talked about your expertise in cyber, starting your career on the cyber side and moving into AI. Can you talk a little bit about how you expect AI to create asymmetries across the various aspects of cyber? What does the junction of AI and cyber look like, and how do you expect it to evolve? We hear about cyber attacks pretty much every day in the news, everyone does, and I don't think anyone expects that to stop. So, going back to our agreement that it is certainly part of warfighting, it's also part of the daily life that every listener here has to consider. Could you give us a little bit of a level set on how cyber and AI interact in that way?
C
I think cyber operations is one of the most immediate ways in which AI will impact national security. And again, it's part of how I got into AI, because of this connection with cyber. I agree cyber is part of warfighting, for sure. But as you said in your question, it is also part of day-to-day life, not just for ordinary people but for nations. The famed US strategist George Kennan has this quote; I think he talks about the perpetual rhythm of struggle, in and out of war, between nations. Nowhere is that more real than in cyber operations. The United States conducts offensive cyber operations and defensive cyber operations, not just in wartime. It's a key part of intelligence collection. Every nation does this. So an advantage in cyber operations, on offense and defense, translates in a very immediate way to an advantage in national security, which is why it's so important. Now, I think AI will have a significant effect on both offense and defense in cyber operations. There are a number of different angles to this. Let's take the most obvious one, which is vulnerability discovery. Vulnerabilities are weaknesses in computer code. If you can find them on the defensive side of the ball, you can patch them and reduce them. If you find them on the offensive side of the ball, you can exploit them and use them as a tool for intelligence advantage and the like. The question for decades has been: can AI find software vulnerabilities? DARPA ran a thing called the Cyber Grand Challenge in 2016 that tried to begin answering this question. We did another version of it called the AI Cyber Challenge in 2024 and 2025 that went much deeper, and now I think the world is providing the answer. Anthropic, which is a company I advise but do not speak for, published with their recent model release that it found something like 500 high-severity vulnerabilities in open source software. Some of these have been in the code, I think, for years or decades.
So this is a tangible proof, not in a theoretical kind of way, but a very practical, real world way that AI is changing cyber operations. And I do think there's a lot of opportunities for nations to use it to their advantage. And I hope the United States is continuing the effort we put in place to try to get there first.
B
I appreciate that, great way of framing it. As we wind up, as a last question we often ask what we call the future question, to see where your thinking is at. And I want to craft it a little differently this time: let's take the current politics and its challenges out of it a little bit, and just talk about democracies in general within AI. If we're looking at what the next few years might look like, let's say that some of the angels on our shoulders prevail, that some of the scary things we worry about don't occur, and rational minds take hold for the future. How do you measure democracies? "Are we winning?" is a simple way of putting it. We've talked quite a lot about democracy and AI in this conversation. How do you look forward and say, yes, democracy is holding true, we have a competitive advantage in this particular way or that particular way, and we can measure that AI development is better for it? I know that's a little bit of an oblique way of putting it, but I'm trying not to put you into a Republican-versus-Democrat future kind of thing, and more just democracy going forward. How does that look to you when you're thinking about healthy democracies moving the ball forward?
C
I appreciate it. Again, I don't think it's a Republican or Democratic thing. I think this question of what success for democracy in the age of AI looks like comes down to three things. The first is: are we inventing the technology? Are we the ones, for the species, pushing this technology forward, bending it as best we can towards safety and justice and the like, but making this technology ourselves? That's part of the reason why things like chip controls are so important. The second is: are we adopting this technology? Are we adopting it into our national security apparatus, into things like cyber operations? Are we adopting it in our economies, in our businesses, in a way that propels our economy and our prosperity and, of course, advances our security as well? So that's the second question. Those two questions I actually feel pretty good about. I feel best about the invention one. I think we have all the winning cards on invention. The only way America loses is if it folds and does things like sell chips to China and the like. But I think we have the winning cards on invention. Adoption is doable. It won't be easy, but it is doable. Savvy policymaking can do that. And then the third set of questions is the hardest, which is: are we using this technology in accordance with our values? We talked a little bit about lethal autonomous weapons. That's one dimension of it. Another dimension is: are we using this technology domestically in a way that guards against some of the risks it could pose? What are we going to do about this technology and its impact on jobs? I'm a national security person, not a labor economist, but that's really important, I think, in a democracy. What about the ways in which it can centralize power in the hands of a couple of companies, or in the hands of a government, a surveillance state?
What about the degree to which it might undermine the social contract and citizens' ability to participate in democracy? You mentioned disinformation. You mentioned kids' online safety. There are many, many different aspects of the question of how we make sure AI is advancing rather than undermining democratic values. I don't have great answers to all of them. I think I've got answers to some. But those are the three things I would use. If we're sitting here in five or ten years and we're evaluating whoever's been in office, it's: is America, is democracy, inventing AI systems? Are we applying them better? And are we doing so consistent with our values?
B
All right, that was fantastic. Thank you very much. Really appreciate having you on the show. A lot of great insights there. Thanks for sharing them with our audience, and I look forward to having you back at some point in the future to talk about the extension of what you've been working on. You're always invited back. Looking forward to talking to you again.
C
Thank you so much.
A
All right, that's our show for this week. If you haven't checked out our website, head to PracticalAI.fm and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner, Prediction Guard, for providing operational support for the show. Check them out at predictionguard.com. Also, thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week.
Practical AI Episode: AI Policy and the Battle for Computing Power Date: March 9, 2026 Guest: Ben Buchanan (Johns Hopkins SAIS, former White House Special Advisor on AI)
This episode dives deep into the interplay of artificial intelligence (AI) policy, global competition, and the vital importance of computing power. Host Chris Benson (Lockheed Martin) welcomes Ben Buchanan, a renowned AI policy expert, professor, and former White House Advisor, to discuss how the public and private sectors shape AI’s future, the geopolitical stakes around computing hardware (specifically semiconductors), and the emerging guardrails around AI’s responsible use. The conversation blends historical context, technical detail, and actionable policy insights—grounded in both US national security and global impacts.
Buchanan shares how he “accidentally” entered the AI field:
“The cool thing, at least in my view, about a lot of jobs I had is that they didn’t exist before I had them.”
— Ben Buchanan (01:57)
He traces his journey from studying cyber operations to AI policy, emphasizing the shift from government-led to private-sector-led innovation and setting up the episode's central theme.
Buchanan highlights AI as the first major tech revolution in a century driven primarily by the private sector, not the government or military.
Comparison with nuclear age, space age—AI’s private-sector origins are a “vexing challenge for US government,” making policy guidance and oversight more complex.
Current policymakers often lack direct technical experience, requiring effective translation from experts:
“My job [in the White House] was to explain things and to say, how can we put this in terms that make sense to policymakers?... Also, how can we view this not just as a scientific or technological question, but as a geopolitical question?”
— Ben Buchanan (05:49)
Buchanan stresses that Taiwan’s semiconductor industry is essential—not just for AI, but for the global economy and security:
“Something like 97% of the advanced computer chips in the world are made in Taiwan...It is very fortunate for democracies that, maybe as a historical accident...they own the computing supply chain.”
— Ben Buchanan (10:25)
Loss of Taiwanese chip output would cost trillions, triggering massive geopolitical consequences.
Recent US policies (e.g., CHIPS and Science Act) aim to bring chip manufacturing stateside, though Taiwan remains the indisputable leader.
Contrast in engagement: policymakers want actionable briefings (“what’s happening now?”), while students can dive deep into theoretical explorations.
Both audiences fundamentally ask: Where is this technology going? What does it mean for humanity and democracy? What should we do now?
“If you’re asking, do my graduate students know more than Congress does? The answer is yes.”
— Ben Buchanan (16:27)
Application-specific judgment: Autonomy is acceptable in some domains (e.g., missile defense, cyber operations), but regulation/policy must vary accordingly.
US-led effort to build international consensus: “Political Declaration on the Use of Autonomy in Military Systems” with 58 countries agreeing on principles.
“Wherever we decide as a nation to draw the lines, it’s vitally important that we go...set that aside [internationally], the norms and standards with the rest of the world.”
— Ben Buchanan (27:18)
Buchanan uses the historical analogy of railroads—safety innovations ultimately allowed trains to go both faster and safer.
Argues that opportunity and safety are not conflicting:
“My view is that we get AI opportunity through AI safety...through developing technology that is safe, secure, and trustworthy and people can trust.”
— Ben Buchanan (30:24)
Warns against a “race to the bottom” between nations on safety, advocating a strong democratic lead to better coordinate safety frameworks.
Buchanan gives a three-pronged framework for “winning” as a democracy in AI:
“We have all the winning cards on invention...The only way America loses is if it folds and does things like sell chips to China...But...how do we make sure AI is advancing rather than undermining democratic values? ... Those are the three things I would use.”
— Ben Buchanan (45:35)
On Analogies in Explaining AI:
“To resist metaphor is to endure the thing itself. And I always just say we have to endure AI itself.”
— Ben Buchanan (08:23)
On Global Chip Power:
“Making a computer chip, in my view, is the hardest thing we do as a species.”
— Ben Buchanan (10:25)
On Bipartisanship:
“I don’t see [AI policy] as a partisan thing...the delta between Trump 2 and Trump 1 is much bigger than the delta between Trump 1 and Joe Biden's administration…”
— Ben Buchanan (20:09)
On Speed vs. Safety:
“We get AI opportunity through AI safety...not incredibly cumbersome regulations...but through developing technology that is safe, secure, and trustworthy.”
— Ben Buchanan (30:24)
On Democratic Success:
“Are we inventing [AI]? Are we adopting [AI]? Are we using [AI] in accordance with our values?”
— Ben Buchanan (45:35)
This episode gives listeners an unvarnished look into how AI policy, power, and national security intersect at the highest levels—and why global leadership in compute and democratic values is critical for the next decade. Whether you’re a technologist, policymaker, or simply an engaged citizen, Buchanan and Benson provide a nuanced, actionable roadmap for understanding AI’s real-world impact—far beyond the buzzwords.
For more insights and future episodes, visit PracticalAI.fm or connect on LinkedIn, X, or Bluesky.