
In this episode, we are joined by Wadhwani AI Center fellow Kateryna Bondar to discuss her recent reports on Russia's military AI, "How Russia Is Building a Sovereign Drone Ecosystem for AI-Driven Autonomy" and "How Russia Is Reshaping Command and Control for AI-Enabled Warfare."
B
Welcome back to the AI Policy Podcast. I'm Gregory Allen, and today I am genuinely very excited for today's episode because we have on my colleague Kateryna Bondar here at the CSIS Wadhwani AI Center, and she has been putting out over the past few years just hit after hit report on what's going on in terms of military AI usage in the war in Ukraine. Last year, she published a bunch of really big and influential reports on the Ukrainian side of the equation. And we are now here to talk because she has just published two pieces on Russia's military integration of AI. I have been bugging Kateryna to come on the podcast for a long time, and she has finally given in. So, Kateryna, thank you so much for joining me.
A
Hi, Greg. Thanks for having me.
B
Okay, so I first want to kind of introduce you to the audience. So who are you? How did you become interested in the military AI dimensions of the war in Ukraine? And of course, you were originally born in Ukraine, so maybe we can sort of start with that part of your life.
A
Yeah, of course. Well, I can start from the very beginning, which is my birthplace, which is actually in Donbas, the territory which is now occupied. So I moved to Kyiv. And my background is in international relations. And after the Revolution of Dignity, I joined the Ukrainian government. I was an advisor to the president. And for the past seven to eight years, I've been working in the Ukrainian Ministry of Defense. So basically, the project that I was responsible for was the implementation of NATO standards into the Ukrainian military, which gave me a pretty good understanding of how this works in NATO countries. And I was coordinating the job across all allies and was talking to all the ministries of defense, including NATO structures. So that was pretty fascinating. And, you know, we had already had a war since 2014, and I was tracking tech advances, technologies that were being used there. And in 2019, I was lucky to get a fellowship at Stanford. So I come to Stanford and I'm like, I'm from Ukraine, and people are like, where is it? I'm like, oh, okay, this is Stanford.
B
Not a good day for you on this podcast. Okay?
A
It's not about Stanford. It's just about awareness of what this whole war is about. So I'm like, okay, I'm here to do research on what kind of new technology can be helpful to Ukraine against Russia to win this war. And this is when I got interested in military AI and how AI, this emerging tech, can be used for military purposes. And as you can imagine, in 2019 in Silicon Valley, when you talked about defense and anything military, there was a huge pushback.
B
So, like, yeah, this is only one year after Google pulled out of Project Maven in 2018. So, like military and AI, very, very controversial in that time frame.
A
Exactly, yeah. So I'm like, well, okay, let's wait and see what's going to happen. And fast forward to 2022: we have the full-scale invasion, and I made a decision to move to the United States. And I was lucky enough to get a job at CSIS, but I wasn't lucky enough to get straight away to your program, Greg. But I still felt that I needed to write more about it, about this kind of technology, about the Ukrainian war, because it was still very far from being understood.
B
You had written a paper that was published by the Carnegie Endowment, which is sort of how I became interested in your work, which I think, correct me if I'm wrong, was on institutional reform in the Ukrainian defense industrial base.
A
Right, right. So it was more about the defense industrial base as it was before. I mean, before the full-scale invasion, before the emergence of all the new companies, startups, and everything. So I decided to raise awareness about what Ukraine's defense industrial base looked like. And then, guess what? For our audience: I'm sitting at CSIS and I'm looking at all the programs and I'm like, okay, what do I want to write about, and who would be the ideal partner to do that? And I looked into your program, Greg, and I'm like, okay, Greg does interesting work. His center does cool research. So I reached out to Greg, just inviting him for a coffee, and shared what I wanted to write about, offering to write a report. And he was like, why don't you just switch to my program? And I'm like, that's a very Ukrainian approach, because we love to do things really fast and effectively. And I'm like, sure. And it happened in, what, two weeks? Yes. Yeah.
B
And I honestly can say, of all the human resources investments I made at CSIS, stealing Kateryna from another program was the highest return on investment I made, because it was something I was very interested in. But I don't speak Russian, I don't speak Ukrainian. And so while I had a lot of expertise in military AI, I didn't have a great understanding of the Ukrainian and Russian context. And in you, I found someone who, obviously, as somebody whose birthplace is currently occupied by Russian forces, was super passionate about the conflict. You had recognized the opportunity for innovative technology to be a part of the solution, and you wanted to write and do think tank work. And so I was just like, what is wrong with CSIS that this woman is working on, like, economic development? What a horrific misallocation of resources. And so I was able to right that wrong by bringing you into the Wadhwani Center. And immediately, you just started publishing hit paper after hit paper. For folks who are out there: if you just find Kateryna's page on CSIS, you will find, and I think I can say this with some authority, having been a US military AI official not that long ago, Kateryna's work is the best analysis of AI and autonomy being used in the Ukrainian war by both sides available at the unclassified level. Probably there's some shop, you know, in the intelligence community that's doing work that's better than Kateryna's, but they have an unfair advantage. And in terms of what you can do with what you've got, this stuff is the real deal. It was a joy to publish, it was a joy to edit, it was a joy to see you bring all these ideas to fruition. And so now, finally, only years later, we're bringing it to the audience of the AI Policy Podcast, which I'm delighted about. So let's dive right into it.
So we're going to talk a little bit about Ukraine, but really the focus of our conversation today is on Russia, because you have just published over the past weeks and months these two new reports, and they are titled, just so I have them here, "How Russia Is Building a Sovereign Drone Ecosystem for AI-Driven Autonomy" and "How Russia Is Reshaping Command and Control for AI-Enabled Warfare." So before we go in deep on each of these things, can you just talk to us at a high level about what role you think technological innovation has played in this conflict, and what role this conflict has played in driving technological innovation on both sides?
A
Yeah, so I think these two aspects are really interconnected, and it's hard to identify which comes first. Is it technology defining the conflict or the conflict defining technology? I think these two things actually go in parallel. And why did I decide to do research on Russia? Just because I think Russian capabilities are really underestimated. And the Russian approach is really interesting from a pragmatic standpoint: how they tackle the issues with the technological limitations and access to technology that they have, how they compete with Ukraine and Western technology. So coming back to the technological aspect of this conflict, I think it's very interesting because this is where you can see the dynamic of this race, and this is where you can see a competition. So if we boil it down to unmanned systems and AI, because these are also two very interconnected aspects of this conflict, I think that Russia was really lagging behind during the development of the conflict since 2014, just because Russia was heavily reliant on its conventional systems, and they were pretty confident that they had a full arsenal. And this is an extension of their concept, and not only their concept, that Ukraine was going to fall in three days. This is what everyone thought. So I would say it's more a competition of a new type of warfare, technology-driven and software-driven, against a conventional old-school system. So from this perspective, I think it's very interesting to look into this conflict.
B
You've already anticipated my question a little bit here, but where did Russia start in 2022? Like where were they in January 2022 when it comes to military AI, unmanned systems, autonomy, where was Ukraine in January 2022? And then like, where are they now? How would you sort of describe the state of AI and autonomy in particular? And I'm interested here in all the potential use cases of AI and machine learning. So obviously a lot of your work has focused on drones and command and control systems, but feel free to talk about any applications that you think are particularly noteworthy.
A
Yeah, I mean, let's start very broadly and generally. I think Russia was, as I mentioned, really lagging behind, because they were just underestimating this technology and underestimating the complexity of this conflict. They just treated it probably as a local conflict, you know, their sphere of influence, post-Soviet space, and they just didn't pay too much attention to what was going on in terms of AI development. But if we look into unmanned systems, actually, experiments from both sides started right in 2014, 2015. So this is when we saw the first deployment of unmanned systems, if we can even call it a deployment. These were just DJI Mavic drones, bought with private funds on both sides, by the way; soldiers just bought them in commercial marketplaces to fly over the front line and see what was going on there. And there was a lot of experimentation with different use cases and applications of unmanned systems. And yeah, this is when it all started. But Ukraine, lacking an arsenal, lacking munitions, lacking software for situational awareness, et cetera, out of this necessity leveraged its volunteers, civilian engineers, just to fill the gap in this capability. So what I mean is their situational awareness system today, which is known as the Delta system, for example: it started to be developed in 2016. Russia was relying on its conventional intelligence and ISR and everything. So, you know, when I read today Russian military journals, like the official journal of their General Staff, I see that they themselves say, we were lagging behind like six, seven years compared to Ukraine; now we're behind by two years at least. So now Russia is catching up. But what is dangerous in this situation is that when Russia identifies a gap, it starts to catch up really fast.
B
Yeah. So I think you put it well, because I remember in September 2017, Vladimir Putin very famously gave a speech in which he said, whoever leads in AI will rule the world, or something to that effect. That quote got a ton of play in the United States and Western press. And so it seemed like, oh, he's making this a priority. But the reality is, by 2022 they had gotten effectively nowhere in terms of adopting military AI. And what you're saying is that in the four years since, they actually do have interesting capabilities now. There is stuff that America can learn from them, and you have some recommendations in this paper, and I think that's all great. Now I want to drill down on something you talked about, because I remember you wrote a great paper on this, which is the Ukrainian Delta system. I think it's fair to say, if I remember, your article was titled something like, does Ukraine already have CJADC2? So combined joint all-domain command and control, which is this white whale for the United States Department of Defense that we've been trying to hunt for over a decade now, this sort of fantasy of the future of military operations. And you wrote something about the Ukrainian Delta system and what they had achieved. And now in this paper you have the write-up of the Russian counterpart to that system. So why don't you talk first about Delta and sort of how it came to be, what it achieved, what it provides for Ukraine, and then we'll move to the Russian one.
A
Yeah, sure. So first of all, it's not only the Pentagon who is trying to achieve this huge abstract concept. Russia was trying to do the same.
B
You mean like in a doctrinal, military-journal, aspirational kind of sense? Yeah.
A
Yes, yes. Joint all-domain command and control, like a huge system which encompasses it all. But if we go back to Ukraine, Ukraine didn't have this ambition from the very beginning. Ukraine started to solve its emerging battlefield problems and fill in the gaps that they identified immediately, with the help of civil engineers.
B
So these are, when you say civil engineers, you don't mean like the people who build dams and bridges, you mean civilian engineers, right?
A
Yes, yes, civilian engineers. Like people who are not part of
B
the military, or who are volunteers.
A
Volunteers, yes, who just joined the military. And these are people working in IT, mostly in the outsourcing industry for the United States market. So they gathered together, like, why don't we develop some sort of software which is able to integrate the different data sources that we have and create a common operating picture? So basically ensuring situational awareness for our forces, just to understand in real time, or close to real time, what is going on at the battlefield. To put it very simply, to see on one digital map all the targets, all the assets, et cetera, in close to real time, to make informed decisions. And that was kind of the first small prototype of a CJADC2 system developed in Ukraine. Again, fast forward to 2023. This system proved to be really effective, and it was adopted as the formal, official situational awareness system by the Ministry of Defense, and actually for all defense agencies in Ukraine, which means it is available to 13 or 14 law enforcement, defense, and security agencies in Ukraine. So they are supposed to work in one common environment and to see one common picture. Now if we look at the Russian side: the Russians, as I mentioned, were relying on their conventional intelligence and reports and everything. Of course, they have huge capabilities compared to Ukraine, much bigger, much more capable, and they thought that was going to be enough. So in 2023 or 2024, we actually see the same pattern. You know, building this huge CJADC2 system, which also doesn't exist in reality, it exists only on paper, in some concept papers, actually translated into something very practical, where we see the military saying, you know what, let's put that on hold. And of course we can put the word AI there, that in the future we're going to have an AI-enabled situational awareness and all-domain command and control system.
But for now, let's solve this particular battlefield management problem, which means let's try to integrate all the unmanned systems that we have right now, in the air, on the ground, in the sea, into one environment where we can get data from those systems. Because there are sensors, there are kinetic capabilities, there is everything. And let's try to manage that. So again, Russia also leveraged its so-called people's VPK, which is a civilian military-industrial complex, basically those startups, volunteers, and everyone, who developed exactly the same thing. It's not exactly like Delta in Ukraine, but it's the same concept.
B
Fair to say that Russia was explicitly inspired by Delta? Like, did some Russian official say, gosh dang it, go build me a Delta, or something like that?
A
Well, I cannot confirm who said what, but again, I saw in their official military journals that they always refer to Delta and Palantir. For them, those are the benchmarks. And this is the evil Western software.
B
So basically, we need an equivalent to that.
A
Yeah, no, not just need; we have to be better. So it should be something better.
B
Yeah, I think there's something you just said that I want to jump on. First off, the CJADC2 concept: you talked about the common operating picture, which is that different folks, different types of forces, people in the air, people on the water, people on the land, are looking at the same map that shows where allied forces are and where known enemy forces are. Different types of intelligence sources are fused into that common operating picture, so that, like, an artillery group can actually know what a satellite is seeing or what an aircraft is seeing, and then they can all communicate with each other. And this is actually really, really hard. When I was in the military, sorry, I shouldn't say the military, when I was in the Department of Defense as a civilian, basically you had these stovepiped terminals that could only talk to their one system. So you had, like, Patriot batteries out in the field, and then you had this arcade-machine-sized terminal, right, that talks to the Patriots in the field. But that thing doesn't talk to Aegis, it doesn't talk to AFATDS. And the integration of all this stuff literally might be a bunch of people in a network operations center, and they yell at each other when they see something through their one sensor. So the integration, it's not automated, it's not, you know, API integration, like a modern software stack.
And that kind of is what Delta was figuring out: how to build this with modern software development standards, with APIs, so that the systems can talk to each other in a machine-to-machine way, initially so that you can know what your allied forces know, but eventually so that you can task and make requests of your allied forces, and those requests can be judged, maybe with human judgment, maybe with algorithmic judgment, sort of like, who is in range and is the best person to put artillery fires on this target that I've identified? Now, the thing you said that was the most interesting is that Russia benchmarked against Palantir. And I think what's interesting that's going on right now, that I have to bring into the conversation, is Palantir's work on Maven Smart System. There are so many echoes between what is now Maven Smart System and Delta. And it's very clear that the Department of Defense has made the exact same determination, because a recent memo that was acquired by, I think it was Reuters, from the Deputy Secretary of Defense said that he wants the Undersecretary of Defense for Research and Engineering to study the idea of basically ending CJADC2 in its current form, which is more like, let's get a bunch of committees together and write the communication protocols and write the standards. And it basically says, no, no, no. We don't want to top-down order CJADC2. We want to bottom-up grow CJADC2. And the most obvious starting point for doing that is Maven Smart System, a system developed by Palantir, which really hit its stride in Palantir's support to Ukraine starting in 2022. And I think that gets to something you said in one of your papers, and I remember because you and I talked about putting it in your paper, which was the idea that something like CJADC2 cannot be built, it can only be grown.
And in your paper you give an analogy to Amazon, the bookseller that became the Everything Store. Can you walk us through that cannot-be-built, can-only-be-grown theory?
A
Yeah, yeah, exactly. Because we can track this in the Ukrainian Delta system, which started from a digital map, basically integrating a couple of data sources. But now they call it not a platform, not a program, not a solution, but an ecosystem of products. And for a reason, because it started with one, let's call it a product. Then they just collaborated with another team, for example, which developed a data live-streaming solution. So basically any drone, any drone manufacturer, could integrate into that solution and stream footage directly from any drone flying in the air, in the battlefield. So that has become a second product.
B
So it's like anything that can talk to Delta can talk to these live streaming drones, as long as they use the Delta protocol. And then that just keeps building, keep going.
A
Right, right. So another solution, another product that is integrated with Delta, is so-called Mission Control, because there's a huge problem right now: there are so many drones flying and there's so much electronic warfare involved, and 30 to 50% of drones are lost because of friendly jamming. So there was an urgent problem of managing the airspace. And basically the military realized that we need to know who is flying where, who is flying when, and with what. So they created this additional product called Mission Control, where every operator, every unit, is submitting their mission and everyone knows who is doing what. So again, I won't explain the whole system, but basically the principle is that you keep integrating different solutions, and it builds out this big system of systems, or an ecosystem of solutions, that Ukrainians are using for this. And I think that's the right approach, especially in the case of the United States and Russia, because Ukraine had the advantage of starting and building all this, let's say, from scratch, whereas both of these huge militaries have their legacy systems, which sometimes have to be somehow integrated, and some of them might need to be divested. Right? So again, it's very logical to build these kinds of things bottom-up, rather than just directing that everyone has to align standards.
B
And I think, like, these analogies of how that fails map to the startup ecosystem. One is, as I mentioned, Amazon. Right now Amazon is, quote unquote, the Everything Store, right? You can buy industrial supplies on Amazon, you can buy cosmetics on Amazon, and you can buy books on Amazon. And I think Jeff Bezos kind of had that vision at one point, that he wanted to create something called the Everything Store. But if on day one of Amazon he had launched the company as the Everything Store, it would have failed miserably. They would have taken on too many things, they would have encountered a million problems, and they never would have created anything good for anyone. And so you kind of have to recognize, what's the first problem worth solving, where we will deliver a great deal of value to a customer community even if we only solve this one problem? And then, how does solving that problem of mailing books all around the country, or all around the world, give us a growth path and a progress flywheel that can turn us, over time, into a company that can add more and more products and more and more capabilities until we can grow into the Everything Store? And with CJADC2, what I had seen so many times in the DOD is, they're like, yeah, we want to be the Everything Store. So you're ordered on day one to be the Everything Store. And of course they bite off too much too quickly, and they stink. And so everybody says, oh, the Everything Store is a terrible idea. Well, probably not. It's a great idea; it's just that trying to build it on day one is a terrible idea. What you really need is this sort of first product that is delivering initial value.
And that's why it's not surprising to me at all to see that the DoD is at least considering Maven Smart System as the foundation for a renewed approach to CJADC2, in the exact same way that Ukraine, with Delta, found its way to its sort of equivalent of CJADC2. Now, what about the Russian system? First off, what's it called? What can it do today? And how credible a competitor is it to what we see in Delta, feature-wise, capability-wise, reliability-wise?
A
Yeah, and that's exactly the reason, and the problem that you're describing: we don't have one single system that we can name today, at least from what I'm aware of from open-source research. Right. So the Russians actually took the same approach. Their civilians developed a system, so-called Glaz-Molniya, which translates as "eye" and "lightning." An eye like an eyeball, to see with your eyes. And lightning is, I think, a reference to speed. So basically they also tried to tackle this problem. As you mentioned, you have to identify one single problem that you want to solve, and that problem was drone unit management. Like, how do you get information from an ISR drone somewhere in the battlefield, and how do you transmit that information to a drone unit? How do you get coordinates for a target and transmit them to a striking unit, let's say? And they started to solve this problem. And again, I think their conditions, I mean the conditions for their military, are even worse than for the Ukrainians, because their own government creates obstacles and barriers for its own military. It's like shooting yourself in the leg, basically.
B
Yeah.
A
What I mean is that the Russian government bans software which already solves this problem, just because it's Western software, like Discord, for example. They're cut off from this technology. So they have to develop everything locally and from scratch, and it has to be a local Russian product. So their civilians developed this Glaz-Molniya software, which basically enables this communication between drones at the edge, ISR and kinetic capabilities, and the operators and drone units working further from the front line. And that happened in early 2025. And we saw the integration of these systems through training of new forces, and basically integrating them really fast across the force. And within half a year, the Russian Ministry of Defense comes out with a concept for a ministry-led tactical battlefield management system, and its functionality, at least from what has been described, copies and resembles exactly the system that has already been deployed and developed by the civilians. So we can make an assumption that the Russians learned how to work with their people's VPK, this civilian innovation, how to integrate it and then scale it. So that's another way you can leverage commercial, civilian-developed technology and then scale it across the force, again adding additional functionality, features, et cetera, solving problem by problem. And that's how you build your complex, huge CJADC2 system.
B
So what we were just talking about is really mostly about digital transformation. We haven't even gotten to the AI part of the story. And I know there is an AI part of the story in the CJADC2 platforms, for lack of a better word; I'll call them that, though that's probably not a precisely true statement. But to what extent is AI being used as part of this problem set? Which is to say, command and control, integrated with ISR, intelligence, surveillance, and reconnaissance, data collection, et cetera, et cetera?
A
Sure. So, you know, command and control in general is a very broad concept, and I think we should look into the different parts of it where AI can be integrated, and is being integrated, in Russia. And I'll focus on only two aspects. One of them is computer vision technology. And again, there is a lot of discussion about whether it's still AI or it's just.
B
No, no, no. On this podcast, computer vision still counts as AI.
A
Yeah, yeah, yeah. Okay. So that type of AI, that technology, is evaluated at TRL, technology readiness level, six to nine, because the Russians have
B
AKA, like, ready to go. You can deploy it.
A
Yeah, ready to go, ready to work in an operational environment, like on the battlefield. Because the Russians, you know, think as engineers, and I mean engineers who build things, I mean their government, their military. If they want to achieve a goal, they start with a foundation. And the foundation for any AI is data. So they started with collecting data: data on drone operator performance, data on strikes, everything that is related to, you
B
know, what foreign military hardware and personnel look like when viewed through a drone camera.
A
Right, exactly that. Also, I'm pretty sure a big part of it is the labeled data sets that the Russians are building these days. So based on that, they train their computer vision models. And those models are developed only by government-owned research institutions, which means it's a very foundational thing, and they have actually achieved a pretty good level with this technology. So it's ready to be deployed operationally, and it is working.
B
So wait, so they've got a mechanism for taking a drone video feed and filtering it through some kind of AI model. And what does the AI model output? Does it output labels, like on the CJADC2 platform map? Like, the drone saw this, and I think it's a tank, so here on the map is a little picture of a tank, or something?
A
So basically it helps with targeting; it helps to identify objects and classify them. I haven't seen how this works, because obviously it's classified and we don't have too much information.
B
Yeah, I should have asked you this at the beginning, but can you just talk a little bit about like your primary sources of information for this entire research project?
A
Yeah. So basically it's open-source research: media, official documents of the Russian Ministry of Defense and the Russian government. In addition to this, I analyzed Russian channels where civilian engineers discuss what's going on, because it gives you.
B
These are Telegram channels where Russian military forces and civilian engineers are talking to each other and complaining.
A
Not military forces that much, but basically civilian engineers who work with the military forces. And you can hear a lot of complaints about what's going on in the military, what's going on on the supply chain side, because they're, you know, very dependent on China, and you can really track and see that dependency in those conversations. And of course, there are conversations with Ukrainian intelligence and the Ukrainian military as well, just to cross-reference and cross-check information from what I've seen, and
B
We should say you're fluent, both speaking and writing, in Ukrainian and Russian, so you're able to read all of this stuff and also able to conduct primary interviews, as cited in this report, with Ukrainian technologists and Ukrainian military and intelligence officials. And it really shines through in this report. Okay, sorry I interrupted you. Back to what they're using AI for in these platforms.
A
Yeah. So basically this kind of AI helps them to identify objects and classify objects. And from those conversations, I have the impression that the extent to which it can help is basically distinguishing a tank from an air defense system, or a pickup truck with electronic warfare equipment on it from just a usual civilian pickup truck. But we're not at the point where it can distinguish between a Russian and a Ukrainian soldier, for example. So it's not that accurate, it's not that detailed yet. That's why, again, when we get to military AI applications, we cannot fully rely on computer vision models and targeting assistance in making decisions on engaging a target. It's still a human who has to review what's going on, what we actually see, and whether that is actually a target. To put it very simply. But another side of command and control is actually the communication part. And in any military, and anyone who has any relation to the military knows this, there is a lot of paperwork involved: reports, requests, everything goes through very formal communication channels. And this is where the Russians are really struggling in implementing AI into these workflows, because the models that they use mostly come from the West or the East. These are open-weight models, such as Llama or Mistral, and Qwen and DeepSeek from the East.
B
Yeah. So here you're talking about large language models as opposed to computer vision models.
A
Right, right. And customizing those models for Russian formal military language proved to be really challenging. So Russians themselves evaluate this capability at the level of 1 to 3. Like TRL 1, 2, 3.
B
Technology readiness level 1 to 3.
A
Yes, yes.
B
Oh, my gosh. One to three is like... a fantasy we have of one day being able to do this. So if that's what they think Russian-language military LLMs are at, that's pretty lousy for them.
A
Yes. And, you know, they evaluate their own commanders' time devoted to paperwork at the level of, like, 50 to 60%. So 50 to 60% you work with papers, and only 40% you actually fight the war. So AI would really...
B
Russia, we should note, is, like, pretty unique among militaries in terms of deploying generals to the front line. They're very comfortable deploying generals to the front lines. The fact that they get to the front lines and still 60% of their time is paperwork is kind of...
A
Yes, yes. So that would really help them. But they struggle with training models to work with the Russian language and with military specifics, maybe because of classified data sets and classified information and access to that — which is actually good for Ukraine. So.
B
Yeah. And good for America. I think our own military's experiments with LLMs are going way better, and we would not assess them at TRL 1. So I'm delighted to hear that Russia is lagging behind.
A
Yes, yes. Yeah. So, you know, if. If we try to evaluate the AI implementation in command and control, it really varies, and the spectrum is huge.
B
Yeah. So I realize I'm going to ask you to massively oversimplify here, because there is no such thing as, like, Russia thinks or Russia wants. So I realize this is a bad question, and feel free to answer in a better way than I'm going to ask it. But what can we say about what Russia thinks about the future evolution of military AI? Are they very optimistic? Are they very aggressively working to implement it? What do we think, and how do you know that? What sources are persuasive to you?
A
Yeah, so let me start with, like, the Russian vision of AI. And the Russian vision, to me, equals the Russian leadership, basically Vladimir Putin, who's the president. And then it, you know, goes all the way down. It's a top-down approach, for sure.
B
And your paper, I should say, has a really nice review of all the relevant documents, both at the very highest level and at the most nitty-gritty level. And you're really transparent about which documents you're analyzing and how.
A
Yeah. So basically, you know, not going into too much detail about the document analysis, I would just say that a while ago, even — you know, the quote that you mentioned, the quote from Vladimir Putin in 2017. Right. It was 2017, maybe '18.
B
I forget one of those two for sure.
A
Yeah. But at that time, Russia was, in my sense, just following international trends. Everyone is looking into AI, so we should also do something about AI. But it didn't translate into practical measures that they were taking to actually implement these initiatives. But what we see today, and especially after the full-scale invasion, is that AI integration — not only in the military, but across the country, across all industries and domains — has become a really practical and tangible goal for them. And why do I make such a conclusion? Because we see AI as a priority starting from the top-level documents, like presidential decrees, which set strategic goals for the next decade, for example. And then it translates to the tactical and operational level, into national projects, where you have a set of goals that have to be achieved by a certain time. And their timeline is 2030: for example, they have to have a million people working in unmanned systems; in AI, they have to have 90% workforce readiness for AI deployment, or something like this. So, very tangible goals. And what is also very interesting: people are personally responsible. I mean, government officials are personally responsible for implementation of these goals. So it's not some abstract strategy; it carries very personal responsibility, which...
B
So Putin knows who to blame if it doesn't all work out.
A
Yeah. Who will pay the price, in a Russian way, if it doesn't work out. And another factor is basically funds allocation, because that's how you identify what really will be implemented. So follow the money and you will see which initiatives are real. And, you know, I can just give you a very recent quote from Vladimir Putin.
B
We love quotes on this.
A
Okay, that's great. So bear with me. It was April 10, just recently, a
B
week ago at the time of this recording.
A
Yeah. So it was a meeting of the Development of Artificial Intelligence Technologies Commission, led by the President. And this is what he said: "Artificial intelligence, alongside digital platforms and autonomous systems, is shaping a fundamentally new landscape for the economy, social relations, public services, education, healthcare, logistics, industry, defense and security — indeed, for the entire life of the country. Our ability to keep pace with global change will determine our sovereignty and, in the near future, without exaggeration, the very existence of the Russian state." So basically we see that they prioritize AI implementation and deployment across all industries, and they identify it as a key technology for their survival in the future. And, you know, what is interesting in this context with Russia is that it's not just big political statements or PR or something like this. Russians are taking a very pragmatic approach. They understand that they are not able to participate in the frontier AI race. They're like, okay, let China and the US spend enormous funds to develop this technology. But what is available to us — basically large language models, open-source and open-weight models, which we can customize, on top of which we can build an application layer — we will leverage this technology for our purposes. And we will give it all the resources, we will create regulation, we will train the workforce to work with this kind of technology, but at the expense of the world leaders in frontier AI. And they do it pretty successfully. We can see this in the development of autonomy and unmanned systems, which they also put in as a priority. For now, the strategy works really well.
B
So they're going to ride the wake of the American and Chinese ships, and they think that's a perfectly fine place to be. That's so interesting. Okay. So you already, in the quote that you read from Putin, where he talks about AI and autonomous systems — I think that's a natural transition to your drone ecosystem report. So your second report opens by analyzing Russian policy documents on AI and unmanned systems. What did those reveal about what Russia thinks about unmanned systems broadly, and then the intersection of unmanned systems and AI?
A
Yeah, so as I mentioned, Russians are planning from the top all the way down. And we saw that interconnection of AI and unmanned systems — these are two priorities that lay across all industries. And basically the approach that I mentioned for AI also applies to unmanned systems, because I read a very interesting analogy for unmanned systems in the Russian case. I'll explain. So what Russia had — and Ukraine inherited part of this industry — was a very well developed air transportation industry. You know, the example of the Mriya, the biggest transport airplane, which was produced in Ukraine. And why was that? Because Russia has a huge territory, and it's impossible to build infrastructure which can cover all that territory — build roads, build railroads, etc. So it gave birth to a big transport aviation industry. And now Russia actually applies the same approach to unmanned systems. So they're saying, you know what, we have to solve the transportation problem across the country. We have huge infrastructure projects for the oil industry, for example, for electricity, for other things. Let's allow the unmanned industry to service all that infrastructure, to enable aviation transportation — but now we will basically remove people from this process and allow the unmanned industry to deal with all this. And this interesting historical example translates into the current time. And I think it's a very smart approach as well, because it gives a huge hint to industry development.
B
Okay, wait. So when I think of, especially, the early drone war in...
A
We didn't get to war yet.
B
Well, I was just thinking, like, when I think of the early stages of the full-scale invasion, I'm thinking everybody is just using DJI, commercial off-the-shelf drones. But Russia was already planning to have, like, a domestically focused industry at that stage.
A
Well, at that stage, probably not. But they already had manufacturers who had a pretty good capability. Like Zala, for example, a system which...
B
and some of this had seen combat in Syria, which Russia was supporting.
A
Yeah, yeah. So when we're thinking about DJI, it's mostly like Group 1, Group 2 capability.
B
But these are tiny drones that...
A
Tiny, small little drones. Yeah, those quadcopters that everyone is thinking about when you think about an FPV drone. Right. But there is a huge class of Group 3 drones — for example, ISR and loitering munitions.
B
So like the Zala Lancet, which I think is a Group 3. How big is that thing?
A
Unfortunately, we don't have enough screen to show this. And I can tell you in meters.
B
Yeah, give it to me in meters. I'll just multiply by 3.28.
A
Okay, so — I might be wrong — but it's like 2 meters long and 1 meter wide.
B
Something like this, pretty big, right? That's like three and a half feet by seven feet. So this is the size of some missiles. It has wings, too.
A
Right, right. But what is interesting in, for example, Zala's case is that this company came not from the military industry; it's a commercial company which was producing drones — Group 3, mostly what we call ISR these days — but these were drones for monitoring that huge infrastructure. So oil pipelines, for example: you need drones which are able to fly for a very long time along those pipelines and to monitor their condition, register some minor things that have changed and need repair or something. So that's an example of how Russia approached the unmanned systems industry.
B
Looking for dual use opportunities, basically.
A
Right. And this dual-use aspect proved to be really effective in terms of business development. And that's a very interesting and good case for Western companies to also look into, because now I see a lot of startups focusing on producing military drones, but the market is so much bigger, and it actually makes sense. And again, the Russian case proves this very well: collecting data and training and testing your systems in the commercial sector actually helps on the battlefield a lot. And vice versa.
B
Yeah, I mean, I think there's two elements of that. Number one is, like, the feedback of the user community, right? Like, if the oil pipeline people hate your drone under, you know, comparatively tame operational conditions, you can bet the war fighters are going to hate your drone. Right? So there is this, like, minimum competence and performance that you can iron out in the commercial sector. But a second thing that you're sparking in me is the economies-of-scale part of the story.
A
Right.
B
Like, so many drone startups in the United States are like, all we're going to do is defense. Right?
A
Right.
B
And on the one hand, like, that's who is interested in buying American drones in the United States: the defense customer base. But on the other hand, if your only customer is the Department of Defense, your economies of scale, your unit economics, are going to be lousy, right? Because you're competing against a Chinese industry that serves the entire world as its customer base on the consumer side, and then the entire industrial world on the enterprise side, and then also its own government on the government and military side. And that's the customer foundation upon which they build their economies of scale. And then in America, all you've got is the military, which is a pretty tiny customer base to build your drone industry on top of. So it's interesting that Russia has this theory of protectionism basically being needed to build the relevant economies of scale, and also to build the expertise and past performance in these capabilities.
A
Yeah, exactly. And that economy of scale also translates to a data economy of scale, let's say, because these companies which succeed and are really good at AI capabilities are also mostly dual use. Because when they fly along those pipelines or electric cables or something, when they do this constant monitoring, they train their computer vision models and they learn how to work with this technology. So what I've seen specifically in Zala's case is that then they went to this state-owned government facility which has a data set of military objects, and they basically took the data set — and they know how to work with this because they have their own team and talent and experience in developing computer vision models and training and retraining them constantly. So I think that's a very interesting pattern: how Russia first accelerated its unmanned industry in the civil sector. And in Russia, anything that is civil automatically translates to the military. It's kind of civil-military fusion, the Russian way.
B
Yeah, that's amazing. So your report has a pretty explosive claim that will stand out for most listeners. It is that Russia has likely fielded a fully autonomous unmanned system in combat that is using AI for offensive strikes. Now, let me just start with a few definitional things. Hopefully by now most people on the podcast are familiar with this, but there is no definition of an autonomous weapon in international law. There is a definition in DoD policy, and that is in Department of Defense Directive 3000.09, which really centers the definition of an autonomous weapon system on the ability to, once activated, select and engage targets on its own. So, you know, a heat-seeking missile — well, that technology has been around since the 1950s. And heat-seeking missiles are not inherently an autonomous weapon, because the human has selected the target, the human has aimed the missile, and they have fired it. And then it's just using the heat-seeking functionality to track the target as it tries to evade or maneuver away. So that is not an autonomous system. What makes something an autonomous weapon system is that the human does not have to select the target; the human does not have to authorize the decision to engage the target. And this is also not a new thing. Dozens of militaries around the world have autonomous weapons in their arsenals. The US military has had autonomous weapons in its arsenal for decades. It's just that almost all of these systems are like the Close-In Weapon System, which is what defends ships from missiles coming in or other close, proximal attacks. The Patriot missile defense system has a completely human-operated mode, and it also has a fully autonomous mode. So the specific meaningful threshold that Russia is crossing here, which is to a large extent unprecedented in the history of warfare, is not that it's an autonomous weapon system. We've had those.
It's that it's an offensive autonomous weapon system, and it's getting its autonomous functionality from the incorporation of machine learning AI. So now that we've gotten the definitional work out of the way, can you unpack that explosive claim? What is this Russian system? What are they using it for? And why are you confident in your assessment that it is indeed this milestone of an offensive AI-enabled autonomous weapon?
A
Yeah. So first of all, thank you, Greg, for setting this context, because it's really helpful to understand why it's so special and so different from what we've seen before. So, yes, this drone was first noticed in, like, 2024, and it's called the V2U system. First of all, I haven't found any information about it in Russian Telegram channels or other Russian resources, probably because it's a really classified story. But this information comes from the Ukrainian side. So Ukrainians reported observations, they intercepted drones, they analyzed the wreckage, and this is how we came to a conclusion about what this system is capable of. So first, when it was noticed, when it was shot down and analyzed, there was some connection to the operator, which means at first it was just simply remotely controlled. But it already had an Nvidia Jetson Orin microprocessor on board, which is able to run AI models.
B
So it's got Nvidia chips inside, even though at this stage, when they're picking up the scraps of the blown-up drones, it looks like it still is remotely piloted.
A
It was in 2024.
B
24. Okay.
A
Later versions were without any communication system — basically nothing on board which could connect this unmanned system to an operator. Which brought us to the conclusion that it can fly autonomously, that it can search for targets autonomously, and that it can engage targets autonomously, just because there was no evidence that there is any way an operator can control what the system does.
B
Yeah. So I want to connect two things, just for folks who might not be aware, right? Like, American missiles, if they have, like, a GPS transceiver in them, can automatically fly to predetermined coordinates. Right. So it's not per se the lack of a communications protocol that tells us that this thing's autonomous. It's the combination of the lack of the communications protocol, the presence of beefy chips that are used to process machine learning algorithms, and then thirdly, the observed behavior of this system by Ukrainian forces and their reports of it, for example, seeming to change the target that it has selected in the middle of a mission — which could only be if it has, you know, autonomous target selection and engagement algorithms on board. So those sort of three checkboxes together give you confidence in saying that, yea, verily, this is an AI-enabled offensive autonomous weapon, and maybe the first one used in wartime in human history.
A
Yeah. But that's only one aspect that is really interesting and innovative. Another aspect is that these systems work as a swarm, and it looks like it's a real swarm. So we've seen the systems flying in groups of six to seven drones. Each of them has markings on its wings, and these markings are different and unique. They fly above one another, so they can see the drones below them, and they fly like a flock of birds. So that's probably an experiment with tactics which resemble bird behavior, because the drones fly in a certain group, in a certain position, and if one is taken down, they do a maneuver and then regroup to resemble the same pattern of flight. So they can see each other, they can coordinate, and there was a Wi-Fi connection between them. So basically they can communicate and coordinate their actions between each other within this group, but without any human intervention or external control. And then what was also observed by the Ukrainian military is that they can fly in this position for quite a long time, basically searching for a target. But when they identify a target, they form a circle — like a circling pattern — and then they coordinate their attack by diving on the target. And that also happens in coordination, because they fly in circles and, one after another, strike the target.
B
Wow. So what's different about that than traditional weapons is — like, it's not new in American warfare for us to fire a lot of missiles at the same time. Right. That's old. And it's also not unique for us to, like, have missiles where, like, you go this way, I'll go that way, and so we're much harder to defend against. But the fact that these drones are communicating with each other, but not communicating with a remote operator, and are deciding what to do based on what they encounter in the data — that's really interesting. And I mean, I remember we've been predicting drone swarms as the future of warfare since, gosh, at least 2016, maybe before that. And here we are now, and we're actually seeing AI-enabled drone swarms in the wild. You know, maybe they're not especially sophisticated right now, but I think the direction of the future is pretty clear. And where we might be in two years could be pretty far from where we are today.
A
Yeah. And speaking about the direction of the future — previously, Greg, you asked me about, like, what the Russian vision is, what Russia envisions. This is where I could break it down — not, you know, what Putin is saying, but what the generals are saying who are actually fighting with these systems. And just recently, you know, in a Telegram channel, again, I saw a quote — and I'm sorry for this translation, it might not be very good — but the idea is, in the quote of a Russian general who was involved in this kind of thing, that the side which uses AI-enabled drones will win tomorrow, but if you want to win the day after tomorrow, you have to use swarms. And a swarm of drones is the new nuclear bomb. So that partially shows us the direction in which the Russian military probably is thinking. There's also a fight and resistance of the old system versus the new system, and the new "visioneers," as they call them — visionary generals within the Russian military who want to experiment, who want to deploy new types of technology — versus the ones who stick with their conventional doctrine and systems and everything. But these visioneers, basically generals who are ready to experiment — this is what they see, and this is their vision in terms of technology development. So.
B
Wow.
A
And they're actually moving there, you know, because here in the West, we're limited by ethical principles, by regulation and all this kind of stuff, but Russia doesn't have that obstacle. So.
B
Yeah, I mean, when you're Russia and you're like, I can't wait to bomb another hospital tomorrow, you know, the ethical dilemma looks a little bit different than it does here.
A
Right. And that's why they can keep experimenting, and they can advance in practical applications of AI and new technology much faster than we can here.
B
Yeah. So one of the things that you mentioned was the Nvidia Jetson GPUs that are included in this autonomous weapon system. But your report also has a broader examination of where Russia is sourcing both the hardware components for its unmanned systems and the software powering its AI, which you talked a little bit about. But what did you find on the hardware components side?
A
Yeah. So basically we took the database of Ukrainian Defense Intelligence, who analyze Russian systems that were shot down, and we extracted just those components which are responsible for AI integration into these systems. And we found out that in three major categories — which is memory, compute, and sensing — US-headquartered companies are leading. With different percentages, but over 50% of components come from the West, with the US being the main supplier, if we can say so. So these are companies headquartered in the United States.
B
And almost certainly this is not American companies selling to Russian customers, but selling to somebody who sells to somebody who smuggles it to somebody who gets it to Russia. But that's still a really depressing fact, because I really remember there was a Russian government official in 2022 — I cited a quote of his in one of my papers advocating for a much larger enforcement budget for export controls and much stronger tooling and analytical software to try and trace these smuggling networks. But there was a Russian government official who said, effectively, look, we're really not worried about America trying to cut us off from microchips. You can't really do that in the global economy, and we're going to be fine. And here you are saying they're continuing to get the microchips. Now, I think there's a really depressing extent to which we inadequately tried and inadequately resourced the effort to prevent Russia from getting access to these chips. And it's worth saying, some of these chips are fancy stuff, like an Nvidia Jetson. Other stuff might be, like, a microprocessor that is manufactured in the hundreds of billions and costs less than a dollar. So it's really, really hard to prevent Russia from getting its hands on that kind of stuff. But it is a depressing fact that Russia is still sourcing so much of its microelectronics and semiconductor technology from products that are made and/or designed by American-headquartered firms.
A
And that's part of their strategy. So basically it's weaponization of any commercial technology which is accessible. And again, from the conversations of those civilian engineers, they specifically emphasize: let's build systems consisting of commercial components, which are impossible to control or sanction or anything. Because with fiber optic spools, for example, Russia is very dependent right now — same as Ukraine — on fiber optic drones. Right. And it gives a huge advantage on the battlefield. But the cable is produced in China, and China increased prices. And when you produce the systems in the millions — it's not, you know, like one or two very expensive drones; these are millions and millions of systems — even a small increase in price makes a huge increase in cost. So yeah, basically Russia has mastered, and is still working on, a strategy of weaponization of any commercial technology that is available.
B
Yeah, not surprising. Okay, so now I want to close out our conversation the way you closed out your paper, which is recommendations for the United States. What should we — I mean, you've told us a lot of what we should know. What should we here in the United States do?
A
So I think in this world, where our adversaries are not limited by any ethical constraints and they actually accelerate and succeed in this technology development, we should also embrace this technology, but maybe in a controlled environment. So I hear a lot of conversations about putting oversight over autonomous weapon systems. And I don't think this is something really scary that has to be put under complete control. We have to figure out how to develop these systems, how to experiment with them, how to integrate them into tactics, into doctrine, and start training with them and basically start deploying them, as I said, in a controlled environment, just to better understand how they work, where they're the most effective and efficient, how we should deploy them in case we have to. Because just putting oversight and control won't bring us anywhere, because our adversaries are advancing really, really fast in this. And more broadly, if we look into unmanned systems, my opinion is that unmanned warfare is here to stay and develop, and this is something we won't change. So to tackle this problem, I think the United States has to integrate these systems into its operations faster and experiment with them more, train with them, update its training and exercise programs and curricula, and work with industry more closely. Because this type of warfare iterates and develops really fast. And without this constant communication between the end user and manufacturer, we will lag behind in terms of technology. Because it's not a submarine that you build once in 30 or 40 years and you're good to go. Right. So you have to constantly provide feedback and iterate and update and improve the system. So I think these three major things basically boil down to one big thing: we should integrate unmanned systems, we should integrate autonomy, and we should see how it works. And we should be more free to experiment with that.
Just because the world works in such a way that our adversaries are less restricted in this and they accelerate really fast. So to be competitive, we have to move in the same pace and implement a lot of regulatory changes to enable that.
B
Yeah, I mean, I think one of the recommendations that you had — it appears to be the case that the Department of Defense is already adopting it, which is, you know, the Delta developmental mindset as it relates to CJADC2. It does seem that that's the direction the DoD now wants to head with the Maven Smart System. But this second point of yours, about how we need to approach a future of unmanned systems and autonomous systems — I mean, I find it really compelling. I think the question is not, can autonomous weapons be used unethically? The answer is obviously yes. But you can also use your fists unethically. You can also use your boots unethically. Right. Any military technology can be used unethically. I think the tougher question, the more important one to answer, is: can we use these weapon systems in a way that is consistent with, you know, the law of armed conflict, with the United States Constitution, and with United States values? And I think we can. I mean, we've seen that in the way that we've used autonomous weapons defensively in systems like CIWS or Aegis or Patriot, you know, for many decades. And while it's definitely the case that there are a lot of ways you could potentially get this wrong as you're thinking about incorporating it into the offensive side of the equation, into the more machine-learning-enabled part of the equation as opposed to a more deterministic software approach, I think what's interesting to me is something that I learned from an earlier one of your papers, which is that the Ukrainians, as they look at this — you know, they're not yet ready to go all in on AI-enabled offensive autonomous weapons. But that's a performance determination. It's not an ethics determination. They assess that, at least for right now, you know, there are valuable AI use cases, and they're pursuing them.
But when it comes to, you know, ending human control and incorporating more offensive AI autonomy, they basically say, it's not ready yet. But when it's ready — when it meets our standards for avoiding friendly fire, for avoiding unintentional civilian deaths, et cetera — we will absolutely use it. The survival of our country is at stake. And I think that's a pretty persuasive argument. It's sort of hard to rebut. I think that is everything. So let me say that both of these reports are now available on CSIS.org, and I very much encourage listeners to read them in full. Kateryna, can you give folks a preview of where you might be taking your research agenda next?
A
Well, I want to work more on the implementation side. So basically, you know, we follow this conflict and other conflicts, and I feel that the US military is lagging behind a little bit in terms of integration. So we made some changes in the acquisition process, and we're trying to buy drones right now. But procurement is just one part of the problem. The next problem is: what are we going to do with drones and autonomy and unmanned systems when we actually get them?
B
How do you get a group of forces, military forces, who are good at using this stuff, not just having it?
A
Yeah. There are so many questions related to that, and I think we should look into that aspect. Because, you know, it's probably not that sexy, since nothing's blowing up and nothing's flying, but it's super important, because when it gets to actually fighting with drones, there are a lot of questions arising. Yeah.
B
I mean, in the United States, we have such a fetish for inventing stuff.
A
Yes.
B
And we forget that, like, figuring out how to ensure that millions of military service men and women are good at using stuff is basically an equivalent challenge, if not a greater challenge, to inventing the thing. And so I'm delighted to see that you're going to be drawing additional lessons learned from Ukraine and Russia and all of that. But I should say you've also written some pretty good stuff on the conflict in Iran, which folks should absolutely check out. So, Kateryna Bondar, thank you so much for joining the AI Policy Podcast.
A
Thank you.
B
Thanks for listening to this episode of the AI Policy Podcast. If you like what you heard, there's an easy way for you to help us. Please give us a five star review on your favorite podcast platform and subscribe and tell your friends. It really helps when you spread the word. This podcast was produced by Sarah Baker, Sadie McCullough and Matt Mann. See you next time.
Host: Gregory C. Allen (CSIS Wadhwani AI Center)
Guest: Kateryna Bondar (CSIS; Ukrainian defense expert, analyst, and author of new reports on Russian military AI)
Date: April 14, 2026
This episode provides a deep exploration of how Russia is integrating artificial intelligence (AI) and autonomy into its military operations, particularly in the context of the Ukraine war. Gregory Allen interviews Kateryna Bondar, whose recent open-source research and analysis offer unique insights into Russia’s rapid development of AI-driven military systems—especially unmanned systems and command-and-control (C2) tools—and compares them with the Ukrainian approach. The discussion highlights the evolving arms race in military AI, practical battlefield deployments, Russia’s innovation strategies, the challenges and risks posed by rapid technological acceleration, and what these changes mean for US and allied policy.
[01:05-05:18]
Notable Quote:
“I was tracking tech advances… in 2019, I made research on what kind of new technology can be helpful to Ukraine against Russia to win this war. And this is when I got interested in military AI…” (Bondar, 02:45)
[08:08-10:43]
Notable Quote:
“Is it technology defining the conflict or the conflict defining technology? …these two things actually go in parallel.” (Bondar, 08:08)
[10:43-19:03]
Notable Quotes:
“When Russia identifies a gap, it starts to catch up really fast.” (Bondar, 12:35)
“For them [Russia]... Delta and Palantir are the benchmarks. And this is evil Western software.” (Bondar, 18:43)
[22:51-27:32]
Notable Quotes:
“You cannot build CJADC2; it can only be grown.” (Allen paraphrasing Bondar, 25:22)
“They keep integrating different solutions and it builds out this big ecosystem of products.” (Bondar, 23:44)
[31:17-37:27]
Notable Quotes:
“Computer vision still counts as AI.” (Allen, 31:40)
“Customizing those [LLM] models for Russian formal military language proved to be really challenging… Russians themselves evaluate this capability at the level of 1/3.” (Bondar, 36:55)
[39:15-44:37]
"Our ability to keep pace with global change will determine our sovereignty and in the near future, without exaggeration, the very existence of the Russian state." (Putin, read by Bondar, 42:22)
[44:37-51:44]
[52:58-63:21]
Notable Quotes:
“It can fly autonomously, can search for targets autonomously, and can engage the targets autonomously just because there was no evidence that there is any way an operator can control what the system does.” (Bondar, 57:04)
“[In Russia’s view:] A swarm of drones is the new nuclear bomb.” (Bondar, quoting a Russian general, 62:30)
“Here in the West, we’re limited with ethical principles... but Russia doesn’t have that obstacle.” (Bondar, 63:06)
[63:41-67:21]
[67:41-74:43]
Notable Quotes:
“We have to figure out how to develop the [autonomous] systems, how to integrate them into tactics and doctrine… [and] experiment—with control, but with freedom.” (Bondar, 67:41)
“The question is not can autonomous weapons be used unethically? The answer is obviously yes. But you can also use your fists unethically.” (Allen, 70:19)
Episode recommended for: Defense policymakers, military technologists, AI governance professionals, anyone tracking the evolution of autonomy in warfare and strategic competition.
Further reading: Bondar’s two new reports at CSIS.org.