A
Hey, everyone, and welcome along to Seriously Risky Biz. This is our podcast all about cyber security policy and intelligence. My name's Amberly Jack, and in just a moment I'll bring in Tom Uren, who is our policy and intelligence editor, and we're going to chat all about the Seriously Risky Business newsletter that Tom has put together. You can, of course, find that, read it, and subscribe over at our website, risky.biz. First, though, I'd like to thank our sponsor for this week, which is runZero. You can also check out, in the Risky Bulletin feed, this week's sponsor interview with Casey Ellis, who chats to Tod Beardsley, the VP of Security at runZero, all about KEVology, which is the company's analysis of CISA's KEV list. You can find that interview in the Risky Bulletin feed over at our website. But in the meantime, Tom, great to see you. Thank you for joining me.
B
Hi, Amberly, how are you?
A
Yeah, really good, thanks. I want to jump straight into your newsletter this week and chat first up about Europe, and in particular, Tom, how do you solve a problem like the Russians? European officials are calling for countries to build, I guess, strike-back cyber capabilities, which you're saying isn't really going to make a difference. And this surprised me a little when I first read it, because I feel like I know you well enough now, Tom, to know that you're not averse to a little bit of cyber fisticuffs. But what you're saying here is that Europe already has tools that it's not using, so adding cyber to that arsenal isn't really going to give them the political will they need to pull that particular trigger. So, starting from the beginning, Tom, do you want to tell me a little bit about this Russian sabotage campaign that began all these calls for cyber capabilities, and we'll go from there.
B
Yeah, yeah. So many things. So, stepping back, this was the Munich Security Conference, where people get together to talk about security issues. And there were several European leaders and officials saying basically the same thing: that Europe needs more of its own cyber capabilities. And often they would say this was to strike back, or to counter Russian aggression. Now, there are different aspects to Russian aggression, and I don't think striking back is the right framing for much of what they want cyber to actually do. So in the piece, I focus on what I think is the most egregious part of Russian aggression against the rest of Europe, other than Ukraine, which is this sabotage campaign. It's basically run over Telegram, and there are now pretty clear direct links back to the Russian intelligence services. They've hired the Wagner Group, which is a mercenary outfit, and the reason Wagner makes sense is that it has both propagandists, basically PR people, and recruiters. They send messages out to disaffected, marginalized youth across Europe and try to get people who might otherwise be petty criminals to do things like blow up warehouses or send letter bombs. That strikes me as pretty bad, unacceptable. But it's not actually clear what to do about it. It doesn't seem serious enough to warrant an actual conventional military response, yet arresting the people who carry out the crimes seems totally inadequate, because they're basically, people use the phrase, disposable agents. You just message someone on Telegram. So part of the vibe is that we want to be able to do something about this campaign, and cyber seems attractive because you can do something covertly; it's not as if you're necessarily putting your name on a conventional weapon.
I think the simplest, easiest thing would be to do the same in return, which is to try to find disaffected Russian youth and get them to chuck bombs. But that just seems totally out of character for democracies. So in a way, cyber seems attractive because it's a secret thing we can do which can maybe achieve the same effect, but because it's deniable, we feel like we're more in control. It's not stooping to the same level, I guess.
A
And when you're looking at what, as you describe it, kind of sounds like Russian thuggery, I guess you have to be really careful about how to respond, because you don't want to go so far that you get massive retaliation in response, but you kind of don't want to sit back and do nothing either.
B
So, yeah, I think there's an interesting argument about what to do in response. One argument I've heard, which I actually find somewhat compelling, is that if you do nothing, you actually encourage Russia to just keep pushing, to do more and more, because, oh well, we did this and nothing happened, so we may as well do the next thing, which is even more audacious.
A
If you don't stand up to a bully.
B
Yeah, exactly. You need to push back, otherwise it keeps escalating. And the problem with using cyber is that to push back, you need to do things that hurt Russia more than its sabotage campaign hurts you. You need to escalate, escalate to de-escalate. And I just think, if you're not prepared to hire Russian thugs, well, the difference between hiring Russian thugs and having an exquisite cyber operation that causes a warehouse to catch fire is the political will. If you're not doing the first thing, adding cyber as a tool, where you still need to escalate to de-escalate, is actually problematic from a political point of view. And there's a whole range of other things that European countries could be doing. There was a report on Russia's sabotage war a while back, and it has a whole list: additional sanctions; disrupting Russia's shadow fleet, which is a fleet of vessels circumventing sanctions, so you could just go and interdict those boats if you wanted to; a whole lot of basically conventional and diplomatic responses that they could take right now. And I think cyber operations would be a tremendous complement to those actions. You can do them in concert, and over the whole suite it would be synergistic. The pressure built by doing all of these things would be greater than the pressure built by doing any one of them.
A
And the thing about complementary actions is they need something to complement.
B
That's right. Yeah. And to convince Russia to stop, if you've got a whole lot of pressure on different fronts, that means you don't have to have tremendously, I'll use the word explosive, but big and noisy cyber operations. You can have little ones that add to the pressure. It just felt to me like people are talking about a magic bullet, that when we get it, it'll be great, rather than: let's do things now and then also work on that magic bullet, because that would be nice to have as well, a nice complement to what we're already doing. So I'm not arguing that they shouldn't develop cyber capabilities. I just don't think it's the solution to the most pressing problems they have today.
A
And I want to jump in, Tom, to the next piece that you've written about today, which is AI. So, AI distillation attacks. They're a thing, and it seems like a few American companies at the moment really want the government's help to protect their intellectual property. But why don't you start off, Tom: what are distillation attacks? What are we talking about here?
B
Yeah, so there's another term for them which kind of makes more sense: model extraction attacks. And the trick here is that you just ask a model questions, and if you do it in the right way, you can figure out how that model works and use that knowledge to train your own model. So you can have, say, an open-source model that's not as capable as the latest frontier models, but with a little bit of work you can get the frontier models to, in a way, be a teacher, and you can level up your own model for a fraction of the cost. I guess the way I came across this is, I don't know, maybe it's interesting. There was a Google report on AI threats, and usually those reports say, you know, our model is so wonderful, it's been used for all these sneaky attacks, and here's what we've done to stop them. And it's interesting to see how threat actors are incorporating AI into their workflows; this one says they're using it for everything, every stage of the attack lifecycle. That's interesting. But this report started off with distillation attacks. It does a pretty good job of explaining what they are, and then it frames them as intellectual property theft. And it struck me that they've put it right at the top of the report, they want people to pay attention, yet it's never appeared in a previous report. And in the same week, OpenAI sends a memo to a House select committee and says Chinese threat actors in particular are stealing our special sauce. They particularly call out DeepSeek, and they say it's running these distillation attacks in an organized and deliberate manner, and this is eroding our advantage and it's an unfair playing field. What was quite striking to me is that typically US rhetoric is all about free markets and letting there be an even playing field between different countries. And here we have an American company...
And I think it was also the intent of Google's report, basically saying: look, policymakers, our stuff is being stolen, and it's actually really difficult for us to stop it. And it's difficult because it's not like, air quotes, traditional intellectual property theft, where they break into your company and steal plans or secret recipes, that kind of thing. You just ask the model questions. So in the normal course of business you can steal stuff from us, and it's hard to detect.
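[Editor's note] The mechanics Tom describes, asking a black-box model questions and training your own model on its answers, can be sketched with a toy example. Everything here is hypothetical: a tiny linear "teacher" with secret weights stands in for a frontier LLM, but it shows how query access alone lets a "student" replicate a model's behaviour.

```python
import random

# Hypothetical "teacher": a black-box model with secret internal weights.
# The attacker can only query it and observe outputs, never see SECRET_W.
SECRET_W = [1.5, -2.0, 0.7]

def teacher(x):
    # Black-box scoring function; internals are hidden from the querier.
    return sum(w * xi for w, xi in zip(SECRET_W, x))

# Model extraction: ask the teacher lots of questions (queries),
# record its answers, and fit a "student" to imitate those answers.
random.seed(0)
queries = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
answers = [teacher(q) for q in queries]

# Train the student with plain stochastic gradient descent on squared error.
student_w = [0.0, 0.0, 0.0]
lr = 0.1
for _ in range(500):
    for x, y in zip(queries, answers):
        pred = sum(w * xi for w, xi in zip(student_w, x))
        err = pred - y
        student_w = [w - lr * err * xi for w, xi in zip(student_w, x)]

# The student's weights converge toward the secret ones,
# recovered purely through query access.
print([round(w, 2) for w in student_w])
```

With a real LLM the same idea plays out at scale: prompts in, responses out, then fine-tune a cheaper model on the collected prompt-response pairs. Which is why, from the defender's side, it looks like ordinary customer traffic.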
A
Yeah.
B
OpenAI says that there's actually a whole ecosystem in China around distillation attacks: there are resellers, and they're running quite complex multi-stage distillation techniques. And they lay out other reasons this matters. They talk about things like American leadership and American values, and how the Chinese Communist Party will instill Communist Party values instead. So I found it somewhat amusing that you've got an American company so blatantly saying: we need government assistance.
A
Yeah, yeah.
B
And they lay out some of the things government can do. For distillation attacks in particular, they have this small list: you can help with intelligence and information sharing, and presumably that would be used to identify the threat actors and, I don't know, maybe blacklist IPs or something. They talk about having an industry standard for trying to prevent these attacks. But they also step back and say the bigger picture is that when it comes to developing AI, there are two really key inputs: electrical power and computing capacity. And they go on: we're not winning the electrical energy war, China is developing ten times the capacity the US is every year. So they imply there's nothing we can do about electricity. But then there's compute capacity, and they leave it unsaid, but my reading was that there have been chip export bans and they're basically implying that stronger ones would be good. So I thought it was quite funny: they talk about making sure there's a level playing field, and the way to make sure there's a level playing field is to make sure our competitors don't have the chips they need. I'm not a fan of hypocrisy and that kind of language, but I think they could be right. There's a chance that AI becomes entirely a commodity and it doesn't matter where models come from; you can argue about this. Every model will be roughly the same, some a bit better, some a bit worse. And that probably is the future if you can just steal the special sauce from a better model and everyone is stealing willy-nilly. There's also a chance that having your own indigenous, sovereign AI capability is super important for a country's economic and national security. My guess is that probably more people believe that than not.
And if that's the case, well, then yes, you should do things that help your indigenous AI champions. If you're a policymaker or a lawmaker, that sounds like an important thing. So they've kind of presented the intellectual argument for doing more to help them. And you look back at the history of cyber espionage, and the Chinese government has done a whole lot to help its companies. So I think it would be stupid to just say, well, that's not what America's about, we want the free market to rule. I don't think that'll happen. I think they've presented quite a good intellectual argument, and it makes sense. Now it's up to policymakers what they actually do.
A
And, I mean, you sort of touched on it there: there are a couple of paths AI could go down. Path A is, like you said, it's going to be a commodity. Path B is it's actually going to be really important to the economy and national security and so on. And surely if you are a policymaker, it is better to err on the side of this being really important, just in case. I don't know.
B
Yeah, I think so. I think so. I'm uncertain about the future, but if it is really important and you fail at becoming a leader as a country in that field, the consequences seem quite bad, so you should try to avoid that. If it's all a commodity and it doesn't matter, well, what's the worst that's happened? You've spent some time and effort trying to avoid a future that didn't happen, but the consequences are not that bad. So, yeah, I think it totally makes sense. I suppose the complication is that the Trump administration has ratcheted back some of the chip export controls, loosened them a little, and it's hard to know what the longer-term story will be. I think there's a whole lot about the trade war and critical minerals playing into it, so AI is not the only story in town; there are competing factors. Like I said, they've presented an argument. What's next?
A
Just touching on government action: you spoke a little about the chips being a big one, and OpenAI laid out a very clear plan, well, kind of a clear plan. Here's what you should do. What should governments be doing?
B
I thought it comes down to restraining Chinese AI capacity, and that comes down to tightening up on chip restrictions. That seemed to me the whole reason you talk about distillation attacks: they're hard to stop because it is just asking questions. That's what you do with LLMs, you ask them questions and they give you answers. If this were traditional cyber espionage, you could say, well, OpenAI or Google, whatever, you're a big, well-funded company, get your own security in order; we can help you with the normal business of CISA, we can give you advice on how to secure your own stuff. But when there's a form of intellectual property theft that you can't stop, and this is super important, what are the other levers we've got? And the framing of OpenAI's memo was: the fundamental inputs are electricity and chips, and if you can't do anything about electricity, there's chips. I think it's interesting they didn't talk about talent acquisition, so, you know, H-1B visas, that kind of thing. It seems they've focused on a place where they could get some traction, which is chip export controls, because they're already in place, notionally at least. And there are lawmakers who are interested in making sure that chip-making equipment, or certain types of chip-making equipment, is also not exported to China. So it seems like a space where they could get some traction from lawmakers.
A
It's a crazy world, right? Cheating at school would have been so much easier if you could have just asked the person next to you straight up, what's the answer, as opposed to trying to sneak your way through.
B
In Iraq, they actually turn off the Internet during school exams to stop people from using AI to help answer questions.
A
See, I'm old enough that that was never a problem; we just had to turn our book upside down. But anyway. All right, Tom, we might leave it there, but thank you so much for your time as always, and I look forward to catching you sometime next week. You can, of course, read and subscribe to Tom's newsletter, Seriously Risky Business, over at our website, risky.biz. But Tom, have a great week.
B
Thanks, Amberly.
Podcast: Seriously Risky Business (Risky Bulletin feed)
Host: Amberly Jack
Guest: Tom Uren, Policy and Intelligence Editor
Date: February 19, 2026
Episode Theme:
Exploring the limits of cyber operations as a solution for political and security challenges, particularly in Europe’s response to Russian sabotage, and delving into AI model theft as a geopolitical and business risk.
This episode dives into the current landscape of cyber policy and intelligence, focusing on:
Europe's response to Russian sabotage [00:54 – 08:12]
"The difference between hiring Russian thugs and having an exquisite cyber operation that causes a warehouse to catch fire is the political will."
— Tom Uren [06:12]
"People are talking about, here’s a magic bullet that when we get it, it’ll be great, rather than let’s do things now and then also work on that magic bullet... it’s a complement to what we’re already doing."
— Tom Uren [07:38]
AI distillation attacks and chip export controls [08:12 – 18:33]
"I found it somewhat amusing that you’ve got an American company so blatantly saying we need government assistance."
— Tom Uren [12:03]
"The way to make sure that there’s a level playing field is to make sure our competitors don’t have the chips that they need… I’m not a fan of hypocrisy and that kind of language, but I think they could be right."
— Tom Uren [13:55]
Amberly Jack [15:01]: "...surely if you are a policymaker, it is better to err on the side of this is going to be really important, just in case."
Tom Uren [15:28]: "Yeah, I think so. I think so. Like, I think I'm uncertain about the future, but it seems that the consequences of that path, if you … fail at becoming a leader ... the consequences seem quite bad. So you should try and avoid that."
"There's a form of intellectual property theft that you can't stop ... What are the other levers we've got? ... If you can't do anything about electricity, there's also chips."
— Tom Uren [16:45]
This episode offers nuanced, clear-eyed perspectives for anyone interested in the intersection of cybersecurity, geopolitics, and technology policy.