A
Hi, everyone, and welcome to Risky Business. My name's Patrick Gray. We've got a fantastic show for you this week, some really interesting news to get through, and a lot of AI stuff in this week's show, but it's all very interesting. We'll be getting into that with James Wilson and Adam Boileau in just a moment, and then we'll hear from this week's sponsor. And this week's show is brought to you by Corelight, which of course is the company that maintains Zeek. And if you would like a 200 gigabit per second full line rate, you know, network security sensor, your options are fairly limited, but Corelight can do that for you. Obviously this unified hardware-software sort of thing is one of the reasons why Corelight is not really at risk from AI like some software companies are, even in security. But Brian Dye, the chief executive of Corelight, joins us this week to have a chat about AI, AI in the SOC, and about how we've gone from that being a radical, kind of risky idea a year ago to it now just being the way things are done pretty much everywhere. Very interesting conversation, and coming up after this week's news. And yeah, as I said, this is kind of a security AI edition of the show. The first thing we're going to talk about today is some work out of the AWS security team about a whole bunch of Fortinets getting owned by a threat actor who is, like, heavily using AI. There's a few things I found interesting about this. First of all, I think the reaction from a bunch of people in offsec on social media really misses the point, because they're like, they just used existing tools, they didn't do anything that cool. Again, we've talked about that on the show before. Not really the point. The point is it helps people who are not very capable become more capable and organized. And that's the second thing that struck me about this: reading through it, it really seemed like the threat actor in this instance did not really know what they were doing and yet were able to pivot from a Fortinet device compromise to full domain compromise through Mimikatz and whatnot, where if any part of their automated chain broke down, they didn't really have the skills to work around it. But it didn't matter, because they would just move on to the next target where their chain did work. Adam, what were your impressions here? Is your take here broadly similar to mine?
B
Yeah, it is. I mean, none of the tradecraft here is particularly sophisticated, and as someone who, you know, grew up doing offsec, my gut reaction is like, well, no, you don't know what you're doing. But on the other hand, managing 600 endpoints that you've shelled, and doing that as a not particularly skilled operator, as a small team, whatever it is that this group is, that's actually hard work. Like, keeping track of 600 endpoints on a spreadsheet even when you're, you know, a super skilled elite hacker, that's real work. So the hacking itself, low rent, but the reality is at scale you can do this with, you know, these kinds of tools. And I remember the first time that I had to pivot through a network in a Windows environment and do the, like, DCSync thing that Mimikatz does these days, before Mimikatz even existed, right? And having to do Kerberos attacks and these kinds of syncing stuff way, way, way back before there was tooling, it was legitimately hard, right? So having tools that empower you to move quickly... you know, when I was doing this stuff, Mimikatz came along and made everything a lot easier. Hey, now you can just ask a bot to do it for you, and yeah, I mean, it gets the job done. It's not dumb if it works.
A
And you know, in this case it looks like a financially motivated threat actor, probably a ransomware crew by the looks of things, because they're doing stuff like targeting Veeam backups and whatever. But I just love it that they're going after these Fortigate appliances with, you know, commonly used, commonly recycled credentials. So I'm guessing it's like admin admin or admin firewall or whatever, right? And then these are domain-joined appliances, which is how they're going from there to, you know, Mimikatzing their way to great glory. James, I mean, surely you would agree that if the low rent ransomware crews are using admin admin to own Fortigates and do this sort of low rent hacking at scale, you know, we can only imagine what some of the better APTs are cooking up.
C
Yeah, absolutely. Like, I totally see Adam's point that from a security perspective it's like, you don't know what you're doing. But when I read this article I went through this constant loop of, wow, that's actually quite a sophisticated use of AI, and they've crafted multiple agents to do this. But then I have to keep telling myself, that's late last year's thinking. With some of the developments in the models, particularly around Claude 4.6 and some of the advancements in Codex, the line of abstraction's gone so high now that you could probably just give something like Claude Code or even an OpenClaw bot the task of, hey, I want to own a bunch of Fortinet gear, because then I can sell the access to it. Let me know when you're done. And yeah, what a world. That prompt would probably be enough for it.
A
Can you please go own a bunch of Fortigates for me? I mean, you know, obviously I think you're going to bang into some guardrails there, but we are kind of at the point where that would be possible if you had a de-guardrailed Claude.
C
Totally. And maybe not even completely de-guardrailed. You could imagine this could begin with: I've got a bunch of Fortinets and I need to keep them safe. What would it look like if an attacker was going to come after me? And then adjust.
B
Yeah, the fact you have to just social engineer your AI, that's the main skill for being a hacker these days. It's not hacking, it's social engineering the bot. What a world.
A
Yeah, I think I needed to generate an image of Donald Trump for a YouTube thumbnail at some point and Gemini kept telling me no. So then I just asked it to generate an image of a Donald Trump impersonator and it did it, right? So socially engineering the AIs can be fun. They're getting better at stopping that stuff, though. And it's so weird, because they're non-deterministic. Like, that worked once, and I wasn't totally happy with the image, so I asked it to do it again and it wouldn't, right? So you never know how these guardrails are going to apply. But look, speaking of guardrails and taking them off: it was last week, I think, that we spoke about how Google had sounded the alarm on distillation attacks against Gemini. We've got a much more detailed report now from Anthropic, which has directly accused three Chinese labs, including DeepSeek, of trying to distill Claude. There's some interesting statistics in here. There were 24,000 fake accounts, right? So they're distributing these distillation queries across 24,000 accounts, and there was something like 16 million queries to the model, or prompts to the model. So it is funny, people are having a bit of a laugh at Anthropic here, given how much these models rely on other people's copyright. It is kind of funny that they are having a bit of a complaint about other people using theirs. That said, I could understand why policymakers in particular would be quite concerned by this when we're trying to restrict countries like China, for example, from getting access to these frontier models. You know, if they could just run some distillation attacks and get, you know, 95% of the way there, that's not good if you're America.
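For anyone who wants the mechanics behind the story: distillation just means training a cheaper student model to mimic a stronger teacher. Below is a toy sketch of the classic recipe in PyTorch; everything in it (the model shapes, the temperature, the random inputs) is invented for illustration. In an attack like the one Anthropic describes, the teacher's logits aren't available, so the student would instead be fine-tuned on the sampled text harvested through those 24,000 accounts.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy teacher and student. In an API-based distillation attack the "teacher"
# is the remote frontier model and you only get its sampled outputs, not
# logits; this sketch shows the textbook logit-matching version instead.
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8))
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softer targets carry more of the teacher's behavior

for step in range(1000):
    x = torch.randn(256, 16)              # stands in for harvested prompts
    with torch.no_grad():
        teacher_logits = teacher(x)       # stands in for the API responses
    student_logits = student(x)
    # Standard distillation loss: KL divergence between softened distributions.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the sketch is how little machinery the student side needs; the expensive, detectable part is the 16 million queries of harvesting.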
C
It's a vicious cycle, right? Why are these distillation attacks happening? Because access to the models, and to all the chips that are required to either train or do the inference, has been so restricted. And so it feels like a bit of a loop where, yes, Anthropic and Google are yelling about distillation being a problem and it's going to give adversaries access to the same level of capabilities in the model, but they're going to find a way to do this regardless. And I think the thing that is kind of missing from this dialogue is, Anthropic bases itself on being the safe AI company, but the more they rally for export controls and policy to prevent this, that's resulting in these distillation attacks, which produce less safe models with fewer guardrails, and therefore their actions are putting less safe AI into the hands of people that want to use it in unsafe ways. It just feels a bit self-defeating at the end of the entire cycle.
A
Well, everybody wants to use Anthropic for unsafe things. We'll get to more of that in a moment. But Adam, I wanted to quiz you on another aspect of this, which is, you know, I clicked through to the Anthropic report. I was all excited to settle in for a long read. It's actually pretty short. And then there's the section on how they're responding. Because this is the big thing, right, the headline on this blog post is "Detecting and Preventing Distillation Attacks". And you're like, oh boy, there's going to be some alien tech in here, because I'd imagine preventing distillation attacks, I mean, you're going to have to use alien technology to do that. And then the "how we're responding" bit down the bottom is like, oh, we've built some classifiers and some behavioral fingerprinting systems designed to identify distillation attack patterns in API traffic. And you're thinking, oh, I reckon anyone worth their salt could probably sidestep that. And then there's intelligence sharing, where they're sharing technical indicators with other AI labs, and you're like, yeah, again, that's how...
B
...you know you've failed at response, when you're reduced to, like, well, intelligence sharing.
A
Well, and then better, you know, strengthened verification for educational accounts and security research programs and startup organizations. And you're thinking, oh, that doesn't sound like a particularly effective control. And then their countermeasures: they're developing product, API and model level safeguards designed to reduce the efficacy of model outputs for illicit distillation without degrading the experience for legitimate customers. I think that's going to be hard, personally.
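For a sense of what "behavioral fingerprinting in API traffic" might boil down to, here's a hedged toy sketch. Anthropic hasn't published their detection features, so the signals and thresholds below are completely invented; the idea is just that harvesting traffic tends to be high-volume and template-driven in ways human use isn't.

```python
from collections import Counter

def distillation_risk(prompts: list[str]) -> float:
    """Toy heuristic: sustained, template-driven, high-volume querying looks
    more like dataset harvesting than like a human user. Invented thresholds."""
    if not prompts:
        return 0.0
    # Template concentration: share of prompts opening with the same prefix.
    prefixes = Counter(p[:40] for p in prompts)
    concentration = prefixes.most_common(1)[0][1] / len(prompts)
    # Volume term: humans rarely send thousands of prompts per account per day.
    volume = min(len(prompts) / 5000, 1.0)
    return 0.5 * concentration + 0.5 * volume
```

Which is also why the skepticism here is warranted: anyone worth their salt randomizes their templates and paces their traffic to sit under whatever thresholds you pick.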
B
Yeah, I think so. I mean, you kind of want them to come up with some sort of, like, let's classify distillation happening fast enough that we can respond in real time, and now we've got some technique that reverse-distills, that craps up the model that they're trying to teach by giving it bad output. That's kind of the adversarial future that I want. If we're going to live in this cyberpunk future, then that's what I imagine it's going to look like. But, you know, none of this filled me with confidence that they really have a great answer for it. And as James said, maybe this is just inevitability happening, and this is just how it be, and we have to deal with it. You know, China can steal jet engine designs, they can steal models. They're going to do it, and it's too late by the point that you're writing blog posts like this. I don't know.
A
Well, and there's a lot of scuttlebutt around that they've been successfully buying and accessing chips that they're technically not supposed to, and then just pretending: yeah, look, we trained this one on a bunch of potatoes. And it's like, no, you've got a basement full of Blackwells down there and that's how it's working. But yeah, you just sort of got to think, if you're China, you're probably loving this, right? Because you get 95% of the benefit with none of the capex, right?
B
So that's how I'd do it.
A
Good for them, I guess. Moving on. And look, staying with Anthropic branding itself as the safe AI. You know, all over that blog post that we were just talking about, they're like, oh, this is really concerning because the Chinese are not going to use it safely and they're going to remove the safeguards that we have on this, and blah, blah, blah. And this is happening as there is a big blow-up between the Pentagon and Anthropic. Pete Hegseth has, you know, called the CEO to his office and said, you need to remove the safeguards from Anthropic so that we can use it to do whatever we want with it. Or, you know, they're talking stuff like declaring them a supply chain risk so the government can't use them, or even kind of pulling a pseudo-nationalization lever and using the Defense Production Act to force Anthropic to remove some of these things. You know, I think this is actually a little bit more nuanced than just the Pentagon doing something awful here, because our colleague Tom Uren pointed out to us yesterday, when we had our weekly sort of editorial meeting, that it is the Pentagon's job to kill people. So maybe you don't want a woke frontier model helping you to design the campaigns to kill people. James, what are your thoughts on this?
C
Yeah, there is a really interesting story arc behind this. So Anthropic was actually the first model to essentially get into the hands of the Pentagon, but that was done through a partnership with Palantir, and that's how Claude ended up being able to work on classified networks. So they were actually the first ones to get there. But more recently, with the Maduro event happening and there being a mention of Claude being used, this raised questions from Anthropic, where they essentially asked Palantir, hey, was our stuff used in that raid? And Palantir actually took offense to this, and they said, like, how dare you even question whether you're used in this? And so that's been a separate scuffle that seems to have then excited Hegseth. But the subtlety that I found interesting is, when Hegseth gets riled up about this, he talks about this being for a new network, a new project, a new way that they're using AI. And there have actually been separate individual contracts signed with many of the other frontier labs to join this new network, and Anthropic is the last holdout. And that's the final sort of flashpoint that's come up now: he's given them the deadline of Friday to say, you'll join this new program with all of the other frontier models, for my new AI network that's going to have access to all of this defense data, or else. So I'm looking forward to Friday.
A
Yeah. I mean, Anthropic's case is that they will not let Anthropic be used to do mass surveillance, or to kill people without a human in the loop, right? Which 100% sounds like, yep, that's reasonable. But it's also kind of not their job to put those sorts of constraints on an organization like the Pentagon, which is tasked with, you know, doing deadly stuff. But I was chatting with a friend about this today, and I was sort of putting forward the case that maybe the Pentagon had a good argument here, and then they were like, well, what about IBM selling to the Germans back in the 40s? Which I thought was a bit of an extreme comparison, but I took their point. Adam, where did you land on this?
B
Yeah, I mean, it's a difficult one, right? Because, like, can you imagine if Lockheed Martin decided that they were not willing to sell weapons that killed people? Like, they would lose their government contracts.
D
Right.
B
Because an F-35 that can't...
A
Yeah, they decided to just... well, the next F-35 will only fire Nerf weapons, foam weapons.
B
Like, clearly that's not a tenable situation. But on the other hand, private sector companies can make decisions about what products they build, and how, and who they will sell to, right? That's also their right. And it brought to mind for me the case of Boston Dynamics, right, that make robots. Google bought them for a little bit and then sold them again, right? And they're making autonomous robots that absolutely could be used in military contexts, but they've kind of decided that that's not really the market they want to be in. They want to sell for industrial use, and that's kind of their choice. And the government can't really turn around and say, we'll force you to make killer robot dogs. So on the one hand, I feel like Anthropic is absolutely within their rights to say, we don't want our technology used to murder people, and the consequence of that is we won't sell it to the military, and everybody can kind of do their own thing there. But, you know, I'm glad that somebody is willing to say, we won't have our software used to kill people. I mean, I would rather people not get killed, obviously. The Department of War kind of has it in the name that that's what they do. But if they want defense contractor grade AI, go buy defense contractor AI.
A
Yeah, it's funny, I just checked on the Google Boston Dynamics thing, and yeah: from 2013 to 2017 owned by Google X, then went to SoftBank, now owned by Hyundai Motor Group. So there you go. That's about as interesting a history as you can get. Now, what else have we got here? We've got the embedded security scanning for Claude. So we've got a write-up here from Derek B. Johnson. This has been everywhere, but we've linked through to the CyberScoop version of this story. So basically, you know, Anthropic has released Claude SAST, right? I've been saying on the show probably for the last year that legacy SAST is in a lot of trouble thanks to large language models, and then sure enough, they ship Claude Code Security. Try saying that five times quickly: Claude Code Security. And the funniest part here was the investors took a baseball bat to security company shares on public markets, including companies that would be completely unaffected by this. I'm pretty sure even CrowdStrike copped it, and Fortinet, and Palo. And it's like, oh no, Anthropic's figured out software vulnerabilities, so the whole sector copped a drubbing. James, I know you saw the humor in this.
C
Yeah, I mean, as humorous as a double digit drop in market valuations across the sector can be. But it's just such an irrational response. The sort of interesting thread in this is that the capability they released is based upon a pretty interesting data set, where they've brought together outputs of capture the flag exercises, red teaming data sets, et cetera, to really create a very custom model that's been trained on how to catch these bugs. But at the end of the day, this is just largely going to be LLMs making LLM-generated code less LLM-flaky. So it's a good thing, but yeah, it ain't going to affect CrowdStrike, that's for sure.
A
No, I mean, look, I think if we're at the point where AI-based code improvement is at the point where CrowdStrike is actually being affected, well, we don't need EDR anymore. I think that's a ways off. But Adam, what are your feelings on where all this is going? Like, I think that the SAST stuff in particular, that is now an AI product. You know, I really can't see your sort of legacy approach to that thing, like the Snyk of five years ago, being how it's done.
B
No. And it's both the rate at which AI is changing, but also the rate at which how we build software is changing, largely because of AI, right? We've seen some numbers around the amount of dependencies and the very, very rapid pace of code being included, and then everything being very dynamic in ways that the SAST products of old weren't designed to deal with. They were designed for human-driven, human-speed development. The AI world is so much quicker, and so the problems are also a bit different. The speed of supply chain compromise, or some of the worms we've seen on npm and so on. The ecosystem that those products play in has also changed pretty significantly. So it kind of makes sense that they're going to be out of it, and just bolting LLMs into it kind of makes sense. We've got this tool, we're going to be able to use it in all different ways. And reading code, reasoning about code, is much the same as writing code. So yeah, makes sense that this is...
A
...how it's going. Well, they're language models, right? This is what I always say: there is a reason its use cases in software are incredible. But look, it's not all rays of sunshine, and we got an amazing story this week, a great one here, about Amazon's cloud unit. We got some leakers apparently... well, not leaking, but, you know, talking to journos at the Financial Times about how outages have been caused by AI agents doing really dumb stuff, like saying, well, this code's no good, I'm just going to delete all of it and start again. And that caused a 13 hour outage. James, you used to work at AWS, so I'd imagine you would have some insight here beyond just the obvious first reaction of yours, which I imagine was lol.
C
Yeah, yeah, definitely. Look, the first thing I thought of was that pretty much every major large scale outage that I saw happen at AWS happened because something dumb happened, right? They just don't happen because someone did something incredibly smart and creative and it went kind of wrong. But this one is just like, yeah, you know, the agent decided to delete and recreate the production environment. Okay, hold on a sec. If your agent can delete and recreate your production environment, the problem is not the agent, the problem is you. You've got to be careful what you're putting in the hands of these agents. Amazon's just hilariously trying to convince everyone the agent's not the problem. I think there was language like, it's a coincidence that AI tools were involved, and the same issue could occur with any developer tool or manual action. It's like, yes, again, but if you can press the delete production button readily and easily, you're going to hit it sometimes. Sometimes you'll mean to, sometimes you won't, and sometimes an agent will do it instead. The higher order thing for me here is that, just as this Claude Code Security feature comes out, and it's useful because it's been trained on specific data sets that are a corpus of information about previous failures that have been seen, I think there's something missing in industry at the moment around how do we capture these agent fails in a structured, meaningful way that can turn into that corpus of data that helps us train better models that have some of these dumb behaviors baked out of them, essentially.
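James's point, that the problem is the blast radius you hand the agent, translates directly into code. Here's a minimal sketch of a policy gate between an agent and its tools; the action names and the executor are invented for illustration, not any real agent framework's API.

```python
# Minimal sketch: route every agent tool call through a policy gate so
# destructive actions require explicit human sign-off. All names invented.
DESTRUCTIVE = {"delete_environment", "drop_database", "terminate_instances"}

def execute(action: str, args: dict) -> str:
    # Stand-in for whatever actually performs the work.
    return f"ran {action} with {args}"

def gated_tool_call(action: str, args: dict, approved_by_human: bool = False) -> str:
    if action in DESTRUCTIVE and not approved_by_human:
        raise PermissionError(
            f"{action!r} is destructive and needs operator sign-off; "
            "this agent's credentials don't allow it unattended."
        )
    return execute(action, args)

print(gated_tool_call("restart_service", {"name": "api"}))   # allowed
# gated_tool_call("delete_environment", {"env": "prod"})     # raises PermissionError
```

Better still, the agent's credentials simply shouldn't carry the delete-production permission at all, so there's no gate for it to talk its way through.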
A
Yeah. Now I'll mention that because you're doing some work in that area and we'll talk about that in a moment. But Adam, I just wanted to get your thoughts on this one, because we're used to hearing of people fat fingering a command or accidentally rmrfing the wrong box, right? And now we got agents to do that for us. Like truly we are in the future.
B
We are. And who amongst us has not accidentally deleted production, or restarted it when you shouldn't, or dropped the wrong... like, it just happens, you know? Humans make mistakes, and LLMs, you know, are modeled after us, so it kind of makes sense. I guess there is the, like, if you make lots of changes at speed and you get comfortable with letting AI do it with only minimal human oversight, it's inevitable that at some point it's going to miss, right? And if we do more changes more often, we're going to see more failure. So it kind of makes sense. But overall it's just entertaining to watch them flail about in the press release, or in the communications about it, and try and make it like, not so much AI, more just human error, but maybe with some AI involved. So that's funny. And in the end, we just love things breaking. That's ultimately why I like to hack stuff: it's just like breaking stuff. Seeing stuff broken, it warms my heart.
A
And we love to see people doing crazy, risky stuff as well, which brings us to this Microsoft security blog post from the Defender security research team, which is just... the whole blog post is dripping with a sense of OMFG, WTF. Because the people writing this are saying, you know, self-hosted agents like OpenClaw are, like, heaps of fun, guys, but here's what they can do, which is less than ideal from a security perspective. And you really get the impression this is Microsoft flagging to enterprises, like, what are you doing? Running these things in prod, or on corporate systems, is a really bad idea. It actually reminded me of a conversation, James, you and I had the other day about how these agents will just find a way to get stuff done, right? So if they don't have the tool that they need to do something, and they have the rights and permissions, they will actually write a software tool, compile it and execute it in order to get stuff done. Which, you know, I'd imagine your EDR suite or your allowlisting is going to like that very much. But there's going to be people in enterprise environments who are just turning off all of the controls so that Clawdbot can get its work done. Adam, I want to bring you in on this. Do you think enterprises are going to learn the lesson quickly, that you shouldn't just let these things run riot? Or do you think this is just going to be, you know, several years of headlines of OpenClaw doing insane stuff to corporations?
B
Yeah, I think we are definitely in the market for quite a lot of bad things happening, because it's just so compelling to individual end users to solve problems with computers. We give them tools, they want to use them, because it makes their lives easier. And if they can tell their computer to do their work for them and then go have a cup of coffee, of course, the incentives are just all there. And so much of our security architecture is dependent upon: a computer is a person, and a person has security controls. It's not designed for every end user in an enterprise knowing how to write and compile code, right? Anyone who's been an admin or a security engineer in a technical organization knows what a menace technical end users are, right? Regular corpo users are just going to pointy-clicky around SharePoint, it's fine, no one's going to write a script to delete everything out of SharePoint as an end user. Even letting end users have Microsoft Access was a bad enough idea, because they might pointy-clicky in Visual Basic, and now they're all empowered like this. The controls are just not set up for it. And the Microsoft blog post here, ultimately the advice is: don't run OpenClaw with a real identity, have specific accounts. Don't run them on a real machine, run them in a virtual machine. Which defeats the whole point of these agents, right? They're meant to be able to do your job, not a job with no access, on a machine with no access.
A
You don't want to have to manage them like they're an employee, right? You want them to just get the job done. You want to be able to give them that API key, give them those credits, and say, off you go, get my job done. And look, in fact, we have published a podcast on that very topic. James's podcast feed is up and running. It's called Risky Business Features. You can find it on the front page of Risky.Biz, just scroll down, and it's in the iTunes store and all of that, you know, the podcast directories. The only service you can't get it on right now is Pocket Casts; I think I've got to go and manually submit it or something. But if you're using Overcast or Apple Podcasts, you can get it. There's two podcasts in there so far. There's the roughly 30 minute solo podcast of James just talking about OpenClaw and these sorts of issues, and then there's a fantastic podcast he did with Brad Arkin, who was formerly the CISO of Adobe, then the CISO of Cisco, and then more recently Salesforce. He's out of there now and doing some podcasts with us, which is great. That one is all about AI pen testing. Really fascinating discussion, where Brad's whole thing is like, well, do we really think dollars in, bugs out is actually the metric through which we should measure the value of a pen test? And his sort of thing is, well, the value of a pen test for him was always to sit down with someone like you, Adam, at the end of the exercise, while you stroke your beard, and he could ask you which areas of the code base you had feelings in your waters about, basically. So that's a really fun chat about a bunch of AI stuff in Risky Business Features. Please go and subscribe now, support our work here, support James. We like him, we want to keep him. Go subscribe to his podcast. But look, just one more story about all of this, more of a curiosity, something funny that happened this week. This woman, Summer Yue, who is the director of alignment at Meta Superintelligence Labs, she was messing around with Clawdbot, and at some point, despite having told Clawdbot to not do anything without asking her first, it just started, like, deleting emails from her inbox, which was quite funny. She had to literally run to her Mac Mini to start shutting down processes and stuff. And I think the fact that she actually works in this space has made this blow up into a bit of a story: she should have seen that coming, and maybe not posted about it on Twitter. I think the funny thing about it, though, was when you read through the replies in that thread where she was talking about it... I mean, the screencaps are hilarious, because she's like, I told you not to do that. And it's like, yep, you know, typical AI response: yep, my bad, good catch. But then someone replied to the thread, like, I don't know what you people are getting out of running this thing, and someone else just replied with: it's fun. And I mean, James, you're actually a Clawdbot user. You seem to fall into that category. You're actually using it for some of the work you're doing with us, right?
C
Yeah, I do. I use it a lot. You know, even this morning before the show, I'm throwing headlines at it and I'm sparring with it on ideas, trying to get it to think about novel aspects. It's interesting, right? I mean, you're accessing a massive, massive corpus of data in a novel way. Why wouldn't we experiment with it?
A
Because it's the 2026 equivalent of talking to yourself. You do realize that, right? You're, like, muttering to yourself. And the problem is that the computer is muttering back.
C
Yeah, like I said last week, or yesterday, I think: 2025, there were just voices in my head. 2026, there's voices in my head and outside my head.
It's great.
A
It's fantastic. And I think there's a couple more stories here, just real quick. Adam, you flagged this one: there was a Microsoft Office bug where they weren't supposed to be throwing confidential emails into Copilot, and yet they were. And then the fix is some sort of control around what storage locations are covered. Can you walk us through this one, Adam?
B
Yeah. So if you had Microsoft DLP, like, data classification stuff in your network, you can classify emails and documents and so on with particular classification levels. And if you set a policy which said, don't ingest confidential or whatever emails, it didn't apply, for a technical reason, to drafts and sent items, I think it was. So there was a couple of drafts...
A
That's the last thing you want indexed by an AI. They're the angry ones that you decide not to send.
B
Exactly, yeah. So that was a technical bug that caused those to be ingested when they shouldn't have been. And then the other thing that Microsoft's doing around that kind of AI-ingesting-DLP bit is that previously the DLP classifications were, like, metadata that was stored in SharePoint or OneDrive, and it didn't apply to things that were tagged on local disks. They've extended the thing that does the ingestion to respect their classification tags on local disk as well, which should vastly increase the number of places where those controls will be respected. So that's an improvement. But I imagine it's not particularly comforting for people who, as you said, had their draft emails ingested and read by the AI agents.
A
Now, we've got an update on the case of Peter Williams, who was the guy who worked for L3Harris and who pleaded guilty to stealing and selling exploits to a Russian vulnerability broker. He's been given seven years in prison, and I think that is just a remarkably short sentence given the harm that he has caused to the security interests of the Five Eyes countries. And separately, the Treasury Department has sanctioned Operation Zero, where he sold these bugs to, and the guy who operates it, and another business that he was spinning up to do similar sort of stuff. I mean, no surprises there. I would imagine that this guy, Sergey Sergeyevich Zelenyuk, will print and frame some sort of letter involving these sanctions and wear it as a badge of honor, so I can't imagine that it's going to hurt him too badly. But look, staying with Russian stuff: Pavel Durov, the founder of Telegram, is now the subject of a criminal probe in Russia. Russia's government is saying they are failing to take down a bunch of stuff that's been reported to them, and they're facilitating terrorist activity. Facilitating terrorist activity by failing to respond to law enforcement takedown requests: funnily enough, a very similar complaint to the one made by France. So they're degrading Telegram as well. The read on this for most people is that the reason they're targeting Pavel Durov, the reason that they're degrading the performance of Telegram, is they want people to use Max messenger, which is their equivalent of WeChat, which allows the government to surveil communications, et cetera, et cetera. But also we're seeing the Ukrainians pushing for tighter regulation of Telegram, because they're saying that the Russians are recruiting locals to gather intelligence and perform various acts for money, and they're recruiting them over Telegram, and they don't have insight there. So I think this is interesting: you've got Russia pushing towards a more open messenger, which they can surveil through Max, but also you've got countries like Ukraine, which have a legitimate reason to want to take away privacy enhancing tools.
B
These tools are used, you know, especially in Russia and Ukraine; they're very popular in those demographics, and these two places are at war. So, of course, both sides use them in their own ways, and it makes sense that everyone has their own bone to pick. I mean, the move towards Max, I think, is a thing that we flagged, I think it was late last year, whenever it was. This is just Russian modus operandi, right? Push everyone around into a place where they can control, and it just kind of makes sense. And it underscores the importance of communication as a thing that everyone has an interest in either securing or observing. And back when we used to do this over the plain old phone network, right, we had wiretaps and, you know, lawful intercept and all of those sorts of things. Once the communication moves up into over-the-top apps with end-to-end crypto, then of course everything starts to move around, and not everyone can afford iPhone exploits to go read stuff on the endpoint. So it makes sense to look at other ways to solve the problem, and this is them doing that. And honestly, this is probably the most pragmatic way for Russia to address this issue, because they can't iPhone-exploit their way there, they can't lawful-intercept their way there.
A
It's the China playbook. I mean, this is why China wound up doing WeChat, right? Which was just, corral everybody into a spot where they can do it. I mean, I find it interesting when you're looking at this topic in that region, right? It's interesting because this is an extreme environment, particularly on the Ukraine side. It's the debate pushed to the extreme, where privacy enabling tech is being used on one hand to recruit Ukrainians to do things on behalf of Russia. But on the other hand, we know that the Ukrainian military makes a lot of use out of Signal for all sorts of stuff, including, I believe, transmitting live video feeds from drones, right? So yeah, as I say, I just think it's interesting where you can see both the benefits and the drawbacks of privacy enhancing technology in a country that's in a state of war with a much larger neighbor.
B
Yeah, no, these kinds of cypherpunk trade-off things have always been really difficult, because the extreme positions on both ends have real problems, and then the middle ground is just difficult, right? And I don't know that anyone has really figured out how to navigate all of these various equities and come up with a solution that's workable. All we've got is a bunch of kind of best-effort work, and it varies between countries, it varies between cultures and, you know, kinds of government types. And there isn't an easy answer to any of this stuff.
A
Yeah, I think Telegram's a particularly interesting one, because it sort of operates like a cross between a messaging platform, a social network and, like, a Discord. You know, it's a hybrid of a bunch of different things. Anyway, it's slippery. But it's also like, if you wanted to reach out to a bunch of people in a certain community, Telegram's a way to do that, and Signal less so, right? Signal's more for direct messaging, you know, and groups as well, but I'm sure you see what I mean. Anyway, moving on. And look, we're going to talk about Persona now. Persona have sponsored a Risky Business Snake Oilers segment, and they may sponsor something again in the future; we just have to disclose that whenever we talk about someone who has sponsored us. But we had this very strange situation over the last week where a group of researchers did a bunch of fingerprinting of Persona's infrastructure. Now, Persona does identity verification, and a lot of the companies that are now being forced to do, like, age verification for certain online accounts, Discord is one of them, I think in the UK, are going to companies like Persona to actually do this stuff. I think ChatGPT uses Persona for identity verification as well. But someone did a bit of a teardown and fingerprinting of their infrastructure, and they kind of added two and two and got 55, right? And wrote this long blog post about how every time Persona scans someone's face they're sending it to the government, and they're doing all of this really privacy violating stuff, and they're in cahoots with ICE. And it was all... it was a lot. And it got so bad for Persona that their chief executive wound up writing open letters back saying, this is not true, and whatever. Before this turned into such a big deal, when that blog post was first circulating, I asked both of you, Adam and James, to have a look at this blog post and let me know what you thought of it, and whether there was a there there. You both came back to me and said there isn't. James, let's start with you. What was causing these people to become so, you know, tinfoil-hatty over what they found?
C
Yeah, this is just unhinged. It all began with them finding front-end code that had all the JavaScript source maps, right? So that's the equivalent of basically leaving around a debug binary that you can then go and get all the symbols out of and deeply understand what it's doing. But from there on, it just went into misassumption and incorrect cognitive leaps, one after the other. So the endpoint that it was running on had a domain name from which they somehow concluded there was a FedRAMP equivalent, so therefore the government must be involved. And one of the boxes was named Onyx, and, oh yeah, that's related to that ICE thing, that's definitely going to be government involvement. And neither of those things had really much of a basis in truth. But fundamentally, the front-end code that they found was just exposing all the ways in which Persona can be configured, not specifically the way it was being used by Discord, or in this case OpenAI, where I think they found the source maps. Each customer is going to configure it to do what they need to do, what suits their use case. And while the things that it does... yeah, sure, I can see why they'd raise journalistic interest when it's things like watch lists and looking for likenesses of terrorist images, et cetera. But this is bread and butter KYC, anti-money-laundering. Banks do it, right? You should be no more surprised that this stuff does what it does than that a bank will flag suspicious transactions. It's just what they do.
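For context on why the source maps were the real find here: a production .map file is just JSON whose sources and sourcesContent fields carry the original, pre-minified code. A sketch of how little effort recovery takes is below; the URL is a placeholder, and you should only point this at assets you're authorized to inspect.

```python
import json
import urllib.request

# Placeholder URL: a shipped source map sitting next to a minified bundle.
url = "https://example.com/static/app.js.map"

with urllib.request.urlopen(url) as resp:
    smap = json.load(resp)

# Per the source map format, "sourcesContent" (when present) embeds the
# original files verbatim, symbols, comments and all.
for name, content in zip(smap.get("sources", []), smap.get("sourcesContent") or []):
    print(f"== {name}: {len(content or '')} bytes of original source")
```

Hence James's debug-binary analogy: nothing was "hacked", the pre-minified code was simply shipped alongside the app.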
A
There's another aspect here, which is Peter Thiel's Founders Fund is an investor in Persona, right? Which made people, like, lose their minds a little bit more, because he's certainly a controversial figure. Adam, you came away with very much the same impression as James on this.
B
Yeah, pretty much. But the technical quality of the write-up was great. The technical assessment and the conclusions they drew about technical aspects of it, like these domain names, these APIs, all of those things made total sense. The person who wrote it is technically competent. But where it comes a bit unhinged is the fitting it into the current kind of culture war environment in the US and all of the things that could be happening in the world. Do I think that Peter Thiel is directing, via his investment funds, Persona to cooperate with the US government to capture everyone's identities? None of that kind of makes much sense. And much like James, I felt like this is probably someone who is technically competent but probably hasn't ever worked at a bank, or worked in an environment that does identity verification as a matter of business. And you would expect a bank to be able to validate people's identity well. And the fact that they can't do it online is kind of a problem for everybody, which is why there are startups like Persona trying to make this doable. And the way that you make it doable is by doing a whole bunch of comprehensive checks and collating the results, and then building a kind of, how do we feel about this, what's our confidence level on this identity? And you have to correlate a whole bunch of data sources, because someone who is faking their identity can't fake everything. They can fake a subset of those things, and you have to kind of match everything up. That's what Persona are trying to do, so of course it has all of these extra features. And then, as James said, the fact the user interface potentially has an option to configure a button to report to a regulator, that's a capability that you would expect them to have. Doesn't mean that everyone's using it, doesn't mean it's automated, et cetera, et cetera. So overall, it felt like a lot of conclusions drawn without necessarily understanding the implementation context. Which is totally understandable within the societal context, that people are worried and scared and afraid of surveillance and government overreach and blah, blah, blah. This just didn't feel like an example of that.
A
Yeah, it's funny, right? Things are quite tense in the US at the moment. We actually had an email, too, a fair complaint, I thought it was a fair criticism, that came into our editorial inbox. It was a gentleman who took issue, Adam, with you saying that CISA being a victim of partisan political football over all of this ICE stuff was a shame. We had a listener write to us who's actually based in Minnesota, and he's like, look, man, it's a disaster here at the moment, you know, people are being shot in the streets. So he took issue with you saying it was partisan political football, because in his view, as someone who lived in Minnesota at ground zero of a lot of bad stuff happening with ICE, whatever could be done to put a stop to that was certainly worth it, not just a case of petty partisan political tricks. That said, I know that I just earned us a couple more one star reviews, because any time I say anything critical of America's current leadership, the MAGA snowflakes, who are, like, the most snowflakey little children, little toddlers who throw their toys out of the pram any time you criticize them, they run off to iTunes and do a one star review and have a little cry about it. So if you want to annoy a MAGA person, please head over to the Apple podcast store, or whatever they call it, and give us a five star review to drown that stuff out. But yes, anyway, moving on. And, you know, it wouldn't be an episode of Risky Biz without talking about some real dumb bugs. And we got this first one, a write-up from Akamai. Adam, you found this one, and the crazy thing is, I think you pulled this out of Catalin's newsletter, the Risky Bulletin newsletter. So there's a CVSS 8.8 bug, but it's, like, what is it, IE with ActiveX?
B
Like, who... it's the year 2026! This was one of the ones that was fixed in the most recent Patch Tuesday. It's some bugs in Internet Explorer, and essentially it's a chain of bugs where, if you can get in front of MSHTML, the old IE Trident renderer, then you can bypass the Mark of the Web controls, you can bypass the IE extended security controls, blah blah blah, leading to straight up code exec. And this has been seen in the wild, some Russian crews using a particular bug. And that's the interesting thing, right, is we don't know how it's being used. Like, how do you even get code in front of MSHTML these days? There's a few cases, like the Microsoft Help application, where you can have help files that are rendered by an old version of IE, and there's IE mode in Edge. This particular campaign seems to be dropping malformed LNK files that have HTML that eventually ends up in front of Trident to render. There were some hashes of samples that were meant to be on VirusTotal, but I couldn't find them to dig up and figure out exactly how they are doing it. But via some mechanism, someone somewhere in the Russian hacksphere is hacking people with Internet Explorer in the year 2026. And so it's either...
A
...it's either a weird path to where it is still lurking in the Windows code base somewhere, or they are combining this CVSS 8.8 with some sort of time travel appliance and sending it back ten years to pop shells. Crazy. It's just weird seeing this stuff still used. We've also got a Bloomberg report here, Adam, that you say dishes on Ivanti actually getting hacked in 2021 by bugs in its own software. Which is, as I think you said in the notes... yep, there it is: lol.
B
Yes, detailed analysis there. Yeah, this is quite a long piece from Bloomberg which digs into a bit of the history of Ivanti, and it reveals that particular detail: that at some point, presumably China broke into Ivanti itself through Connect Secure, and used that to then gain access to, you know, some stuff to hack other customers. But overall this piece digs in a lot of detail into the kind of root cause here, which Bloomberg identifies as private equity buying security firms, gutting their expensive stuff and then selling them, and selling the products for a while whilst they coast on the work that has previously been done. And I think that's a pretty important message, and with Bloomberg talking to investors, that's a good thing for them to be saying. Mostly it's about Ivanti, but they also make the same kind of complaints about Citrix. You know, private equity has been capitalizing on the COVID remote working thing, and then there's the gutting of these firms that build security critical products, and kind of what that means big picture, national security wise. It's a pretty good write-up and honestly, I think, well worth a read. And if this got traction amongst how other people think, like, if you're going to go out and buy a product in the market, is it safe to buy a private equity owned security product? The answer may be that it's not, because they don't incentivize the right behavior. So yeah, I think it's a great write-up, worth a read.
A
Yeah, I mean, in one of my Wide World of Cyber conversations it sort of came up that a bunch of these companies that are ostensibly publicly listed are still majority PE owned, right? And they've got, like, board control and whatever. So it's insidious. But not all PE is created equal. The one that I always defend is Thoma Bravo, because they have a habit of buying wobbly companies and then making them better, instead of just doing that sort of rent extraction model of the rest of the PE crowd, right? So let's not malign them all; it's the bad ones we've got to watch. Not that I do any business or anything with Thoma Bravo, I don't, but I just think it's cool when they are able to actually turn some stuff around. What do we got here? Yes, so there's some horrible Dell bug. It's a CVSS 10, it is being exploited in the wild, and CISA is now telling government agencies to patch it. And we've got a write-up here from the Google Mandiant services team all about this bug. Look, it's written in pure threat intel person speak, this Google blog post, but if you can get through the writing, it is interesting, Adam.
B
Yes. I mean, the bug itself is dead boring. It's hard coded admin creds in Apache Tomcat. I'm going to go ahead and assume the login is admin, password tomcat, but it might be admin manager, it might be manager manager, it might be admin admin. It's going to be one of those.
A
It might be admin password. I mean, why not go with a golden oldie there?
B
Yeah, exactly, exactly. So I did try to find a PoC or an exploit or someone that had written this up, and I didn't download the product in time to go figure it out. But anyway, it's going to be a dumb password like that, and that's not particularly exciting. I mean, we've all Tomcatted our way to victory in the past. The Google write-up does talk about a bunch of interesting post-compromise activity, and really pretty sophisticated, you know, kind of manipulation of VMware infrastructure for monitoring and other things like that. So as threat intel reports go, I quite enjoyed it. But it is, as you say, very threat intelly. And the moral of the story is: don't put Apache Tomcat on your network with default creds.
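If you want to audit your own estate while waiting for the patch, checking a Tomcat manager app against the usual default pairs takes a few lines. The candidate list below just mirrors the guesses above, since the actual hard-coded credential hasn't been published here; treat it as illustrative, and only run it against systems you own.

```python
import base64
import urllib.error
import urllib.request

# Illustrative default pairs only; the real hard-coded credential is unknown.
CANDIDATES = [("admin", "admin"), ("admin", "tomcat"), ("tomcat", "tomcat"),
              ("manager", "manager"), ("admin", "manager"), ("admin", "password")]

def check_tomcat_manager(host: str) -> None:
    for user, pw in CANDIDATES:
        req = urllib.request.Request(f"http://{host}/manager/html")
        token = base64.b64encode(f"{user}:{pw}".encode()).decode()
        req.add_header("Authorization", f"Basic {token}")
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                if resp.status == 200:
                    print(f"{host}: manager app accepts {user}:{pw}")
                    return
        except (urllib.error.HTTPError, urllib.error.URLError):
            continue  # 401/403/unreachable: try the next pair
    print(f"{host}: no candidate creds accepted")
```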
A
Yes, I think that is a reasonable bit of advice. But to give you a sample of the writing: "Analysis of incident response engagements revealed that UNC6201, a suspected PRC-nexus threat cluster, has exploited this flaw since at least mid-2024 to move laterally, maintain persistent access and deploy malware, including SLEIGHSTYLE, BRICKSTORM and a novel backdoor tracked as GRIMBOLT." It's like, yeah, okay, how many indecipherable terms can we squeeze into one sentence there, guys? But that's how they roll.
D
That's how they roll.
A
All right, we're going to wrap it up there. Adam Boileau, James Wilson, thank you so much for joining me to talk through the week's security news. Really appreciate your time.
B
Yeah, thanks very much, Pat. I will see you next week.
C
Thanks, Pat. See you next week.
A
That was Adam Boileau and James Wilson there with a check of the week's security news. Big thanks to both of them for that. It is time for this week's sponsor interview now, and this week's show is brought to you by Corelight, a company that I really, really like. So, Corelight maintains Zeek, which is the sort of industry standard network security sensor, and Corelight makes its money a few ways. They sell an NDR platform based on Zeek, so if that's something you're looking for, you can get it from them. They also have some SOC tools. And really a big thing, their sort of traditional area of business, is selling hardware appliances that run optimized Zeek at just sort of insane line rate speeds, right? So you can get, like, a 200 gigabit per second network security sensor. I don't personally need one of them, but there are companies out there who do, and they get them from Corelight. So Brian joined me for this conversation, which is really about a couple of things. It's very AI centric. It's about how very quickly we've gone from a situation where we had been thinking about using AI in the SOC to now, basically, everyone's using AI in the SOC, and it is the way that things are done. So there's that part of it. And we also talk about the opportunities for Corelight around AI, in that, you know, no one's going to Claude Code their way to a 200 gigabit per second full line rate network sensor. Corelight's very safe in this regard, but they've got to think about how they can make their product work better with everybody else's AI. So that's a part of this conversation as well. But we started off by talking about how that transition has happened, that sort of inflection point, like, we're past it, and about how AI in the SOC is just the done thing. Here is Brian Dye talking about that. Enjoy.
D
Watching the evolution has been key. The forcing function, I think, is obvious to everybody at this point, right? You've got to fight fire with fire. When you look at the speed, the volume, the shortened timeline from vulnerability to exploit because of attacker use of AI, that's been a rocket ship. But I think the other two pieces are kind of equally powerful. One is that the shift from, call it LLM based defense to agentic defense means you can slice and dice the individual components of the investigation and security, and as a result you kind of take away a bunch of the hallucination risk. But then the other piece is defenders themselves. I think the models and the automation are earning their trust, because, look, we're all hyper skeptical, otherwise we wouldn't be in security. I mean, it's a required entry ticket. But as the solutions themselves, whether they're in house or kind of vendor provided, are earning that trust, that I think has been equally important to adoption as well.
A
Yeah, I mean, it just doesn't feel to me like that's even much of a question anymore. The question isn't, oh, should we use AI to do like alert triage? It's like, well, how much AI should we use to do the alert triage? Or how much of the alert triage should we throw AI at? Is that your read as well?
D
Yeah, 100%. And I think what's happening is folks are more comfortable using it in individual domains right now, because you can kind of parse the problem and you're not trying to eat the whole elephant in one bite. But we're absolutely going to where it's going to be multi-domain orchestration to solve the entire SOC triage problem. The biggest thing that I've seen that's kind of an aha is that people have changed their mental models over the past year. It used to be: what's the LLM? What can the LLM do? Does it hallucinate? That's where we were 12 months ago. Now people's mental model has shifted to, oh wait, I need to think about this thing as a three-layered cake. Number one, do I have the right data? Because it turns out that we're not as worried about the hallucination anymore; we're worried about our models hitting a data ceiling, right, that limits what they can do. That's step one. Step two, do I have the right decomposition into agents? Because now we're not doing a single LLM. Do I decompose things correctly? And then number three is really interesting. The workflow you build on top of the model is essentially packaging the expertise. It's either a vendor packaging their expertise, or if you're doing it in-house, it's you taking your experts, your most advanced folks, and packaging that expertise in. And I think splitting that model from the old-school view of what's the LLM capable of, what new model comes out this month and what can it do, to: oh wait, this is a three-layered cake, it's about the data, the agent architecture and how much expertise I'm embedding into the workflow. That's been really cool to see.
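To make that three-layered cake concrete, here is a minimal sketch of what decomposing triage into single-purpose agents can look like. This is not Corelight's implementation or anyone else's; the agent split, the prompts and the call_llm helper are all hypothetical stand-ins, and a real pipeline would sit on live telemetry rather than a dict.

```python
# Hypothetical sketch of agentic alert triage: instead of one giant LLM
# prompt, each agent gets one narrow job, which limits hallucination blast
# radius. call_llm() is a placeholder for whatever model API you use.

def call_llm(system_prompt: str, user_content: str) -> str:
    """Placeholder for a real model call (hosted API, local model, etc.)."""
    raise NotImplementedError("wire this to your model provider")

def enrich_agent(alert: dict) -> str:
    # Layer 1: data. The agent only summarizes telemetry it is handed;
    # if the data isn't there, it says so instead of guessing.
    return call_llm(
        "Summarize the supplied network telemetry. Do not infer facts "
        "that are not present in the input.",
        str(alert.get("telemetry", "no telemetry attached")),
    )

def verdict_agent(alert: dict, enrichment: str) -> str:
    # Layer 2: decomposition. A separate agent renders a verdict from the
    # enrichment, rather than one model doing everything end to end.
    return call_llm(
        "Given this alert and enrichment, answer only: benign, suspicious "
        "or malicious, with a one-line justification.",
        f"alert={alert['name']}\nenrichment={enrichment}",
    )

def triage(alert: dict) -> str:
    # Layer 3: workflow. The ordering and escalation rules are where the
    # packaged analyst expertise lives, not in the model itself.
    enrichment = enrich_agent(alert)
    verdict = verdict_agent(alert, enrichment)
    return verdict if "benign" in verdict.lower() else f"ESCALATE: {verdict}"
```

The point of the shape, per Brian's framing: layer one constrains the agents to real data, layer two keeps each model call narrow, and layer three, the plain-Python workflow, is where the packaged expertise actually lives.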
A
Yeah, I think one thing that's interesting too, and I bet you'd love to talk about this: I think people have been going a little bit overboard on the extent to which AI is going to eat everything. Right. And I think Corelight is a perfect example of that. You can't AI your way into another Corelight. Right? Like, you guys are so safe from basically being replaced by AI. But where do you see all of that? Like, how disruptive do you think AI is going to be to security vendors? Because I can point to you as a case where, well, not at all, really. If anything, it's great for you, because AI models love data and you've got data. But others... I don't know, where do you see it all going? That's just a scatterbrained question there. Sorry, Brian.
D
Oh, it's a great one. Look, I think there's two interesting things going on here. One, the safest roles here are actually the defenders themselves. Like, if you're a cyber analyst, I think what AI is going to enable, even with all the agentic work, is this: right now folks can tackle the top 10% of their queue, and if you triple the productivity of a security team, now they can tackle the top third of their queue. You still don't have the other two thirds of the queue covered. So I don't think anybody in their right mind is going to say, oh, I don't need the same size of security team. Like, I don't see that risk happening at all. I think we're just going to get better coverage of the inbound alert queue. And in terms of what's going on in the technology landscape, because I think that's where it's going to be a lot more interesting, I do think this is a real chance to rearchitect how the SOC itself, or the technology behind the SOC, works. Because the world where you put all the data in one place, you had to centralize it, you had to put it all there for search purposes? I think that's come and gone. The ability to essentially use the LLMs as a search federation layer, and to really orchestrate the expertise in the point tools and bring that together, that's where things are going.
A
Yeah, we finally get to knife the saying "single pane of glass," because nobody cares anymore. Like, it absolutely does not matter. I mean, you know that I work with Edward Wu over at Dropzone, and that's one of the most amazing things with Dropzone: if it needs some information, it just goes out and gets it.
D
Yeah. And look, the interesting thing is watching all of us as technology providers figuring out that, wait, we have to plan to live in that world. So what I think is going to happen is you've got NDR, EDR, ITDR, pick your various control pillars. Each of us is going to have to do what we can, right? We're going to have to do a bunch of triage and alert aggregation, data curation. That's the value prop we have to deliver. But then we've got to realize there's going to be an agentic SOC layer that we have to be a partner for. So whether it's A2A interfaces or data APIs or just MCP interfaces, we've all got to be actively planning for that. We have a two-part job: one part that's in our product, one that's in somebody else's product. That mindset I think is going to be really key.
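For a sense of what one of those MCP interfaces might look like, here is a minimal sketch using the official Python MCP SDK (pip install mcp) and its FastMCP helper. The server name, the ip_context tool and the canned telemetry are all invented for illustration; a real vendor's server would obviously query live Zeek data, not a hard-coded dict.

```python
# Minimal sketch of an NDR vendor exposing context to an agentic SOC via
# MCP. Uses the official Python MCP SDK; the returned data is invented.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ndr-context")

# Stand-in for real network telemetry (Zeek conn/dns logs, detections, etc.)
FAKE_TELEMETRY = {
    "203.0.113.7": {
        "first_seen": "2026-02-20T04:11:00Z",
        "protocols": ["ssl", "dns"],
        "detections": ["anomalous TLS fingerprint"],
    }
}

@mcp.tool()
def ip_context(ip: str) -> dict:
    """Return what the NDR platform knows about an IP address."""
    return FAKE_TELEMETRY.get(ip, {"note": f"no telemetry for {ip}"})

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio by default
```

An agentic SOC that speaks MCP can then discover and call ip_context like any other tool, which is the "partner for the agentic SOC layer" role Brian is describing.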
A
Well, so, you know, I've had a bit of time to think about Corelight the last couple of days, seeing on my calendar, oh yeah, I'm chatting with Brian soon, and I've just been noodling on it. And I think one thing that's really amazing about Corelight is how little AI actually changes what you do. All you've got to do is spin up a good API that can be used machine to machine, get a good MCP server happening, and I mean, you can pretty much call it a day and just go back to doing what you do, right? Like, it is barely changing what you do as a product, right? I mean, am I wrong? Am I off there?
D
It's not quite that easy. I wish it was. Think about whether it's the first-party or the cross-SOC experience, right? Because folks consume two different form factors from us. They either consume just the network sensors, which is the data, the detections, the telemetry, the analytics. In that case you're right. We have an MCP server and a client. We've got a bunch of workflows built into that. Look, between you and me, I think MCP has more mind share than market share. The number of customers actually deploying their MCP servers isn't that high at this point, but you can absolutely do that, especially if you're in the biggest of the big. You can consume that, you're off to the races. If you're consuming not just the data and detections but the SaaS console, then we look like a bunch of other detection and response products, right? Where we actually do have a first-party search and investigation experience that's always had, well, not always, but for a long time now has had AI acceleration features built into it, and we need to continue to build that. Frankly, it has been a little bit easier for us just because we got this gift from being an open source based company, where all the flagship large language models are already trained on the data set. So it was a lot easier for us to deploy that. We didn't have to pay Nvidia for all the tuning and everything else, right? Better to be lucky than good.
A
Yeah, yeah, yeah. So I mean, that's the thing, isn't it? All the LLMs just understand Corelight, you know, to its core, no pun intended.
D
Yeah, also in the "better to be lucky than good" category: the naming. I don't think we saw the LLM thing coming when we picked the company name out 10 years ago.
A
Yeah. Now, when you started with AI, right, I remember you were doing what everybody else did, which was to use gen AI to sort of explain alerts to people. Right. Now you've whacked some sort of agentic investigation features into your NDR platform, which makes sense. But, you know, agentic SOCs don't really need agentic NDR. They're going to have a more Swiss Army knife set of agents that are going to be querying that data. And this is such a challenge, right, for so many companies that I talk to. How do you even think about developing some of these products and putting resources into them when they could turn out to be dead ends? And I don't think Corelight is a dead end, but some of this stuff could turn out to be a dead end, because it's going to get eaten by the companies that are doing more generic AI SOC. How do you, as the chief executive of a company, go about prioritizing this stuff and allocating resources to it? I mean, rather you than me, pal.
D
We've all got hard problems, right? I think the thing that is a big "yes, and" to what you're saying is that different sizes of companies are going to have different architectures as they roll them out. So there actually isn't a one-size-fits-all cookie-cutter answer to this. You know, if you've got a SOC that has 500 or 1,000 people, and we've got customers that do, you're going to have one defensive architecture. If your SOC has 10, 20, 30 people, you're going to have a different architecture. If your SOC has three or five people and you're heavily relying on an MSSP, you're going to have a different architecture as well. So a lot of what we think about is: what are the architectures that we see happening in those three, and how do we put the enablement behind those different architectures all at the same time. And fortunately there's a fair bit of overlap here. Let me give you two different use cases. An easy one is: hey, look, we've seen an anomaly on the network. Can we triage that anomaly and actually really understand what's going on there? Right, that's a great first-party use case that we should be able to do, because we have all the context around it. A different use case is, let's say you had an alert come in from an EDR, and that's coming in through your SIEM or your agentic SOC or your EDR console, and you really want to get supporting information about that alert. Like, hey, can you tell me what you know about this IP address? Right. That's a very specific thing that you could support with either an API or an MCP server or an agent-to-agent interface. But if you think about what the SOC is actually doing, you can break those down into specific workflows. Like, are you the lead or the follow in the investigation? Kind of a simple one, right? That takes you down two different paths. And then you think about what's the architecture of the SOC itself, right? What's the technology stack that they're using? That's the grid that we're trying to solve for. And then it just becomes a sequencing problem, right? Where do you start, based on where your business is strongest, to move through that stack.
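That "lead or follow" split maps neatly onto the consuming side: the SOC-side agent leads the investigation and calls a vendor tool to follow up on one indicator. A hedged sketch with the same Python MCP SDK, assuming the illustrative ndr-context server from earlier is saved as ndr_server.py (both the filename and the tool name are assumptions):

```python
# Sketch of the other side: an agentic SOC pulling IP context from a
# vendor's MCP server over stdio. Server path and tool name are invented.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    server = StdioServerParameters(command="python", args=["ndr_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The SOC-side agent is the "lead"; the vendor tool "follows"
            # by answering one narrow question about one indicator.
            result = await session.call_tool(
                "ip_context", {"ip": "203.0.113.7"}
            )
            print(result.content)

asyncio.run(main())
```

Whether that call rides over MCP, a plain data API or an A2A interface is an implementation detail; the workflow decomposition is the same.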
A
Yeah, no, it makes sense. And look, I think with a product like your NDR stuff, adding some sort of agentic capability to it, I mean, it's 2026. That's kind of table stakes now, right? Which is crazy.
D
But here we are. It's where everything's going. And look, like I said, the focus here is how much can you automate, how fast. Because if I go back to where we started, right, this is clearly where the SOC is going. One of the favorite customer conversations, well, favorite and most horrifying, was we had our advisory board together at the end of last year, and one of the stats they shared was that it used to be about three weeks from when a new vuln was published until when they would see exploitation in the wild. That is now turning into two to three hours, because you can take an essentially jailbroken or open source model, go hammer it against this vulnerability, and you'll get some really crude, hacky exploit, but it'll work.
A
Yeah, reversing a patch now is a lot easier. And I'd also imagine, too, with some of the agentic features in something like Corelight, you will be able to do stuff like tune the sort of alerts that it kicks out, so they've had some agentic magic done to them before they wind up being kicked out into some sort of other SOC platform anyway. Right?
D
Yeah, I mean, that's the non-GenAI side of AI, right? If you look at what Corelight does specifically, and the category overall, there's a whole bunch of anomaly detection, living-off-the-land type stuff that is really useful for these kinds of live exploits and, more importantly, for living off the land. Because the other thing we see happening is that, just like the LLM platforms let people get into vibe coding, kind of, oh, I'm not a Ruby on Rails expert, but I can hack away at it for an afternoon, right? It turns out that the attackers are using that to bridge their own skill gaps. And so techniques like living off the land that used to be a lot more advanced, that used to be the nation-states and the bigger criminal gangs, are becoming a lot more accessible to everybody else. So the traditional AI side, what you and I would call advanced maths, absolutely still matters in this world.
A
Yeah, yeah, 100%. Like, let's not forget about the deterministic machine learning, right? Because that stuff is also very useful. All right, Brian Dye, fascinating to chat to you as always. I always really enjoy our catch-ups. A great way to make a living. And yeah, I'll look forward to chatting to you again soon. Cheers.
D
It's always a pleasure, Patrick. Appreciate it.
A
That was Brian Dye, the chief executive of Corelight, there. Big thanks to him for that, and big thanks to Corelight for having been a sponsor of the Risky Business podcast for some years now. All good stuff. All right, that is it for this week's show. I do hope you enjoyed it. I'll be back soon with more security news and analysis, but until then, I've been Patrick Gray. Thanks for listening.
Podcast: Risky Business
Host: Patrick Gray (A)
Guests: Adam Boileau (B), James Wilson (C), Brian Dye (D)
Date: February 25, 2026
Theme: An action-packed exploration of the latest mishaps, advances, and risks at the intersection of artificial intelligence and security.
This episode dives deep into the tumultuous week in information security, defined by a string of high-profile incidents illustrating both the power and peril of AI in cybersecurity. From bumbling adversaries empowered by LLMs, to ethical standoffs between tech giants and the Pentagon, through to hilarious and alarming AI failures in both consumer and enterprise contexts, the show acts as a rapid-fire analysis of the bleeding edge of AI’s impact on cyber defense, offense, and policy.
Theme: AI’s evolution in the SOC – from far-fetched to the norm
This episode captures a security world in flux. AI is now fundamental infrastructure—both for attackers and defenders. Guardrails, sandboxes, and workflows are in arms races, with nation-states and corporate giants locked in struggle for model supremacy and control. Listeners are left with sharp observations, fun tangents, and the sense that chaos and rapid adaptation are the new normal.