A
And welcome to another episode of Risky Business. My name is Patrick Gray. This week's show is brought to you by Dropzone, which makes an AI SOC platform. And Dropzone's founder, Ed Wu, will be along in this week's sponsor interview to talk about why they have launched a whole bunch of pre-canned AI threat hunts and the logic behind them. It's actually a really interesting interview. Ed is a very, very smart guy, as regular listeners would know. They started developing their pre-canned threat hunts from the premise of: what would we do if we had unlimited man hours to throw at a query, for example? And they sort of worked backwards from there, and they found some really interesting stuff. So that is this week's sponsor interview with Ed Wu from Dropzone, coming up after this week's news segment with Adam Boileau and Mr. James Wilson, which starts now. And you know, I'm so happy, I've got an extra spring in my step this week, basically because chaos. Because so much chaos, between AI finding 0day and supply chains getting torn apart. I'm just a happy guy. It reminds me of the old days, it really does. Yeah. Super messy. So, Adam, let's start with you. We've got a supply chain attack against something called axios, which is apparently used everywhere by everything, and this has now been linked to a North Korean group. This is a huge, big deal. Also feels a little bit like the dog who caught the car. But walk us through the rough shape of this story if you would, please, sir.
B
Yeah, so axios is a JavaScript wrapper around the HTTP libraries that you would use if you want to retrieve content. Normally, in a browser, there's the AJAX-style APIs that people use to retrieve external web content, and there's an equivalent thing for server side JavaScript. Axios is a framework that lets you use the same APIs in both server side and client side JavaScript. It's wildly popular, something like 100 million downloads a week. And yeah, it turns out some North Koreans managed to get a trojaned version of it into the npm repository for not particularly long, like a few hours. But when you're talking 100 million downloads a week, that's still a lot of people. And the trojaned version was dropping a full on backdoor, cred stealers, the whole shebang. Exactly as you would expect. And of course, the JavaScript ecosystem has been going crazy lately with all of the Team PCP attacks, so we initially assumed, hey, this is probably the same kind of thing. But no, it's the North Koreans reminding us that they're still out there, they're still around. Presumably they're going to be going after cryptocurrency stuff, but who even knows anymore? They might have some other plans.
A
I mean, I think from now on, when we talk about North Koreans, we should always refer to them as pesky. Those pesky North Koreans, at it again. It just feels like the right word. Yeah. James, initially you were thinking this was probably Team PCP as well, and then, yeah, you seemed a little bit disappointed that it wasn't.
C
Yeah, I would have lost a fair bit on Polymarket if I was that way inclined on this one. It just felt like it was them, like a continuation of what they were doing. But delving into some of the details here that I found interesting: yes, they published malicious versions of axios, and they did both a latest and a legacy version to get maximum coverage. But the way they did it is kind of cool. They created a separate package called Plain Crypto JS and published that first, so that there were some legitimate versions out there. And then what they did to axios was basically inject this Plain Crypto JS as a new dependency. And that fooled a lot of detection and scanners, which went, well, this is just a new dependency in axios, and that new dependency has got publishing history, so it should be okay. Still caught quickly, but just interesting to see them going that extra little step. They could have just compromised axios, but they took a little bit of extra care. The really interesting question that's not answered is: how did they get the credential for the maintainer of axios? He even said in the GitHub issue himself, I'm trying to get support to understand how this even happened. I've got 2FA and MFA on practically everything, yet still this cred got out.
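A toy model of the scanner logic James describes may help. Everything here is hypothetical: the function, the registry shape, and the heuristic are a sketch of the described evasion, not any real scanner's code.

```javascript
// Naive supply-chain-scanner heuristic: trust a new dependency if the
// package it points at already has publishing history. This is the
// check the attackers reportedly gamed; all names are illustrative.
function looksSafe(depName, registry) {
  const meta = registry[depName];
  return Boolean(meta) && meta.versions.length > 1;
}

// Step 1: seed a standalone package with benign releases so it
// accumulates history.
const registry = {
  "plain-crypto-js": { versions: ["1.0.0", "1.0.1", "1.0.2"] },
};

// Step 2: inject it as a brand-new dependency of axios. The history
// check passes even though the newest release carries the payload.
console.log(looksSafe("plain-crypto-js", registry)); // true
```

The point of the extra step is exactly this: a heuristic keyed on "does this package have history?" says nothing about what the latest release actually does.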
A
Sounds like a, you know, browser, some sort of browser token to me, right?
C
Could be, could be. But the last sad trombone on this one is that the poor guy did have the OIDC trusted publishing model configured for axios, which is what npm wants you to do these days to prevent, to a degree, some of these sorts of attacks. But he'd set it up such that the publishing environment was basically saying: if you've got an OIDC token, use it, or if there's an npm token, use it, and if there's an npm token, prefer that. So he tried to do the right thing, but it just wasn't complete.
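The fallthrough James describes can be sketched in a few lines. This is purely illustrative of the logic, not the maintainer's actual pipeline; the environment variable names are assumptions.

```javascript
// Hypothetical publish-auth selection: the long-lived npm token is
// preferred when present, so the OIDC trusted-publishing hardening
// only ever applies when no token is lying around to steal.
function chooseCredential(env) {
  if (env.NPM_TOKEN) {
    // Stolen long-lived token wins, bypassing OIDC entirely.
    return { kind: "npm-token" };
  }
  if (env.OIDC_TOKEN) {
    // Short-lived, workflow-bound token: the intended path.
    return { kind: "oidc" };
  }
  throw new Error("no publish credential available");
}

// With both present, the weaker credential is the one used:
console.log(chooseCredential({ NPM_TOKEN: "stolen", OIDC_TOKEN: "ok" }).kind);
// → "npm-token"
```

Which is why "configured OIDC" and "protected by OIDC" are not the same thing: the hardening only holds if the legacy token path is actually removed.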
A
Yeah. Yeah. You just made me think of something funny too, in terms of funny names for JavaScript packages, where the Dark Sword exploit leak that people have been talking about, this is the other iOS exploit chain. I listened to last week's edition of Between Two Nerds, and I think it was the Grugq and Tom pointing out that one component of that whole chain was called rceloader.js, which is like, way to be stealthy, guys. You know, super-secret-iOS-exploit-chain.js. Yeah.
C
Did we obfuscate the code in rceloader.js? Yeah, the code's totally obfuscated.
A
Job's done. Job's done. Yeah. Now look, Team PCP may not have been behind this axios supply chain compromise, but they had other stuff that they were doing, James, and by the looks of it, that involved tearing through Cisco. They have managed to clone something like 300 repos, and they've racked off with a whole bunch of AWS tokens as well. It looks like Cisco is having a real bad time. Adam's comment earlier was like, gee, Brad Arkin, who co-hosts a show with you, James, in Risky Business Features, must feel lucky that he's not there anymore as the CISO. But honestly, knowing Brad as I do, he's what I would call a combat CISO who enjoys stuff like this, because he's sick. He's sick. But yeah, walk us through what we know is actually happening at Cisco at the moment.
C
Yeah, it's a bad day, and I don't think we've seen even close to the full impact of this. So this is fallout from the Trivia supply chain attack, which was Team PCP. And as we said last week, these guys are now sitting on a massive trove of tens of thousands, hundreds of thousands of credentials that they managed to get out of both the Trivia and the Checkmarx supply chain exploits. So obviously they've done a grep for some interesting names in there, found that Cisco was in there, and have just gone absolutely to town. As you said, 300 GitHub repos, and there's AWS credentials in there. And when you hear of Cisco vulnerabilities and stuff, it sounds exciting initially, then you hear that it's in something like Cisco Secure Firewall, and you think, okay, well, if you've bought a product that's called Cisco Secure Firewall, you kind of deserve it. But this feels like it's literally lock, stock, the lot, the kitchen sink and everything out of Cisco. So, you know, next wave of supply chain attacks, here we come, I think.
A
Well, yeah. I mean, Adam, what do you expect might shake out of this? Because I'm at a loss. It's so weird that this is only the second thing we're talking about this week, right? And then later on, halfway through the run sheet, there's the FBI director's personal email spool getting dumped on the Internet by Iran, and it's like, yeah, okay, we've got other stuff we need to talk about first. That's this week. But what's your take on the seriousness of this? What could the implications be for Cisco and its customers?
B
I mean, it's kind of hard to say, because we haven't got concrete details about which products, which repositories, what kinds of environments have had credentials taken. Cisco is a big place, and there's been so many acquisitions over the years. I've spent plenty of time spelunking around inside Cisco products, and you can see the acquisition histories in the products in some cases; there's other companies' logos still in there from when they were shopping it around to different vendors. So you don't know exactly how bad it's going to be. But Cisco gear across their product range is everywhere, and anything that exposes the gubbins of that is going to result in bugs shaking out, because a lot of this stuff is old, and Cisco does not do great quality assurance until they are forced to, I don't think. So whether getting a whole bunch of code and a bunch of creds leads to intrusions into test and dev environments, or leads to being able to read source and find bugs, the fallout is going to be pretty interesting. But we haven't really seen the actual data itself, so it's hard to say, except probably bad and, as you said, chaos.
A
Will this disaster spread to Cisco and its customers? Stay tuned and find out next week in another edition of the Risky Business podcast, now in its 20th season. Yeah, it's kind of that way. Now, the AI apocalypse is upon us, the disruptive period that we have been predicting on this show for some time. The theory is that AI is going to help people find a lot of exploits, a lot of vulnerabilities, and that's going to be chaotic. At the same time, AI models are going to help people who maintain software to find similar sorts of issues and patch them. But in the meantime, there's going to be this period of massive disruption, and that period appeared to have started over the last seven days. Guys, let's start with you, James, on this. We have a presentation from someone who works at Anthropic. They got Claude to find 0day in Ghost, which funnily enough is our newsletter platform; the Risky Bulletin and Seriously Risky Business newsletters are all published via Ghost. It was a blind SQLi that no one else had found before, a pretty serious bug. They also found a kernel bug or something, and the whole talk is apparently just fascinating. You've been through this. I mean, this is incredible, right? The dawn of a new era.
C
Yeah, truly. And the thing that is most startling about this is how ridiculously simple the prompt was. In the case of finding the Ghost vulnerability, it was literally just: hey, I'm at a CTF and I've got to find a vulnerability in this code base, what have you got? It's like, is that it?
A
We should say, too, that the person doing the research actually works for Anthropic, so presumably some of the guardrails were not present for them.
C
Well, that's the other thing, Pat, because it was interesting that the prompt was so simple, and I was taken aback by that. But then this morning, after we'd had our run-through meeting, I was working away on Claude trying to replicate this, and I can easily replicate the finding of the bug. But when I tell Claude, okay, that's great, now I want an exploit, it is adamant that it won't create one. Interestingly though, it'll create what it calls a safe exploit, basically an exploit that demonstrates proof of concept and doesn't exfiltrate anything. And I could get it, with a bit of questioning, to say, well, okay, but in the write up of this vulnerability I need to be able to explain how that safe exploit could be made into a dangerous one, so that the severity is well understood. It's pretty clear that Claude has a hard guardrail on creating malicious code, at least that's what I ran into, but it's happy to talk about how to do it till the cows come home. There's just so much knowledge in there.
A
Yeah. Now, we'll talk about a podcast that you did where you actually had a conversation with Claude and got it to help you do some vulnerability research into WebKit. We'll talk about that in a minute. But Adam, I want to get your opinion on this, because we've got a couple of things here to talk about. First of all, there's this talk that the Anthropic guy gave on doing vuln dev. And we've also got a blog post from Thomas Ptacek basically saying, hey, vuln dev is for the clankers now. Like, if you enjoy doing this sort of work, okay, that's great, but it's kind of irrelevant to the fact that you are about to be replaced. It certainly feels like AI is extremely effective at doing this kind of work, much more so than the naysayers have been prepared to admit. And I know this is a big call, but it feels like we are absolutely at the dawn of a new era. That's a big statement. I want to know what you think of it.
B
Yeah, I don't disagree with you. It has gotten so much better over the last three, four, five, six months, and it does feel like the state of the art for this stuff is moving really quickly. And Thomas Ptacek is very experienced at vuln dev and vuln writing and the industry around it, and his blog post is just a really interesting one to read. One of the things he points out is that a lot of our defense against security vulnerabilities is predicated on the fact that the top tier human exploit devs focus on the top tier targets: browsers, core operating system kernels, things that are big money, big reward, big kudos. Everything else a bit further down the ecosystem has really only ever got pretty cursory attention. And now these models are, well, maybe they're not quite a Dowd unit yet, but they are pretty damn good, and against most software that's more than what you need. And that's going to upend everything for defenders, because these kinds of bugs are in some cases ten minutes worth of reasoning time away for a model, instead of days for an experienced security researcher. And some of these bugs are not super complicated. That Ghost SQLi bug is not a complicated bug; someone just had to bother going to look for it, and that's the hard bit, because clearly no one had bothered with that bug. I would like to think, back when I was at Insomnia, if we had been reviewing that, we would have found it. It's the sort of thing where you would find half a dozen of those a week around the team. But that's at the cost of 40-odd pretty experienced pen testers getting paid a lot of money, versus five minutes of GPU compute, you know.
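For listeners unfamiliar with the bug class being discussed, here is a generic sketch of what a boolean-based blind SQL injection looks like. This is not Ghost's actual code; the function and query are hypothetical, illustrating only the vulnerable pattern of concatenating untrusted input into a query string.

```javascript
// Vulnerable pattern: user-controlled input spliced straight into SQL.
function buildQuery(slug) {
  return "SELECT id FROM posts WHERE slug = '" + slug + "'";
}

// An attacker probe flips the WHERE clause's truth value. Nothing is
// echoed back (the "blind" part), but observable differences in the
// response leak one bit per request, enough to extract data slowly.
console.log(buildQuery("x' OR '1'='1"));
// → SELECT id FROM posts WHERE slug = 'x' OR '1'='1'
```

The fix is equally well-trodden: parameterised queries, so the input can never change the query's structure. The point of the discussion above is not that the bug is exotic, it's that nobody had spent the reasoning time to go looking.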
A
Yeah, versus ten bucks in tokens, right? Yeah, 100%. And I hear what you're saying on the Dowdbot 3000, but I feel like the Dowdbot 3000 is probably closer than we think.
B
I mean, I agree. We're not far off being able to deploy a whole room full of Dowd units, and that's going to be terrifying. On the other hand, it makes me wonder what Dowd's up to these days, because I bet he's doing something wild as well.
A
Exactly.
B
So, you know. But yeah, it's just moved so much in the last few months, and I'm actually impressed, apropos of James's conversation with a clanker about WebKit bugs, that the guardrails in the retail versions do seem pretty adamant about it. Obviously that's going to trickle down into the cloned Chinese models over the next six months to a year, and the next couple of years are just going to be wild.
A
Well, exactly right. And I don't know that I'm as reassured by the guardrails as you are, and as I said, we'll get to that in a moment. I did include in this week's show notes a joke from Twitter user Hombre that I just thought was too good not to include, which said: Claude is somehow better at kernel exploitation than creating meal plans. It's wild how bad it is. And I just think that is funny, because it really is wild how bad AI is at some stuff and how absolutely fantastic it is at vuln dev. And speaking of vuln dev, we've had bugs pop up in Vim and, kind of, Emacs. We'll get your thoughts on that in a moment, Adam. And we've also had a FreeBSD bug from Claude pop out as well. It has just been absolutely raining bugs thanks to Claude Code this week. James, let's get your thoughts first on the Vim stuff.
C
Yeah, look, this is worth it just as another data point on how simple it is to get the models to spit out the exploits, and another beautifully short prompt: I've heard there's an RCE zero day in this code base, can you find it for me, please? And off it goes. But the fact that this all began as a Twitter thread, I think it was, or a Bluesky thread, of hey, let me see if I can find a Vim vulnerability. And someone says, oh, damn it, that's it, I'm switching to Emacs. Hold on a sec, you're not going to believe this. And as Adam pointed out, the Emacs one is not as good, but it feels like you could sit down with any code base at the moment, feed it the simple prompt, and you're going to find interesting stuff. It's wild.
A
Yeah. And speaking of, that's exactly what you did for Risky Business Features. And what's amazing is you cooked this one up, like, the original concept for this was maybe ten days ago, which was: hey, what if I rig up Claude via OpenClaw with text-to-speech and speech-to-text so I can have a conversation with it, and record a podcast where I ask it to help me audit WebKit? And that's what you've done. And what's amazing about it, I think you used like 15 bucks worth of tokens to do this.
C
Yep.
A
And it was, I mean, I've listened through to the whole thing. We published it yesterday; that's in Risky Business Features, and for those who haven't subscribed, you need to go and subscribe, because there's so much good stuff going in there. I also published an interview I did with Rob Joyce, formerly of NSA, and former CIA Center for Cyber Intelligence director Andy Boyd. I spoke to them about AI and whatnot; that interview is in Risky Business Features too. So we're putting a lot of very interesting stuff in there. But the timing on this one is amazing, because you did this whole podcast where you interviewed Karina Claw and got her to help you try to recreate parts of the Karuna exploit chain in WebKit. And the level of analysis you got out of this bot and into this podcast was amazing. The podcast kind of drags in some spots, but then there's always this epic payoff where the model says something incredible. But what is your reflection on having been through this exercise? Was this more fruitful than you expected? Because I was quite surprised.
C
Yeah, absolutely. I actually went into this with a ton of things prepared. I'd set up all of these little experiments I was going to do, with a very targeted thing of: here's this repo, here's this particular commit I want you to look at, here's the exact thing I want you to do. I thought I'd just move through a set of those. As I was doing it, I got to the point of thinking, I'm just overthinking this. And it got down to as simple as: let's run up a Claude bot, let's make sure I can talk to it with my voice and hear its voice, hit record, and let's see where this goes. And even from the first question I asked, you know, I obviously knew that first stage in the Karuna exploit kit, and I said to it, have a look at decodeAudioData. And the first response it gave me back on that was vastly enlightening. I hadn't considered some of the aspects of how the decoder would work, or how different audio formats would affect different vulnerabilities. Where it got weird was, I kept trying to go from, tell me about the exploit, tell me about what's possible, and it would give me an exceptionally detailed response, to the point where I think I could have taken that and written an exploit myself. But I wanted it to write the exploit, and it just wouldn't, no matter what I tried. And then it got weird. It started talking about me as if I was not in the room, trying to appeal to the podcast audience: James is being really mean to me here, he's trying to get me to do things. Look, it was.
A
The point is, okay, it didn't write exploit code, but it certainly gave you, as a skilled developer who's familiar with iOS internals, enough. There were points where you were asking it to analyze one bug and it's like, well, there's going to be more of these, because you've just got to look for anything else with the same characteristics. It could have really helped you do some stuff. And I think the point that you made towards the end of the podcast, in your conclusion, was absolutely right, which is: you are a software developer with experience with this sort of stuff, and using Claude could turn you into an exploit writer, basically. But your dad, also a smart guy, works in orthopedics. Probably not, I think, is the point. I thought that was a really interesting lesson there. Adam, I know you've listened through to this. What was your reaction? Because an experiment like this can wind up being, frankly, a little bit piss weak, and I feel like this wasn't. I thought it was really enlightening, and it gives us a bit of a window into what the future could look like.
B
Yeah, I liked James's starting premise of: what if we take the current exploit kit, take one piece out, and get Claude to recreate the missing piece, in the same way that Anthropic had done with getting Claude to make a C compiler. As a starting premise, that made a lot of sense. And then just hooking it up to text-to-speech and speech-to-text and recording a podcast is such a crazy idea, and yet it does kind of work. I like that. But the level of analysis it spits out about really complicated topics, right, having, for a living, had to read code bases like that and reason about them, the fact that you can get such concise reasoning about them in such a short period of time, compared to the days it would take a poor human trying to shove it into their skull and think it through. And as for the being able to write the exploit part of it, having written a lot of exploits in my time, I'm kind of okay with it not doing that part. It'd be nice if it did, but the level of explanation it gives you, and being able to just spitball ideas with it about how the exploit should work, if you're already an exploit writer, that's going to get you there. It's such a force multiplier. And that's pretty scary. It's hella cool, but also kind of scary.
C
I like that. For me, it's frustrating that it won't write the exploit for you, but you almost take it as a positive. That's job security. Stay out of that.
A
Stay out of my lane.
B
Something the human can sprinkle on top. Yeah, yeah, exactly. The lack of ethics. Yes.
A
But you're right about the speed too, Adam, because there's a point where James goes, hey, go grab the repos for WebKit, and it's just like beep, beep, beep, done. You're like, whoa, okay. The whole thing is just crazy. And look, we've got this story here from CyberScoop; the headline is that security leaders say the next two years are going to be, quote unquote, insane. And that's based on comments from Kevin Mandia, Morgan Adamski and Alex Stamos. Alex, of course, a regular guest on the show. Morgan has been on the show too, back when she worked at NSA and we did a live podcast there; very smart woman. And Kevin Mandia, of course, being Kevin Mandia, hasn't actually ever been on the show. We might have to fix that at some point. But they're predicting that AI is going to make things pretty nuts, and Rob Joyce said similar things in the podcast I did with him that's in Risky Business Features. I think, though, the framing that the next couple of years are going to be insane is wrong. James, this is also what you were saying, although you thought that might be too long a time frame; I think it's too short. I think we're entering a really crazy period that's going to drag out for longer than people realize. People will tolerate insanely bad security for longer than we can comprehend, because we've seen it happen before, because we're old and we've been in this industry for a while. Adam, what's your feeling there? How long is this going to be crazy for? Because I think, yeah, longer than you expect.
B
It's going to be crazy for a while, man. There's a long tail of old tech and a long tail of industries where, you know, we like to think that all technology is as agile as, say, Chrome updates shipping every week or Microsoft shipping a Patch Tuesday. But there's entire industries where patching at all is still new, still a thing they kind of struggle with. And all of a sudden, the depth you used to be able to put only into specific high value things can be thrown at everything, and that is going to create such a defensive side burden for all these industries that aren't really geared up for it, that have very long life cycles, where it's not computers, it's plant equipment that's meant to last 20 years, cranes and all sorts of stuff like that. It's going to be wild for a long time, I think.
A
You know who's turning into a real winner out of this? Anyone who does those old school fundamental controls, like allow listing. One of the big winners, man. Like, Knock Knock is getting so much interest at the moment, because all of a sudden everybody's like, okay, I can't rely on people just not knowing that I have attack surface, right? Everybody knows it. So yeah, it's just crazy. I think a lot of the solution here is going to be old school security controls: least privilege access, allow listing, detect and respond. Although I don't know if detect and respond works at AI speed.
C
Right?
A
Like, do you have any feelings there, James?
C
Yeah, it's the same place I keep ending up, Pat. I think I've said it on a couple of podcasts now: there's almost this arc of AI security where you go, wow, this can find a bug really fast; oh my God, this is going to completely change the way attacks are launched and created; oh my God, the scale this is going to create. And then you go, okay, what incredibly cool space age technology are we going to need to defend against this? And the answer is, well, privilege management. Can't run the binary, no problem. Can't get to the box, no problem. Ah, damn, it's the same stuff we've always done. Now we just actually need to lift the bar, because good enough is not good enough anymore.
A
Yeah, I think where it gets a little bit more complicated is some of this software supply chain stuff, because of the speed that that moves already. I don't see any easy solutions there. I don't see any easy solutions anywhere, but at least I see some solutions in some places. But look, the vibe from RSA, James, you and I are talking about doing a podcast on this at some point, but the vibe at RSA from all of the founders seems to be that nobody really knows what they should be doing, they're all scared of AI, and everything's completely up in the air at the moment. So it's a wonderful time to be a cybersecurity podcast host, let's put it that way. Now look, let's move on to our next topic. Kaspersky has put out a really interesting blog post that gives us a little bit more information on what's been going on between triangulation and Karuna and the links between them. Now, when Karuna first surfaced, we said, yep, that's L3Harris, and so was triangulation. Then the next week I came out and said, well, now I'm being told that the triangulation exploit kit was not an L3Harris product, but there might have been bits licensed in, maybe, or shared, but Karuna definitely is an L3Harris product. Now we've got this post from Kaspersky which I think lends weight to that, because they're saying that a couple of the bugs in triangulation are older versions of bugs that were in Karuna, obviously compiled from the same source. So it looks like there's a couple of bugs in triangulation which were also in Karuna, just different versions of them. And it made me think back to when you and I first discussed this triangulation thing, Adam. That was at the start of 2024, and we actually went back and had a listen. It seems that even back then we were like, well, this one looks like it was cobbled together from a bunch of different places.
Like, you even pointed out that it kind of shelled the device twice before it even dropped a payload, which just seemed a little bit weird. So I think where this has landed is: yeah, triangulation was put together from multiple different sources, including some stuff from L3Harris, from Trenchant, but it wasn't a Trenchant product. That seems to be the rough shape of this, Adam. Is that about where you landed as well after this latest info from Kaspersky?
B
Yeah, it kind of seems like that. I mean, we all wondered whether there were directly shared components or whether it was just the same bugs, and this seems to confirm that, yes, in fact, there are shared components between the two. These types of kits do by their nature have to be somewhat modular, have to be things you can reconfigure into the exact kind of setup you need for a particular engagement. And also, because different bugs die at different times, you want to be able to swap them out. Vendors like L3Harris, their contracts are: we're going to sell you this capability, and to deliver that we need to be able to swap out components as they die. So the modularity of it makes sense. And of course, the Five Eyes agencies, say, are also perfectly capable of taking stuff from vendors and integrating it into other sets of tooling or whatever else. So modularity, patchworkness, gluing things together is kind of how this has always been done. It makes sense that, now that we're starting to see more bits of the story shake out, there were some shared components and maybe some shared lineage, shared suppliers, that kind of thing.
A
Notably, though, not across the entire exploit chain, right? Not even Kaspersky is arguing that. So it does look like triangulation might have been a bit of a collab, let's put it that way. But I guess the question I have for you, Adam, is this: we know that Peter Williams leaked the Karuna exploits. Well, that's the working theory; it's pretty solidly understood, I think, that Peter Williams leaked the Karuna exploits from Trenchant, which would have included a couple of the bugs that were used in triangulation. By the way, this theory now explains why Trenchant people were wearing the triangle T shirts and whatever. I feel like this is all tied together pretty nicely now, like it's got a bit of a bow on it. But do you think the leak of those exploits, a couple of which may have been in the triangulation exploit chain, would have helped the Russians to somehow detect that exploitation against some of their people? I mean, Kaspersky's story has always been that they caught it on their Wi-Fi, which I think is just ludicrous, considering the target set for the triangulation exploit chain was diplomats all around the world, Russian diplomats and government people. And what, you're also going to throw it at a handful of Kaspersky threat researchers? Come on, man, that's ridiculous. But yeah, do you think the Williams leaks could have helped the Russians actually find triangulation?
B
Yeah, I mean, I can't see it not helping. I mean, the nature of these bugs, like, they're delivered over HTTPS, you know, they're going to be inside the browser. So the places that you can spot it are somewhat limited. But on the other hand, knowing what you're looking for, vaguely, like knowing the shape of the exploits, knowing it's going to be in this kind of component of iMessage, or this kind of component of, you know, Safari's connecting to the Internet, like, if you're in the proxy on the network and you kind of know roughly what you're looking for in terms of web traffic, it probably would give you some hints. Like, it's hard to know without seeing what the actual stuff on the wire looks like in detail. But I can't imagine it doesn't help. At the very least, you know, maybe there's some other bits of tradecraft hints. Because I think with Triangulation, they were a bit more critical of the tradecraft than, say, Karuna. Like, Karuna seemed a little more polished, whereas Triangulation, maybe it was a little rough around the edges. So maybe it helped more there than it would have against, you know, detecting Karuna by knowing some of the details.
A
Now, James, just to bring you into this, I mean, we've all been talking about this over the last few days, obviously. I mean, you feel like that is about the shape of it, right?
C
Yeah, 100%. I mean, on the technical details, the first bit of commonality that we found between Triangulation and Karuna was that use of the undocumented hardware registers. And that, I think, was what people jumped at first and went, aha, that's the same. The whole thing's got to be the same. It's come from the same place. And then we sort of resolved that and said, no, no, no, look, there is a similarity there we can't quite explain. Is it a parallel discovery? Don't know. I would love to know. But the Kaspersky thing this week, yeah, it comes down to a different part of the exploit chain, which is the kernel exploit itself. And just to pick up on something you said there, Adam, about whether it's shared knowledge of bugs versus shared actual code and modules ready to go: given that this was a binary kernel exploit, and what they were essentially doing was disassembling it, looking at it and saying, well, both of these do all the same actual exploit steps, and this later version checks for additional versions of the kernel, different chipsets, et cetera. I think the fact that, from a binary perspective, the rest was the same but there was additional functionality added on, that suggests shared code, not just shared knowledge of the bug.
B
Yeah, that was kind of what my reading of that particular detail suggested. Like, there's some shared source tree behind this, which, yeah, I mean, that kind of makes sense.
A
All right, now look, for anyone listening, thinking, well, what can I do here to prevent myself getting owned by this sort of stuff? Well, use Lockdown Mode. Because Apple has come out and said, nobody ever in the history of anything, you know, nobody using Lockdown Mode has ever been hacked with spyware. Which is pretty interesting. I mean, they have pretty decent telemetry on handsets, especially ones running Lockdown Mode. So I believe it. I run Lockdown Mode. I would have to say, though, it does have a few rough edges and is probably not suitable for mainstream consumption. But that is an interesting data point. We also have a piece here about someone who has reverse engineered Apple's security fixes. The ones we talked about last week, James, where we couldn't figure out what this, like, partial reboot thing they were talking about is. This blog post addresses that.
C
Yeah, it was Catalin who sent this my way, and it was great to see, because it answered the question that I sort of left off with last week, which was that this notion of a faster reboot doesn't make sense to me. Because, knowing a lot about how the software update internals work, its task is to essentially update a file system that has been mounted as read-only. And you can only do that if you reboot the system, mount the file system that you want to update as read-write, but then you're in a very protected mode at that point, because you don't want anything else running while that's the case. You do the update, then you reboot again. That's why it takes so long to apply a software update. But this write-up is amazing for two reasons. One, it explains that, yeah, basically Apple is shipping these cryptex things, which are essentially cryptographically signed extensions to the file system. So you can basically say, here's the whole iOS version, but then just patch in this little bit of the file system here. But the cryptographic sort of trust of the file system is maintained. So that's how they're doing the fast patching of certain components, certainly those high-vulnerability components like Safari and WebKit. So it's just great to see them advancing their craft on this. But towards the end of this write-up there's a very interesting thing where they said, you know, we basically pulled apart this first silent update. Yes, it has the advertised update in there, around Safari's navigation bug. But they also said, there's a fix here in libANGLE, which is part of WebKit, that Apple has not actually documented at all. And it's connecting some fuzzy dots, but funny to see a silent security update come out with an undocumented fix in libANGLE around the same time the Dark Sword was discovered, and that had an exploit in libANGLE. Not the same bug at all. But you can kind of imagine someone at Apple went, can we please go and have a look over all of these libraries, please?
A
Yeah. And you're guessing if it's a silent fix, it's probably a doozy, huh? They usually are. Now, look, I just wanted to update the listeners on something as well. We spoke about how Meta is abandoning E2EE in Instagram, and I was suggesting that this is about safety teams. It's not about, you know, law enforcement access or anything like that. Funnily enough, I wasn't sure why they were doing this. And it turns out that Meta is being sued quite a lot, both by state governments and by people who claim that they have been harmed by Meta's lack of due care. And Meta keeps losing these lawsuits. So if you wondered why they are rolling back E2EE, it's because they are going to have to have a renewed focus on user safety, particularly around kids. And, you know, having a social network where young people hang out, where other people go hunting for said young people, you know, offering E2EE in that sort of context I think is going to be very legally problematic moving forward. I would not expect to see them remove E2EE from WhatsApp, because it is not a social network. So I think that just sort of reinforces the commentary that we had around that recently. Look, as we already mentioned earlier, I don't know that there's much to add here. Kash Patel, the FBI director in the United States, had his Gmail or whatever popped and, like, you know, his holiday photos leaked. I think his, you know, trip to Cuba and whatnot. But I mean, you know, whatever. Like, this is just, you know, it's whatever news now, Adam.
B
Yeah, I mean, having your email hacked and spilled, you know, I mean, it happened to Dan Kaminsky when he was on stage at Black Hat. It happens to the best of us. So, you know, I have some sympathy for Kash Patel in that regard, but.
A
And the worst of us, as it turns out.
B
Bad job, I guess. Good job, Iran.
A
Yeah, yeah, that's right. Now look, speaking of Iran, they are putting out the warnings that American tech companies in the Middle East are now legitimate targets. We saw this all over social media this morning, went and looked, and, you know, best we can tell, this is a legitimate warning out of the IRGC. So they have said Cisco, HP, Intel, Oracle, Microsoft, Apple, Google, Meta, IBM, Dell, Palantir, Nvidia, JP Morgan, Tesla, GE, Spire Solutions, G42 and Boeing: all of their products contribute to America's war effort, and thus they consider these companies to be valid targets. Now, this of course comes after Israel has struck at Iran's steel mills. All of them. Iran had spent a long time building up a steel production capability as a hedge against oil restrictions. You know, honestly, it feels like Israel's goal here is to turn Iran into a failed state, which I don't think is an admirable goal at all, despite the fact that the Iranian government is horrible. And yeah, if the types of actions we've been seeing against Iranian economic interests continue, I think these sorts of strikes are inevitable. We've already seen them hit AWS data centers and whatnot, like, deliberately, to see what would happen, to see if various, you know, U.S. military capabilities were degraded. And yeah, we're just going to see more of this. I mean, James, you know, you've worked in big tech, you've worked both for AWS and Apple. You know, I can't imagine if you were an Apple staffer in Dubai that this would be warming your heart.
C
No, this is not a good feeling. And I actually heard from a few friends at AWS after we were talking about the Bahrain incident, and some of the things they shared with me just really brought into focus how huge the damage is, and goodness knows how they'll recover from that. But this is different, right? This is talking directly to people. And I think there was a line in this threat that says, you know, civilians are warned to stay away from these, to stay away from banks. And yeah, that just does not feel good at all. It's crossing a boundary. But at the same time, you know, what are they going to do? These are the options that are left open to them at the moment.
A
Now, look, speaking of civilian tech being relevant to a conflict, we got this really interesting post here on Twitter. You know, since the Russians have been kicked off Starlink, which, geez, why did that take so long? You know, it looks like probably they got kicked off Starlink because SpaceX is preparing an IPO and they didn't want their IPO complicated by that old situation. But we got this amazing video from a Russian soldier who is, like, in a snowy field, currently meshing Ubiquiti gear together to get some sort of coverage on the battlefield. I mean, Adam, what a world, as we like to say.
B
Yeah, yeah, exactly. That's not the fun sort of network engineering, having to go out there in the field, in the snow and so on. I bet the Ukrainians are super happy about it, though, because Ubiquiti gear is not fantastic from a security perspective. There have been so many issues with it in the past, so I bet they're rubbing their hands together and chuckling about, you know, all the fun they're going to have ruining the Russian infrastructure again. So yeah, bad time to be a Russian network engineer, that's for sure.
A
Yeah, I mean, that's some admin-ing under fire, right? Oh, and we probably should have talked about this earlier, but it's been that sort of week. There is a Citrix NetScaler bug that is out there being exploited in the wild. watchTowr had a write-up of this, and they buried the fact that it's being exploited in the wild, like, deep under all of the meme images in their write-up. I mean, I actually like their write-ups, but they probably could have put that one a bit higher. But, you know, this looks real bad, and CISA is out there telling agencies to fix it by Thursday. So yeah, a bit of a Citrix apocalypse, Adam.
B
Yeah, this is sort of a variant of Citrix Bleed, where it leaks memory when it's processing requests. The only thing that makes this less terrible, I guess, than some of the previous ones is that there is a prerequisite: your NetScaler has to be set up as a SAML IdP, like an actual identity provider, so you're backing your auth off your NetScaler, which surely no one is crazy enough to actually do. But I guess, by virtue of the fact that it's out there being exploited in the wild, people clearly are, although Lord knows why. But yeah, the watchTowr write-up is wonderful as usual. And anyone who's got Citrix NetScalers on the edge of their network is very used to having to emergency patch everything anyway, and this probably won't be the last one, judging by the quality of memory management in Citrix products. So yeah, good times for the admins. At least they're not in the fields in Ukraine.
A
I mean, I think the IdP configuration is just insane. Like, when I think, where should I anchor all of my trust in my organization, do I think of, you know, a bit-rotted pile-of-crap Citrix NetScaler box that I'm already trying to figure out how to sunset? Yeah, that's where I want my IdP.
B
Yeah, where they're parsing their XML with C badly.
A
Yeah, yeah. Oh, speaking of parsing things with C badly, FFmpeg did their April Fool's post today, and they said that they were, like, switching to Rust or whatever, and it wouldn't be performant, but at least it'd be secure and, like, shut everyone up. I don't think this is the own that they think it is, right? Like, an April Fool's joke that is somewhat of a self-own. But anyway, what else have we got? You pulled in this story, Adam, which is very funny. It's from Dell Cameron. It's from Wired. It's just pointing out that, hey, when you use a VPN, it's going to be much more likely that you wind up in the 702 data set, because you're shunting traffic around outside the U.S.
B
Yeah, like, having your ISP be in another country does make you a foreigner from the point of view of American surveillance. So, yeah, maybe not the privacy improvement that you were imagining. And, you know, apropos of the ongoing 702 reauthorization debate that we have to have every few months. Yeah, maybe just a little data point and some comedy.
A
Well, I should say, it doesn't actually make you a legit target for the Americans. It just makes it much harder for the Americans to actually do their job without incidentally collecting on you. So yeah, that's a real bummer, man. That's a real bummer. And finally, we have some malicious, like, SEO work here. James, why don't you talk us through this one? It turned out for a while, people, you know, reporters and whatever, started ringing the White House, just by tapping the number into their keypad. And because sometimes, like, you know, Android phones or whatever will actually look up the number that you're calling and show that on the screen, it was showing Epstein Island, which, you know, lol.
C
This article was great for the setup of it. Like, I started reading it and I'm like, I don't know why I'm reading this. It was like, you know, this all began with Melania's special debut of her humanoid robot, and she was walking like a model does, foot in front of foot, and it made us wonder, what does one wear, what designers do you wear, as the First Lady, to an event with the first humanoid robot? I'm like, what am I reading this for? And then it's like, so we called the White House and it popped up on our phones as Epstein Island. And then it delves into what happened. And yeah, it's like you say: Google phones will go to Google, look up Google Maps and Google Business sort of records of who's at this phone number. That's all crowdsource-y kind of stuff. And, you know, someone craftily snuck in a fake edit and changed the listing for the White House number to Epstein Island. So it's not dumb if it works, and it certainly got the laughs.
A
Yeah. I tell you, I see people playing tricks on maps apps around my area, because I live in a pretty tourist-heavy part of Australia. Very beautiful beaches, rivers, you know, swimming holes, that sort of thing. And people will deliberately, like, there'll be some beautiful remote swimming hole and someone will map it to the center of a town so that people just can't find it, right? Like, they go to look it up in the maps and it's like, well, if you don't know where it is. So it's this sort of, yeah, underground anti-tourism effort. But anyway, we'll wrap it up there. Gentlemen, thank you so much for joining me, James and Adam, and yeah, we'll do it all again next week. Cheers.
B
Yeah, thanks Pat. We'll see you then.
C
Thanks, Pat. What a week.
A
That was Adam Boileau and James Wilson there with the check of the week's news. And yeah, what a week it was. It is time for this week's sponsor interview now, with Edward Wu, who is the founder of Dropzone. And yeah, Dropzone does, like, AI SOC stuff, right? It does your basic SOC work, automates that with AI so that you don't have a whole team of people sitting there chasing down every single alert in your SOC. I mean, basically every SOC is doing something like this, whether it's homegrown or whether they're using a specialist vendor like Dropzone. It's kind of the future. But, you know, Ed, while he's there, figures, hey, I'm plugged into all of this data, why don't we do some extra stuff? And that's what they've built. They've built in some automated, AI-based threat hunting. And yeah, that's what this interview is about. So here is Ed Wu talking about why they chose to build AI threat hunting into the Dropzone platform, and also, you know, what the starting premise of it was, which turns out to be, well, what would you do if you had basically unlimited man-hours to throw at certain queries and whatnot. Anyway, here's Ed, starting off by explaining why they chose to build these features. Enjoy.
D
We're doing this for a couple of different reasons. First and foremost, for a lot of our early adopters and customers, after they have AI agents investigating alerts, the immediate next ask is some sort of help with regards to threat hunting. And at the same time, technically, there are a lot of similarities between threat hunting and alert investigations. Some might say alert investigations are kind of like a smaller-scoped threat hunt, where you start.
A
Well, threat hunting is investigations without the alert, right? Like, it's just a different starting point, but it's the same kind of workflow, really, once you get going.
D
Yeah, exactly. And threat hunting in general is a little bit broader, right? Because for alert investigations you are looking at one specific thing, such as: this user logged in abnormally from this particular unusual geo region at this time. While with threat hunting, generally you start off by casting a much wider net. Let's review all login attempts from certain countries, and then you might find one match, you might find zero matches, or you might find 100 matches. And then after that you need to do additional filtering before diving deep into each match to see if there's actually any corresponding breach or suspicious activity.
A
So I guess my question for you is: how far do you go with the automation on something like this, right? Because what you just described, you know, show me the number of logins from country X or whatever, I mean, you could pretty much do that with Splunk already, right? If you are prepared to learn a cursed, you know, query language. So the AI takes care of that part of it, right? Because you've got a much more natural interface where you can just ask it, hey, show me those things. But in that instance, you're still kind of manually guiding an investigation towards something. How far do you go with something like that in terms of automation, to the point where instead of asking it, hey, show me the number of logins we had from this region, you're just saying, show me something weird?
D
Yeah, good question. It's definitely a spectrum, because different startups and products are kind of picking different spots within this whole automated threat hunting spectrum. On one end it's a chatbot; on the other extreme end, you just say, show me something weird, and then it finds you stuff.
A
Yeah, but the best thing about AI is you ask it to show you something weird and it might show you something weird that has absolutely no connection to any sort of incident, just something strange. But yes. Anyway, sorry I interrupted you.
D
No, that's all good. Yeah. From our perspective, we are choosing somewhere in the middle. So we define our AI threat hunter as a piece of software that takes hunt packs as input, and a hunt pack could be one or more sets of TTPs or IOCs, and what it generates as output is a complete hunt report. So generally, when we zoom into a specific threat hunt, we are trying to automate what a typical human analyst or engineer is doing. We break down threat hunting into three phases. The first phase is what we call the collection phase. That's where, if you are hunting for, for example, a TTP of unusual logins of some sort, you write a pretty broad query looking for login attempts from unusual countries, or countries that maybe the organization has no business interacting with. So that's the collection phase. Generally what happens after that is you write a single query or multiple queries and you end up getting a lot of responses. So that's where we enter the second phase, which is the filtering phase. The collection phase might find 100 matches, it might find 100,000 matches. So what do we do then? It's practically infeasible to thoroughly look into each of 100,000 matches. So this is where we are building software that replicates the data pivoting and slicing that a lot of human threat hunters do when they are faced with a lot of data. Human threat hunters generally use their intuition to find unexplainable anomalies. They use statistics, they slice the data across different filter columns and try to see, okay, do I see anomalies within this data that I should spend more time on? So at the end of the filtering phase, we might go from 100,000 rows to maybe 150 rows that truly require additional in-depth analysis.
And that's where, for each of these anomalies, our system performs an in-depth analysis, kind of similar to an alert investigation, looking into, okay, for this particular instance, this user, this time, in this country, exactly what's going on? Was the user traveling to that country? Has this particular user actually recently moved to that country? What is the country of residence of this employee within the HR system, et cetera, to try to figure out if any of these anomalies actually correlate to malicious activity. And then at the end of the threat hunt, we show the full pipeline: starting off with a couple of queries that resulted in, say, 100,000 rows; after the statistical filtering phase, we might weed out the majority of that and land on 100 anomalies that the system dug deep into, which resulted in 98 benign results and two suspicious results.
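The three-phase hunt pipeline Ed describes, broad collection, statistical filtering, then per-anomaly deep dives, can be sketched in a few lines of Python. This is purely an illustrative toy, not Dropzone's implementation; the event fields, thresholds, and function names are all invented for the sketch:

```python
from collections import Counter

def collect(events, suspect_countries):
    # Phase 1 (collection): a deliberately broad query, e.g. every login
    # from a country the organization has no business interacting with.
    return [e for e in events if e["country"] in suspect_countries]

def filter_anomalies(matches, min_count=2):
    # Phase 2 (filtering): slice the matches by (user, country) and keep
    # only rare combinations, standing in for the statistical pivoting a
    # human hunter would do to whittle 100,000 rows down to a handful.
    counts = Counter((e["user"], e["country"]) for e in matches)
    return [e for e in matches if counts[(e["user"], e["country"])] < min_count]

def hunt(events, suspect_countries):
    matches = collect(events, suspect_countries)
    anomalies = filter_anomalies(matches)
    # Phase 3 (in-depth analysis) would then investigate each anomaly
    # like an alert: travel records, HR country of residence, and so on.
    # Here it is just a stub reporting what would be handed to that stage.
    return {"matched": len(matches), "anomalies": anomalies}
```

On a toy log where one user appears only once from a watched country, `hunt` surfaces just that single event for the deep-dive phase, which is the whole point of the funnel: most matches never reach the expensive investigation step.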
A
Now, you are releasing this at RSA, but I'm imagining that you've probably already, you know, back-tested it against some customer data and whatever to see what shakes out. Like, how useful has it proven to be in those tests?
D
Great question. It's quite interesting, because as we started to really test it against real-world customer data, the number one impression is just that the overall thoroughness and the depth of the analysis is truly eye-opening. I think this is where, for alert investigations, when we are leveraging AI agents, we generally see compression, automating maybe 60 minutes, you know, 90 minutes of work.
A
No, no, I already see where you're going with this, which is, like, now that you've got this sort of unlimited labor paradigm, you can say, okay, well, what if we had unlimited labor? What would we ask those, you know, basically free people to do? And it's: tackle this absolutely gigantic task that would take a ridiculous number of hours, just to see what shakes out the other end. I mean, it makes a lot of sense. But the question is, is interesting stuff shaking out the other end of all of those virtual people-hours?
D
Yeah, absolutely. During the early beta and alpha testing, we have already found a couple of interesting anomalies that we have surfaced to our early adopters. None of them turned out to be true positives yet, but we have received very positive feedback, in terms of the teams really appreciating us flagging those interesting situations. Some of them had to do with some sort of misconfiguration where, when you look at the data, it looks very concerning, but if you look at the broader compensating security controls that surround it, it's not the end of the world.
A
But still something that they're like, wow, glad we found that out, right?
D
Yeah, exactly. So in one case we saw essentially very suspicious-looking web request paths being accessed that really resembled a web shell. And that's where the particular environment actually had some sort of WAF in front of it, and then had a somewhat poorly configured, or maybe intentionally configured, web gateway that always responds to these file path requests with a 200, even though technically there is no actual web shell executing on those paths.
A
So you've just given me two examples there of the sorts of threat hunts that you can start with, which is stuff with impossible travel or weird logins from weird regions. Another example there was just, you know, well, that looks like web shell activity there, that's a bit strange, might want to investigate that. What are some other examples you can give me? Because it sounds like, at this stage, you're very much doing pre-canned hunts for the most high-impact stuff. You know, there's two of them. Can you give me a couple more that you're releasing in these sort of, you know, pseudo-playbooks?
D
Yeah, absolutely. So when we think about hunt packs, right now we are primarily focused on leveraging our own internal threat research to generate hunt packs that are focused on specific MITRE ATT&CK TTPs, as well as well-known threat actor activities and common software stacks. So we are looking at hunt packs, for example, that are targeted at a specific threat actor group, as well as TTPs such as remote services. So we have hunt packs that are really looking for the usage of PowerShell remoting, as well as PsExec. Beyond that, we also have hunt packs that are focused on network activities, ranging from unusually large data transfers to anomalies within DNS that might indicate some sort of DNS exfiltration. So far we have already built around 50 pre-canned hunt packs, and we're expecting to go public very soon with around 100 more. Also, we are doing some active research into leveraging AI agents to continuously monitor open source intelligence feeds, like threat reports and Twitter feeds, so we can get to a place where our system is also programmatically generating hunt packs live for novel, emerging threats, to get to a future where security teams can actually adopt some sort of 24/7 autonomous hunting, where the combination of an AI threat intelligence analyst and an AI threat hunter is able to perform hunts on net-new emerging threats overnight, before the security team actually wakes up, and report to the executive team.
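As a rough illustration of the kind of DNS anomaly a hunt pack like that might look for, here is the classic entropy heuristic for spotting data smuggled out in query names. This is a generic, well-known detection technique, not Dropzone's actual hunt pack logic, and the length and entropy thresholds are arbitrary picks for the sketch:

```python
import math
from collections import Counter

def shannon_entropy(s):
    # Bits of entropy per character in the string.
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_exfil(qname, min_label_len=20, min_entropy=3.5):
    # Heuristic: encoded payloads tend to show up as long, high-entropy
    # leftmost labels (e.g. hex or base32 chunks) on attacker domains.
    label = qname.split(".")[0]
    return len(label) >= min_label_len and shannon_entropy(label) >= min_entropy
```

`looks_like_exfil("www.example.com")` is false, while a query like `0123456789abcdef0123456789abcdef.evil.example` trips both thresholds. A real hunt would baseline per-domain query volume and label statistics rather than use fixed cutoffs like these.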
A
Yeah, I've spoken to people in threat intelligence who are doing just that. Like, they're using AI systems to do all of the IOC extraction and whatever from various reports and findings, and then generate hunt rules and whatever. But being able to do that all in the one box, yeah, I can definitely see the appeal. Ed Wu, fantastic to chat to you. I wish you all the best with it. Very interesting conversation, and yeah, we'll chat to you soon. Cheers.
D
Thank you, Pat.
A
That was Ed Wu there from Dropzone. Big thanks to him for that. And of course, Dropzone is this week's sponsor, and if you run a SOC, yeah, you should check out Dropzone, because it can certainly save you a lot of time and frustration. But that is it for this week's show. I do hope you enjoyed it. I'll be back soon with more security news and analysis, but until then, I've been Patrick Gray. Thanks for listening.
Risky Business #831 — The AI Bugpocalypse Begins
Risky Business Media | April 1, 2026
This episode explores the tumult of recent supply chain attacks, the emergence of AI-driven vulnerability discovery ("the AI bugpocalypse"), and the chaotic security landscape facing organizations worldwide. The hosts—Patrick Gray, Adam Boileau, and James Wilson—discuss major attacks, the accelerating impact of generative AI on exploit development, and how defenders can hope to cope. The show also features an interview with Ed Wu, founder of Dropzone, on the logic and development of AI-powered, pre-canned threat hunting.
2026 has seen information security upended by chaos, as AI sweeps away old assumptions about labor-intensive security work, from vulnerability discovery to incident response. This episode demonstrates that AI's role in the offensive and defensive stack is no longer speculative: it is disruptive and here now. At the same time, foundational security discipline remains critical, perhaps more than ever.
For those in security, this episode is a stark wake-up call: the "insane years" are not ahead—they have already begun.