A
Welcome to Risky Business. My name's Patrick Gray. Absolutely jam-packed show this week. We've got a bunch of news to get through, and Adam Boileau and James Wilson will be joining me in just a moment to walk through all of that. Then we'll be hearing from this week's sponsor. This week's show is brought to you by Nebulock, which is a startup that does AI-based threat hunting, and in this week's sponsor interview, Nebulock's head of threat hunting, Sydney Marrone, is joining us to talk about an agentic threat hunting guide that she has written. That interview is coming up after this week's news, which starts now. And Adam, of course, the big news this week is the war against Iran, kicked off by none other than the FIFA Peace Prize laureate Donald Trump. The president of peace, Donald Trump, is bombing the absolute crap out of Iran in conjunction with Israel. And it didn't take long for the cyber angles to emerge here. Of particular interest is this piece from the Financial Times that cites some anonymous sources talking about how the IP traffic cameras and whatnot in Tehran have been compromised for just years and years and years.
B
Yeah, it's one of the things we've seen in a number of conflicts around the world: the importance of Internet-connected cameras for reconnaissance, for battle damage assessment, for all kinds of on-the-ground situational awareness done remotely. The sort of things that previously you would have relied on human sources for, or surveillance overflights, that kind of thing. Actually being able to see what something looks like on the ground really does, it appears, make a pretty big difference. And in this case, Israel's access to the camera systems there seems to have been pretty important in tracking movements in and out of the compound where Ali Khamenei was, you know, eventually killed, and in understanding the pattern of life around that. You can certainly see how those dots would be joined together. And of course, Israel has been up in Iran's business so much over the years, with the big gas pumps and camera systems and all sorts of things like that. There were also reports that they, or the Americans, whoever it was, were in the mobile phone system around the compound and were able to disable communications for bodyguards in the time leading up to the attack, so that they weren't getting any advance warning of incoming aircraft or whatever else. So the cyber angle to this seems like it may well have been more important, perhaps, than in other conflicts that we've seen.
A
Yeah. I do think this really does cement the idea that IP cameras are a risk, right? And it sort of explains why a bunch of the SIGINT agencies for years have been so absolutely terrified of, like, Hikvision cameras, and why there are initiatives to rip them out of places. So that's one aspect to this. I think another interesting thing here is that a lot of the reporting is suggesting the timing of this war kicking off... I'm sorry, it's a special military operation. Actually, that one was taken, so they've gone for special combat operation, I think. But yeah, the reason this kicked off when it did is because they had the opportunity to actually get Khamenei, because they knew where he was. So that might actually explain the timing, where it was like, well, we know that this meeting is happening, let's go. And yeah, following people around on traffic cameras through Tehran, building pattern of life. No surprise there that the cyber angle here is largely about intelligence gathering. We also have a write-up here from TechCrunch. Lorenzo has done a bit of a roundup of some of the cyber activities that have been reported, including a prayer app that got hacked and was giving people messages to lay down their arms and join an uprising against the government. I think there were some TV stations hacked as well, and they had Netanyahu and others broadcast onto Iranian TV. I mean, most of it is as much as you'd expect, right? Yeah, yeah.
B
I mean, we've certainly seen... I'm thinking of all the episodes of Between Two Nerds that we've had over the last couple of years about the conflict in Ukraine, and the extent to which cyber hasn't really been that impactful there. And there are certainly some elements of that here. Things like hacking a prayer app to tell people to lay down their arms, that's the sort of thing that Tom and the Grugq would probably be like, come on, that's not really the good cyber. But there really is a gamut of legitimate uses here, from pattern-of-life stuff through to some of the disruption we've seen of air defence systems. Certainly going into Venezuela we got, I guess, a preview of what that looked like. And then there have been a number of stories, both in this set of conflicts against Iran and the earlier strikes, about the extent to which air defence was not particularly effective, and where we saw, what, power cut to the radar sites or something prior to the previous attacks. So there is a set of things that cyber is legitimately useful for, and that are being used. We've now seen this several times; this is a playbook now, it's not a one-off. This is clearly the way they're going to do this in the future, and everyone else is going to be paying attention.
A
There is cyber doctrine now, right? And we have seen some comments from US military officials along the lines of there being cyber and space-based operations against air defense and whatnot. So yeah, it does look like there is a playbook for this stuff now. We've seen a comment from Matthew Prince of Cloudflare, who said on X: counter to what some cyber vendors are saying, there's been a dramatic drop in Iranian cyber operations, likely as the operators are sheltering; they may pick back up, but right now there's a noticeable lull. Our read on this... we all had a meeting yesterday, had a bit of a discussion, and Tom Uren, our colleague, suggested that perhaps they're not hiding from the bombs, but their normal offices are being absolutely DDoSed to smithereens. And I think that is actually a fairly likely scenario.
B
Yeah, I mean, either way, access to the Internet in the country is pretty heavily restricted. So there are a great many reasons why they might be somewhat occupied and not really able to contribute. Most of the talk of counterattacks we've seen so far has been of the missile and drone variety, towards other states in the Gulf. We may see some cyber coming, but yeah, right now they've probably got other things going on in their lives, regardless of whether it's packets or bombs.
A
Yeah. And of course Cyber Command's out there talking about how they've disrupted Iranian comms and whatnot. So I think just trying to get anything done in Iran at the moment would actually be quite difficult. But that's not to say that once this calms down, we won't start seeing Iranian threat actors really kicking up and causing a stink. I am not entirely sure how worried we should be about this yet, but my feeling is that it's not trivial; we could see a bunch of nuisance attacks coming out of Iran eventually.
B
Yeah, because when else are they going to do it? If not now, what were they preparing for? What was all of the preparing-the-battle-space for?
A
I mean, what's the escalation risk now?
B
Right, exactly, exactly. What reasons would hold them back now, after they've had their leader bombed by an adversary? They've thrown everything they've got at it by launching missiles and everything else; if they've got anything left in the tank cyber-wise, surely we are going to see it in the very near future.
A
Yeah, well, and look, there's the other wildcard here too, which is that we don't know how any of this is going to play out yet. I think where I am with this, and where most of the people I know are with this, is that we have no idea what's going to happen here. You can really hope for a positive outcome, but it seems like a pretty narrow opportunity to get that outcome. So this is a massively destabilizing event that has the potential to have all sorts of second-order effects, and we just don't know how that's going to work out. Only time will tell. We've also seen a bunch of Starlink-related news, Adam. We've got a report here from Forbes that says Iranian hackers are using Starlink to stay online. Are these script kiddies, or is this state-sponsored activity?
B
Well, I think one of the groups is Handala, which is linked to the IRGC... I'm sorry, the Ministry of Intelligence and Security. So they are very much state-aligned. But I think anyone who's getting online from Iran at the moment is getting online through Starlink; it seems like there aren't many other options. So you get a little bit of everything. As to whether the MOIS-linked hackers are able to really operate in these circumstances, we don't really know. But yeah, anyone using the Internet in Iran right now is probably on Starlink, for better or for worse.
A
You know, staying with Starlink, it looks like the United States, according to this report from the Wall Street Journal, has actually smuggled thousands of Starlink terminals into Iran. This happened amid the protest crackdown. So that's a bit of a wild card in all of this as well. I'd imagine the networks are not working particularly well over there, and Starlink is, yeah, a bit of a wild card.
B
Yeah. I mean, it certainly has changed how we deal with communications in places where the local infrastructure either doesn't exist or the regulatory environment doesn't want it to exist. And anyone who's looking at building their own isolated Internet, I'm looking at you, Russia, now has to contend with the existence of a global network that's very hard to block, other than by direction-finding terminals or looking on people's roofs or whatever else, which Iran has been very much trying to do.
A
Yeah, yeah, indeed. Now, staying with wireless stuff: don't rely on your GPS in the Strait of Hormuz at the moment, Adam, I think would be the TLDR.
B
Yeah. Like, I've been looking at some of the flight tracking sites and the boat tracking sites, and GPS is just all over the place in that region. Planes flipping around the world instantaneously with spoofed signals or faked responses fed to the receivers in the ADS-B data, and it's the same in the shipping environment. So we've seen large-scale GPS disruption; it's kind of expected as part of any modern conflict.
A
Yeah, I mean, color me completely unsurprised that this is happening.
B
Yeah, yeah.
A
All right, now we're going to bring James into this one, because this story is actually kind of funny... is it funny? I don't know. But a missile has hit an Amazon data center in Dubai, leading to an absolutely hilarious statement from Amazon saying that an object hit their data center, causing sparks and fire. Which is an interesting way to say that your data center got bombed. But there's a whole dimension to this that Adam and I were completely unaware of. James, you worked for Amazon, you're much more familiar with how they run their data centers, and from what you've told us, recovering from this is going to be an absolute nightmare.
C
Yeah, I wouldn't be surprised if they have to essentially scorch the earth and start again with the entire data center. The thing with Amazon data centers is that equipment goes in, and it only comes out wrecked. What I mean by that is, when you go into a facility, there are parts of it, particularly the data center halls, where you cannot take any electronic equipment. You don't have your phone, you certainly don't have any USB devices. You go through all manner of metal detectors to make sure that it's literally just human flesh and blood walking into the data center. And anything that gets removed from there, especially servers that might be carrying hard drives and SSDs, is physically destroyed. We would drill through the drives, we would drill through the main boards, and they'd be inspected before they were permitted to be taken out. So when you've got a situation where an object has broken the physical perimeter of a data center, it raises the question of, well, how can they re-establish that lineage of trust for any of the hardware in there? What's to say someone didn't sneak in while there was a big hole in the wall and implant a bunch of things with a USB drive? You just don't know. And because you don't know, I imagine you've got to start again from scratch. So the recovery from this, I think, will be quite lengthy.
A
Yeah, I mean, you just sort of wonder if, given the scale of this as a problem, maybe an exception gets made or a secondary process is worked out. Like, I don't know, maybe there's someone at Amazon who worked on a procedure for this and it's in a binder somewhere. I mean, is that possible?
C
Possibly, but I just don't see how they re-establish the trust with their customers. You know, we made an exception here because it was an object that went through the wall, and trust us, it's fine. The reason they take such extreme care, devices going in certified blank, and destroyed when they leave, is that that's how people universally come to trust, okay, I'm going to put my data in a data center. I just don't think an exception would even be entertained here. But at the same time, this is the first time we've seen a multi-AZ outage that hasn't been caused by a software bug. So we're definitely in some pretty uncharted territory.
A
Yeah, and look, full credit to whoever wrote that statement, because that is just incredible BS, like 10-out-of-10 BS, that an object struck the data center causing sparks and fire. And of course you made the point to us earlier that it's not just about a hole going into the wall; presumably there were some firefighters and whatnot on the scene, people who are not Amazon staff, who brought in their hoses. That's not just a fleshy meat sack going into the old data center there. So yeah, probably someone at Amazon's having a hard day is the TLDR on that one. Now we're going to move on to this story by Andy Greenberg that implies a bunch of stuff. I'm going to throw some weight behind what Andy Greenberg is implying in this story, and I'm going to be pretty light on details about how I came to form the series of opinions that I'm going to share with the listeners right now. The headline is: a possible US government iPhone hacking toolkit is now in the hands of foreign spies and criminals. The story is basically along the lines that the exploits that were part of the Triangulation campaign, which targeted, I think it was, over 1,000 devices in Russia, that exploit chain is now being used by criminals to steal cryptocurrency. Now, this story also sort of implies that this exploit chain may have been the one that was stolen by Peter Williams, who was a manager at L3 Trenchant, and sold to a Russian exploit broker. It has been my opinion, and Adam, I'm sure you'll back me up on this, that I'm not making this up... it has been my opinion for several months that the exploit chain that Peter Williams sold to a Russian broker was the Triangulation exploit chain.
It has been my opinion for several months that the reason the Russians discovered the Triangulation campaign is because Peter Williams sold those exploits to a Russian exploit broker, and somehow they wound up in the hands of the Russian government. And from there, the Russian government coordinated with Kaspersky to come up with a parallel explanation for how they found this tool chain. So if people have wondered why I've been saying that Peter Williams got off extremely light in his sentence, which was, what, seven and a half years or something... I mean, this is scandalous, what's happened here. Now, Andy doesn't explicitly say that's what happened, but look, it's long been my feeling that that's what happened. I have varying levels of confidence in the little bits and pieces of it, but I can say, and as I said, you'll back me up, that I've been saying this for a while. It is my opinion, and I haven't wanted to say it publicly, but it is my opinion that what Peter Williams walked out of L3 Trenchant with was the Triangulation kit, and it caused massive harm to the security interests of the United States and its allies, and Ukraine in particular. Adam, I mean, what do you even say here? Right.
B
The story around Triangulation and Kaspersky's discovery... it always felt like there was more to say there. And we very rarely see so much of these stories coming out into the public eye. You know, seeing what that full chain looks like, seeing some of the details of the write-up Google published, we've got a bunch of indicators of compromise from that. I think iVerify also got hold of a copy of it from, as you say, cryptocurrency criminals that were using it; they managed to exfil it out. They apparently got a debug build of it out of that process, I think they said, and found a bunch of internal details and naming. Or maybe it was Google that got the debug build. Anyway, someone managed to get a debug build that had a bunch of these internal names. And the story is just... we don't normally see the inside of this unless you are in that kind of quite rarefied world. So it's interesting to see the details, it's interesting to tie it back to the Triangulation campaign, and just kind of see what the big end of the market looks like, what the expensive stuff looks like. And it looks pretty good.
A
Yeah. I mean, at the time that Triangulation was quote-unquote discovered, we said, well, they were operating at a pretty massive scale, so it was kind of inevitable that they would have got caught. But you have to wonder how much longer that campaign could have operated for if one of the insiders hadn't been flogging off the exploits for chicken feed. This guy deserves way more time in prison. This is serious stuff, you know? This was presumably an NSA operation targeting a thousand devices in Russia at a time when one of its allies is at war with Russia. To undermine that is just mind-boggling, and to do it for such a relatively small amount of money... this guy is an idiot, he's a moron.
B
And especially given that he came out of ASD and the intelligence world. He understood what that mission was. He worked with people who lived and breathed this mission, to then sell them out. I've been to a bar with some people that worked at L3Harris Trenchant, and they've all got a lot of feelings about this, oh yeah, as do spooks and some of the other people. And to sell it out for so little does just seem a little bonkers, you know.
A
Yeah, no, it's unbelievable. And yeah, the feelings are strong from people in those circles; I do know some of those people. And I think if you put this guy in a room alone with some of his former colleagues, he would not be emerging unscathed, let's put it that way. So yeah, look, I'm going to go ahead and just say Andy Greenberg's reporting there is bang on. And what a scandal. Just what a scandal. What an idiot. Absolute moron.
B
Yeah, really.
A
All right, so we're going to move on now, just briefly. We mentioned, I think, a couple of weeks ago, this acting CISA director, Madhu Gottumukkala. We were sort of talking about how there just kept being this steady drumbeat of weird news about the guy, and he seemed like he was a bit strange and not exactly good at making friends in CISA. He's out now, and Politico has written a really fierce write-up on his departure: "The interim chief of the nation's top cyber defense agency had convinced many people he was not up to the task long before his sudden reassignment late Thursday." Now, that's a lede. Well done, Politico. But yeah, he's gone. And there are details in this story. I think one of the funny ones is that he drives a Cybertruck. He's a Cybertruck guy, and I think that sort of underscores the idea that he might be a jerk. But he would leave it in the charging bay and not move it, and he actually stood down an employee who was caught on security camera walking past his Cybertruck and flipping it the finger. So he winds up suspending that guy, which seems a bit nuts. It does.
B
I think it was the cameras on the Cybertruck that caught it, so he had to go dig through and find that himself. So yeah, this guy does not seem like he was good news. And you're right, the Politico piece is just savage, describing him as asleep at the wheel, and it has a bunch of examples. For instance, in one of the briefings he asked specifically about hackers from India, where his ethnic roots are, derailing the briefing, which was about actual threats that they actually face, as opposed to whatever his personal bugbear was. Little things like that that just really don't make you feel good about his leadership. And sure enough, yes, out the door. And I think CISA is going to take a while to recover from the kind of savage mess that it's been through over the last little while.
A
Yeah. As I've described it, CISA's century of humiliation continues, and its CIO, Robert Costello, is gone as well. Yes. And the deck on this CyberScoop piece is that his nearly five-year tenure had recently been marked by turmoil. Yeah, no kidding. All right, so now let's follow up on a story we talked about last week. We had a chat about Anthropic trying to put technical guardrails on its models, and Pete Hegseth, the Secretary of Defense, getting really shirty about that and basically saying, no, you can't do that, we will designate you a supply chain risk. And look, that's what they've done, which is just insane. This is obviously going to be challenged in court, because Anthropic didn't budge. You know, I was actually fairly sympathetic to the government's case, but the correct way to go about addressing that is to go to a court and use, what is it, the Defense Production Act to argue your case, right? Not to say, well, we're going to try to harm your business, we're going to take this punitive measure. Now, of course, this is wonderful news for OpenAI, because Sam Altman has just swept in and scooped up this contract. But now we know a lot more about why it is that Anthropic was feeling uneasy. There had been some very vague reporting along the lines of, well, it's about autonomous weapons and surveillance. Now we know much more about the details of that. James, what exactly were Anthropic's concerns here?
C
Yeah, super interesting turn of events. So again, the two red lines were: this cannot be used for mass surveillance of US citizens, and it cannot be used in fully autonomous weapons. Now, the interesting subtext we've since discovered about the latter point is that they're actually completely fine with their AI being used in autonomous weapons, just not yet. They felt that it's not ready, and that at the right time they'll happily provide the models.
A
So this was not a moral stance. This is like: you want a killer robot, man? We would love to help you with that, but our models just aren't ready. Let's just wait a little more time and we'll make you a killer robot, like, the best killer robot.
C
Yeah, the current killer robot's in beta and we just don't think it's right to deploy it just yet. So that became the actual sticking point. And yes, like, Sam went from, Friday morning:
B
Wow.
C
We solidly support you guys putting out
A
statements of solidarity with Anthropic. Right. Yeah.
C
To, Friday afternoon: we signed the deal, it's going to be amazing. But, you know, we know so much more now about the Anthropic side of things and what their challenges were, but we actually know a lot less about what OpenAI has actually agreed to. The only parts of the contract that have been made public at this stage are really things that are self-serving for OpenAI's interests. But if you read between the lines of some of the things Anthropic was rejecting, it's that the government would sort of acquiesce and say, yeah, yeah, okay, we won't use your model for autonomous weapons, and then they'd throw in an em dash, ironically, and say, unless appropriate.
A
Yeah, yeah, or as appropriate, or as... yeah. So they were sprinkling gotchas through their contracts. But again, I think the real issue here, when it comes to the mass surveillance, the sticking point seemed to be that Anthropic was really concerned that various bits of the US government were going to start doing very powerful intelligence processing on commercially available information. Now, this is the stuff that data brokers sell. There are loopholes galore when it comes to the Fourth Amendment in the United States, where government agencies can just buy this stuff that it would be illegal for them to collect themselves, and then just apply a bunch of processing to it. And Anthropic's like, well, no, we don't think this is appropriate. However, the strange thing is, none of these contracts touch NSA, for example. So are they concerned that the Pentagon is going to be doing mass surveillance of US citizens? To what end? I still don't quite understand the concerns here. And I think if you're concerned about commercially available information and privacy and the Fourth Amendment and these sorts of things in the United States, the correct group to remedy this is Congress, not the chief executive of an AI company, and certainly not the Secretary of Defense. So I think what everybody has landed on here is that this is Congress's job, and they're missing in action, basically.
C
Yeah, exactly. And I think, you know, you and I have had a lot of spicy conversations about this. Even when I hear your very well-reasoned arguments, Pat, it comes down to: yes, but that's all based on trust and norms and laws, and those things seem to increasingly not be paid attention to by the administration. And so it's difficult to work out where exactly the line is between, is this just Silicon Valley paranoia, or is there something significant here? I don't know. But it's just so weird to see this playing out so openly, and, as you say, the people that should be the adults in the room making this decision are absolutely absent.
A
Adam, what's your read on this? I don't quite understand the concern. The people I'd be concerned about using commercially available information, if I were American, would be the local police and the FBI, because they're the ones who are going to actually arrest me. I'd be less concerned about the Pentagon doing it, because what are they going to do, airstrike my house? That's just not what the Pentagon is for. But where did you land on all of this? Because it is a complicated issue.
B
Yeah, I mean, your argument, like, what's the DoD going to do with this information, makes kind of sense. But I guess the thing that struck me about this is that it really is a result of the cultural context in the US at the moment. There's so much distrust of the government, of the government as a whole. Not just DoD, not just DHS; the government in general, whether it's law enforcement, whether it's military, whatever it is, is currently doing things that many Americans find abhorrent. And so of course they want to do something about it. And so many feel disenfranchised by their political representatives not doing what they want that they're doing it any way they can. I don't know whether I'm cynical that Anthropic, and to a degree OpenAI, are trying to virtue signal like this, or whether they are just generally so distressed as a country that it's hard for all of the arguments to be sensible and logical.
A
Yeah, no, I mean, I get that. But it's always been the case for us as Australians when talking about America... well, you're not Australian; I should say me as an Australian and you as a Kiwi. But in our part of the world, we just have a different attitude towards the state, in that we recognize that the state is mostly concerned with doing the right thing by its citizens, and is a necessary thing that is generally good but not amazing and can always be better. Whereas in the United States, they think the government is out to get them. And sometimes it is. So yeah, it's always a little bit of a challenge for us when analyzing the United States and the things that happen there. But look, on the topic of commercially available information, we actually have a report from 404 Media which looks at CBP, Customs and Border Protection, having bought a bunch of data out of the advertising ecosystem to track people. This is stuff that our colleague Tom Uren has written about a lot over the last few years in the Seriously Risky Business newsletter. Clearly some legal reform is required here. To be honest, this concerns me; if I were American, this would concern me more than the Pentagon stuff. James, do you have any thoughts on this piece?
C
Yeah, look, I was thinking about how, the first time you launch an app, you get that pop-up that seems to be in every app now that's like, do you wish to ask this app not to track? And I think when you've got a story like this, you've got to realize that it's far more than just the app and the advertisers that you're touching when you say ask not to track. This actually proliferates out to data sets that will be readily available to, as you said, local police, DHS, FBI, et cetera. So yeah, maybe the string's too long to fit in the button, but it really should say "ask DHS and ICE not to track" when you launch that app.
A
Yeah, I got 99 problems but the Pentagon ain't one. Right takeaway there. And look, we've got another really interesting piece here that touches on privacy and AI. This is about large-scale de-anonymization using LLMs, where you could basically take, what is it, a writing sample and then tie it to other online personas that may be anonymous. Is that about it, James?
C
Yeah, look, this one is really frightening if I'm honest. So essentially what they proved here is that using an LLM in a novel way, and the embedding technology that I'll get to in a second, they can do cross-platform de-anonymization. Right? So if you've got your LinkedIn profile, but you run an anonymous Reddit account, an anonymous Hacker News account, or whatever else, it is now quite trivial, and demonstrated at scale, which is the other key element here, to stitch those profiles together and essentially uncover who you are on these other platforms, even though you are technically operating as an anonymous user. What's super interesting is they call back to the Netflix Prize data set, the subject of a 2008 paper in which researchers took this large corpus of anonymous data from Netflix and applied some, at the time, pretty great techniques to de-anonymize you out of it with only two, maybe three data points. That was shortly before I joined Apple, and that was absolutely our focus: anonymizing wasn't enough, you had to protect the anonymized information against correlation, because correlation was so easy. But the problem with that exercise was it didn't scale. And the scary thing about this paper on LLM-based de-anonymization is, first of all, they did this with entirely publicly available APIs. There's no custom-trained model, no huge financial barrier that would make this unworkable for any entity with access to a frontier model. And the second thing is, because they've used LLM embeddings, which essentially use the LLM's multidimensional space and understanding of language to create a database of relevant and related terms and language structures, they've demonstrated that this can be done at scale, where the efficiency drops off really gracefully, as opposed to the previous techniques, which just fell apart at scale.
And like, I'm trying not to be cynical, but it's incredible timing that this article drops right as there's so much concern about mass surveillance being facilitated with LLMs.
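Mechanically, the cross-platform stitching James describes reduces to nearest-neighbor search over style vectors. Here's a toy sketch of that idea, not the researchers' actual pipeline: a real system would call a frontier model's embedding API, whereas here a character-trigram count stands in for the embedding, and all names and sample texts are invented.

```python
from collections import Counter
from math import sqrt

def style_vector(text: str) -> Counter:
    """Toy stand-in for an LLM embedding: character trigram counts."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def link_persona(anonymous_posts: str, known_profiles: dict) -> str:
    """Return the known identity whose writing style best matches."""
    anon_vec = style_vector(anonymous_posts)
    return max(known_profiles,
               key=lambda name: cosine(anon_vec, style_vector(known_profiles[name])))

# Hypothetical demo data: one known writing sample per public identity.
profiles = {
    "alice_linkedin": "Honestly reckon the gadget is heaps good, heaps reliable, honestly.",
    "bob_linkedin": "Per my analysis, the device satisfies all stated requirements.",
}
anon_reddit = "honestly this new gadget is heaps slow and heaps buggy, honestly"
print(link_persona(anon_reddit, profiles))  # alice_linkedin
```

The scale point from the paper is exactly this shape: once every persona is a vector, de-anonymization is a bulk similarity search rather than a bespoke manual investigation.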
A
Yeah, I mean, I'm going to go ahead and just say that's a coincidence in my view. But holy dooley, it's the sort of thing that has been theorized, right, for a long time, this sort of analysis. Being able to put it on auto and just get it done like that is the new part, and I think so much of what AI delivers is around scale, which is something that has come up again and again. And speaking of, man, what was it last week, James? When we were talking about those Fortigates getting owned by Claude Code, and it was pretty rudimentary stuff, you said, look, it's a matter of time before someone just tricks Claude into doing an end-to-end compromise just by asking it nicely. And a couple of days after we had that conversation, out popped this story, which is everywhere, about the Mexican government losing gigabytes upon gigabytes of data to a Claude Code-based attack that looked exactly like what you described.
C
Yeah, again, it's low-rent hacking by all accounts. And interestingly, it hit many, many different properties throughout the Mexican government. This is not like one service got popped and they just siphoned out the database. This is multiple places hit, multiple different data sets taken. But it all tied back to use of Claude and OpenAI. And the moral of the story, to the point I was making last week: if you break an offensive cyber campaign down into small enough chunks, every one of those little chunks can be made to look like defensive cyber work that the model will gladly, happily help you with. But then you roll them all up and it's just straight-up hacking.
A
Man, I actually did a sponsor interview last night with Toni de la Fuente, who is the founder of Prowler, right, which is a cloud security scanner. And what's really interesting is, if you ask Claude to go and help you with your cloud security posture, it actually will try on its own, then give up, and download and run Prowler. Which is kind of funny. But we've got to this situation where AI means that nailing down your cloud infra in particular, and getting all of that stuff configured correctly, is so important now, because every little skid with access to an LLM these days, if you don't find the misconfiguration, they will. I mean, Adam, do you agree with that view, that you've got to get your ducks in a row these days because the Script Kiddie Mark 2s, they're coming and they're using LLMs?
B
Yeah, I mean, in the end, the important thing now is that someone wants to hack you, not that they can hack you. Right? If they show up and just say, hey, I would like this organization to be compromised, they give that to Claude or whatever to go get done. That's what you're defending against: a kid with intent. But all of the means is now automated by machines, which, yeah, it's a hell of a wild time. We've always said security through obscurity doesn't really work, but now you have to get everything right all the time for real, not for pretend like it's been for the last 25, 30 years.
A
Yeah, I mean, it's a bit satisfying, I guess, being able to say: no, for real, this isn't theoretical anymore. But yeah, wild times. Now, we've got a write-up here from Dan Goodin on this thing that they call an air snitch. I've seen this doing the rounds big time, but you're very lukewarm on it, Adam, and I think it's really cool. So the idea is there's a bunch of old-school techniques you can use to bypass the isolation between a guest network and the primary network on a wireless access point. I think that's going to be quite useful to a bunch of attackers in a bunch of different contexts. But you were like, eh, this is all just Ethernet tricks, Ethernet wasn't designed to separate like this, not really surprising. So you bah-humbugged this when I was all excited.
B
I mean, honestly, both of those things are true, right? It absolutely is interesting. Being able to go from someone's public Wi-Fi into their internal network, or to steal traffic or interact with devices on the internal network, that is useful. But on the other hand, it is still all old bah-humbug Ethernet. The guts of this research is essentially manipulating the layer 2 traffic flows of the Ethernet, and Wi-Fi is just Ethernet over radio, to make traffic go where it's not supposed to. The whole concept of client isolation, and of guest networks, or having multiple networks on the same piece of networking hardware, so one access point that runs multiple SSIDs, is all implemented on top of Ethernet. And Ethernet was never really designed for this type of segregation. So the guts of the trick is manipulating, say, the CAM tables, the tables that map MAC addresses to the port a device is on. On a wired network that's obviously a physical Ethernet port; on wireless, that port is a logical construct enforced by crypto. And if you send a packet that says, hi, my MAC address is 12, when MAC address 12 actually belongs to somebody else, the network will learn that station 12 is in your direction and start sending traffic towards you. That gets you traffic in one direction, and there are other tricks for getting traffic in the other direction. So they can manipulate the layer 2 forwarding to essentially do man-in-the-middle, where traffic between stations that you wouldn't otherwise be able to see gets delivered to you. And in a wireless context, that means delivered to you with crypto keys that you know about, because you've negotiated your own connection to the network. So it's legitimately interesting work.
But on the other hand, my Kiwicon 1 talk, which was, what, 2007, was doing this on carrier Metro Ethernet networks, bypassing client isolation on large-scale Metro Ethernet.
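Adam's CAM-table point is easy to show with a toy model. This is an illustrative sketch of layer 2 source-address learning in general, not the tooling from the research: a switch (or an AP's bridging logic) trusts whatever source MAC it last saw, so one forged frame moves a victim's "port" to the attacker. All names and port numbers here are made up.

```python
class LearningSwitch:
    """Toy model of layer 2 source-address learning (the CAM table)."""
    def __init__(self):
        self.cam = {}  # MAC address -> port it was last seen on

    def receive(self, src_mac: str, dst_mac: str, in_port: int):
        # The switch learns: "src_mac lives on in_port."
        self.cam[src_mac] = in_port
        # Forward out the learned port for dst_mac (None = flood everywhere).
        return self.cam.get(dst_mac)

sw = LearningSwitch()
# Victim on port 1 talks to the gateway on port 2; the switch learns both.
sw.receive("victim-mac", "gateway-mac", in_port=1)
sw.receive("gateway-mac", "victim-mac", in_port=2)
# Attacker on port 3 forges a frame with the victim's source MAC...
sw.receive("victim-mac", "gateway-mac", in_port=3)
# ...so the gateway's replies to the victim now land on the attacker's port.
print(sw.receive("gateway-mac", "victim-mac", in_port=2))  # 3
```

On wireless, "port" is the per-station crypto association rather than a physical jack, which is exactly why the man-in-the-middle traffic arrives under keys the attacker already holds.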
A
I remember that talk. Yeah, this was the enterprise-grade Ethernet stuff, and you could just hop your way through every other customer, basically. I remember that talk. 19 years ago.
B
Wow, this is it. But Wi-Fi.
A
Yeah. Yeah, well, cool.
B
Yeah, I guess I should have thought to go back and try it against Wi-Fi stuff.
A
But no. So like, it's 2026, man. If you want to talk about hacking like it's now, you just get a phish kit, right? And you do the SaaS and the, you know. Anyway, everything's changed. Everything's changed.
B
We just asked Claude to do it now.
A
Yeah, you just ask Claude and it's done real quick. We're just going to mention this one: CISA has ordered an agency to patch a bunch of Cisco devices. Not really interesting in and of itself. What is interesting, though, is that when you were reading about this, you stumbled across the Australian Signals Directorate guide to threat hunting and you were like, oh, okay, this is actually really cool. So there's this Cisco SD-WAN threat hunt guide from February 2026, version 2.4. There's some light reading for you. But you just wanted to get that in the show notes because you said it's actually a good read.
B
Yeah, I mean, Cisco SD-WAN gear is everywhere, especially in telco environments, and that's so far, so normal. But the threat hunting guide from ASD just says: we have been here and had to hunt for stuff in this environment a lot, because we're at version whatever it was, 2.4, of this doc, and the doc's very good. That says a bunch of people have been up in a bunch of SD-WANs in Australia, and I just thought that was interesting. And also, good job, ASD. And James,
A
this one you wanted included in the week's show, which is a security bug in OpenClaw. And I'm like, what is it, some sort of prompt injection thing? And you're like, no, it's heaps stupider. And I'm like, wow, stupider than prompt injection? Let's hear it. So tell us.
C
Yeah, so good. I got such a great laugh out of this. So the default security stance for OpenClaw is, they say, the backend process only binds to localhost, so it's safe, no one can access it from the outside. But they forgot that any browser can happily talk to localhost, because that's not covered by the origin restrictions that generally prevent that sort of cross-site access in a browser. So it's such a trivial vector: any JavaScript running in a browser can access the local OpenClaw service. And they forgot to put rate limits around how many times you could attempt to authenticate to that WebSocket port on localhost. So it's very easy to just brute-force access into your OpenClaw. Just hilarious, because with localhost you think, yeah, of course that's safe, I can't access it from the outside world. But buddy, the browser is where that boundary is now. So yeah, good for the lols.
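For what it's worth, the missing control here is simple. Below is a hedged sketch, with invented class names and thresholds rather than anything from OpenClaw's actual codebase, of the sort of per-client attempt throttle that would have blunted the brute force (validating the `Origin` header on incoming local WebSocket connections would be the other obvious fix):

```python
import time

class AuthRateLimiter:
    """Minimal per-client throttle: refuse auth attempts once a client
    has racked up too many failures inside a sliding window.
    Hypothetical sketch of the control the service reportedly lacked."""
    def __init__(self, max_attempts: int = 5, window_s: float = 60.0):
        self.max_attempts = max_attempts
        self.window_s = window_s
        self.failures = {}  # client id -> list of failure timestamps

    def allow(self, client: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        recent = [t for t in self.failures.get(client, []) if now - t < self.window_s]
        self.failures[client] = recent  # drop expired failures
        return len(recent) < self.max_attempts

    def record_failure(self, client: str, now=None) -> None:
        now = time.monotonic() if now is None else now
        self.failures.setdefault(client, []).append(now)

limiter = AuthRateLimiter(max_attempts=5, window_s=60.0)
# A brute-force loop burns through its budget almost immediately...
for attempt in range(100):
    if not limiter.allow("browser-origin"):
        break
    limiter.record_failure("browser-origin")
print(attempt)  # 5: the sixth attempt is refused
```

With no limit, a page of hostile JavaScript can hammer the localhost WebSocket as fast as the event loop allows; with even a crude window like this, a credential brute force stops being practical.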
A
And you pulled out this one too, which is a guy doing some research into his robot vacuum, some Claude-based research I think, who managed to pull down an API key or something. And he's like, oh cool, now I can get access to the backend for my robot vacuum. And he's used it and it's all of the backends for all of the robot vacuums, and there's like 6,700 of them.
C
Yeah, you've got to read the article for this one, for a couple of reasons. One, his whole use case here was: I want to use my PS5 controller to control my robot vac. What an awesome weekend project. He gets out Claude Code to do this, and Claude Code happily finds that there's an API key you can use to access the backend services for these robots. And then he discovers it's the same backend key that's used for all of them. But one of the best bits is, when you look through the article, he shared some of the screenshots, and Claude Code mocked up this gorgeously late-90s hacker-kid world map with pings of where all of these vacuums were. The vulnerability itself is pretty basic, a shared pre-shared key, but just the fact that this was found by accident and reverse engineered by Claude Code. Yeah, we're going to see more of these.
A
Yeah, got to love it. Now, meanwhile, Suzanne Smalley over at the Record has reported that a Greek court on Thursday sentenced the founder of the Intellexa consortium and three associates to prison for their role in a sprawling spyware scandal that has dominated Greek headlines since it came to light in 2022. I think this is good. We're not used to seeing these people actually sent to prison. Adam, do you think this sends a message to other people who might be thinking about cutting some corners when selling these tools, knowing what they're being used for, when what they're being used for is not good?
B
I think it will send a message. I mean, the founder, and there was another associate from Intellexa itself. I don't know that they're necessarily in Greece or within the immediate jurisdiction; if they're smart, they'll be out of the jurisdiction. But yeah, it's got to give some pause to other people playing in this game, that you will get some blowback. What's interesting in Greece, though, is that none of the people who bought and used it have so far faced any particular consequences, even though this was used in the context of spying on political opponents and so on. You would expect there to be some consequences for that kind of use. But so far we've only seen Intellexa and, I think, the local reseller facing consequences. Still, I'm sure it has some chilling effect on other people who play in this game. And ultimately, being found guilty in Europe, the arm of Europe's extradition treaties and other mechanisms is pretty long, and getting away from that, I imagine, is going to be difficult for them.
A
Yep. Now, moving on to some more law and order related stuff, Daryna Antoniuk over at the Record again has a report. And this is kind of our skateboarding dog this week, Adam, which is a guy in Moscow who's been accused of posing as an FSB officer to extort the Conti ransomware gang. So it's good there are some arrests around Conti. Unfortunately, it's of someone who was trying to shake them down.
B
This was after someone hacked Conti and leaked a bunch of their internal chats, and that outed a bunch of identities and cryptocurrency wallets and so on and so forth. This guy apparently read the Conti leaks and decided that the bright life choice was to go shake down some criminals for money while claiming to be the FSB. And of course this was going to end well for him. Mr. Ruslan Satchuchin, was the name? He is probably going to have a bad time. I imagine it ends up with him either on the front or in a special penal colony or whatever else happens there in Russia. But this guy, like, what were you thinking, buddy? What were you thinking?
A
Well, he was thinking he wanted to get paid. I just think it's funny that the Conti operators didn't get arrested, but the guy who tried to shake them down did. And pretending to be the police to solicit bribes is a perfectly reasonable scam in Russia; people are going to believe you, because that's what the cops there do anyway. Now, we're going to end on a sad note, which is that FX, Felix Lindner, who is a... was... I'm sorry. He's passed away. He was a very well known hacker and security researcher, a tremendously intelligent guy. He'd been on the show at least once, I think maybe a couple of times. I'd met him before as well. I liked him. I hadn't seen him in a very long time. Either way, Felix is no longer with us, and that sucks.
B
Yeah, no, that's real sad. His name was one that I remember reading in old-school text files and things. I mean, the work on some of the tools that came out of Phenoelit and the other European hacking crews, his name was all over that. And a lot of people credit him with making introductions around the industry and in the scene; just being one of those people that, A, is a great hacker, but B, is also just a lovely person to be around and makes great connections between people. Those people have outsized value in our communities, and we value them very much.
A
So, vale, Felix, FX Lindner. And yeah, it's very sad. A bummer of a way to end the week; sorry about that, everybody. And I'll just let people know, too: after we finish recording today, I'm actually getting on a plane down to Sydney, and I will be speaking, well, I'm on a panel at a conference down there. Atmos are running a conference called Sphere 2026. It's a one-day event, so I imagine some of you listening to this are going to be there. Come and say hello, by all means. They've got a great lineup of speakers, and Chris Krebs is coming down too, so I'm going to catch up with him, which is going to be great. I'll catch some of you down there. But that is actually it for the week's news. Big thanks to you, Adam Boileau, big thanks to you, James Wilson, and we'll do it all again next week.
B
Thanks very much, Pat. I will see you then.
C
Thanks, Pat. See you then.
A
That was Adam Boileau and James Wilson there with a check of the week's security news. Now, a little bit of housekeeping before we get into this week's sponsor interview. We have launched two new podcast feeds. First, the Risky Business Features channel. James is publishing podcasts in there; he's recording chats with Brad Arkin, who is the former chief security officer of Adobe, Cisco and Salesforce. They have some fabulous conversations, and there are other interviews and things going on in there too. So head to risky.biz and find that feed, or just search your podcatcher for Risky Business Features. We've also launched a product catalog on the site: head to risky.biz and hit Catalog. Basically it's a sponsor directory at this stage, just very plain-language descriptions of what some of these companies do, and we're planning to build it out and add more entries over time. We've also got Risky Business Stories, which is where we're going to publish some of the stuff Amberleigh is working on, so you can subscribe to that one too. Please do subscribe to these feeds; it really does help us. Now, it is time for this week's sponsor interview with Sydney Maroney, who is the head of threat hunting at Nebulock, our sponsor for this week's show. Nebulock does AI-based threat hunting, so you can do vibe hunting, which is a lot of fun. And Sydney Maroney actually wrote the guide on agentic threat hunting. I've linked through to the GitHub page for that guide in this week's show notes. But she decided to come on the show to talk about that guide a little bit, and about agentic threat hunting in general. Here's what she had to say.
D
This is just an easy way for people to apply it to their threat hunting and also give their threat hunting memory and context, which I think are huge. So right now if you run a threat hunt, you might not know what you ran a year ago. So you start from scratch every time. And with a framework like I've created, you don't start from scratch. You have some sort of memory to go from.
A
I mean, it really does feel like people are just now realizing that these things don't have a memory, correct? These LLMs, they don't remember anything, so you've got to prime them with the correct context every time you run a query. And then you're back to having the sorts of plumbing you need in a typical enterprise solution in the first place: there's going to be a database, there's going to be some structured data, there's going to be some sort of query language that the agent has to know instead of a human operator. And this is where we are.
D
Exactly. That's part of the framework. The first step is to implement a repository. And so I use an example of Git, so storing all your hunting knowledge, your past hunts and your current hunts in Git and then using that and running queries against your git repository to find out information about your hunts and your program.
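As a toy illustration of that "query your hunts repo" step, here's a sketch under invented assumptions: past hunts live as markdown files under a `hunts/` directory in a git-tracked repo (the layout and filenames are hypothetical, not the framework's actual convention), and a pre-hunt check just greps them for a technique before you start from scratch. A real setup might use `git grep` or feed the matches back to the model as context.

```python
from pathlib import Path

def prior_hunts(repo_dir: str, keyword: str) -> list:
    """Return past hunt write-ups (assumed hunts/**/*.md layout)
    that mention a given technique or keyword."""
    hits = []
    for doc in sorted(Path(repo_dir).glob("hunts/**/*.md")):
        if keyword.lower() in doc.read_text(encoding="utf-8").lower():
            hits.append(doc.name)
    return hits
```

So before hunting, say, DNS beaconing again, the agent (or the human) asks the repo first: `prior_hunts("/path/to/repo", "beaconing")`, and only starts from scratch if nothing comes back.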
A
Where are enterprises at these days with trying to go agentic with their threat hunts? Because I would imagine there are few organizations trying to do this, and I'm guessing the ones that are trying are the ones who already have threat hunt teams, right? And they're just trying to automate some of that with AI agents.
D
Correct. I think a lot of this is cutting edge, pushing the boundaries of where AI is going. Most companies are stuck using a certain model or a certain tool when it comes to AI, so they have a lot of limitations, and therefore it can be really hard to implement. I've been working with a global manufacturing company that has implemented the agentic threat hunting framework, and they are struggling, just like any other global company, to move up towards the agentic layers, because there's just so much process and it's such a large company. That's why, as part of the framework, I have a maturity model: you can start from just documenting your hunts, then add AI and use it to run against the hunt, then build agents out and connect MCP servers. So you layer on top and start gradually, rather than throwing a bunch of agents or full agentic AI at your threat hunting from day one.
A
Now, we were chatting before we got recording, and you are what I would describe as a heavy Claude user; you spend a lot of time in Claude. I guess one thing I find interesting about this whole Claude Code in the enterprise thing, SaaS valuations tanking and whatever, and the idea that coding is essentially free for some stuff these days, goes back to what I was saying earlier about how you need to provide these models with context. You need to actually build stuff, an architecture that these models can then use to be more useful in threat hunting. Is that easy now with some of these AI building tools, or is software design for something like this still hard?
D
I'd say it's getting easier. If you have an idea, you typically can build it with AI, but you still need to ensure there's structure, and sometimes that's the difficult part to figure out. I've done threat hunting for about seven years now; I helped build the PEAK threat hunting framework at Splunk, where you apply structure to everything. So this is built on those foundations, and I think knowing those foundations is still going to be important, applying them with the AI to build things out, instead of just letting the AI do everything and going, well, it did something, I don't know what it did. Knowing those foundations is going to be crucial.
A
Yeah, I totally understand where you're coming from with a maturity model for this sort of thing, where people using a copilot approach to document what they've been doing on a manual threat hunt might be your lower level, and then you're getting up to the full-stack solution, probably something along the lines of what you've built at Nebulock. But I've got to ask: you've got about seven years of experience in threat hunting, and you've been, what, four months with Nebulock now. How much is the agentic stuff actually changing the game, and in what ways? I'm guessing a lot of it comes down to speed and volume, being able to go down rabbit holes that you never could have gone down manually. But then the question becomes, is that useful? Are you actually turning up interesting stuff doing that? So I'm after a threat hunter's perspective on what the actual benefit is here, because there are so many imagined benefits. What are the real benefits of AI in this space, given that you're the expert, having previously done this at Splunk and written the PEAK framework, and now moving into the AI side of things? Where is this all going, and why, and what's the point? Big question.
D
Such a wonderful question. If you had asked me last year where I thought threat hunting was going, I would have told you it's going to change incredibly in the next year, and that is because of AI. AI is going to speed things up and allow you to do more. You nailed it. I used to run threat hunts in two to four weeks, manually running the queries, doing all the research, everything involved in running the hunt. Now I can do a threat hunt in an hour, maybe a few hours, depending on the scope. And it is just impressive, some of the analysis it can do. Of course there's a lot of double-checking of work; we all know we need to trust but verify AI. So I do think it's going to change a lot of our work, and it currently is. I think a lot of teams are starting to realize that.
A
Yeah. So is it the case, though, that the agents, because they can operate at speed and scale, will run down lines of inquiry that you wouldn't bother with manually and then actually find something interesting? Or is it more that they wind up going down dead-end rabbit holes? Because I imagine it's maybe a bit of both.
D
Right, agreed. It's a little bit of both. And I think that's where something like memory can be really helpful. With the agentic threat hunting framework, I've started building memory pieces in. I've named them sessions: they record some of your queries and the results you get, and how you determine something and why you determined it good or bad.
A
Yeah, so the result, when it comes back to the human operator, is like: yeah, you've done this 10 times and every time it turned out to be nothing.
D
Exactly. Because that's something we tend to forget. I don't remember what I hunted a year ago, let alone what my colleague hunted three months ago. So that's where the framework comes into play and helps with the memory. And those sessions can really help with remembering what decisions you made and helping you make better decisions in the future. That's the idea behind it.
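That "you've run this 10 times and it was always nothing" recall is simple to picture mechanically. Here's a toy sketch with invented names, not the framework's actual sessions format: record each query with its verdict, then summarize the history when the same query comes around again.

```python
from collections import defaultdict

class HuntSessions:
    """Toy session memory: record each hunt query and its verdict,
    then summarize prior outcomes when the query recurs."""
    def __init__(self):
        self.history = defaultdict(list)  # query -> list of verdicts

    def record(self, query: str, verdict: str) -> None:
        self.history[query].append(verdict)

    def recall(self, query: str) -> str:
        verdicts = self.history.get(query, [])
        if not verdicts:
            return "no prior runs"
        benign = sum(1 for v in verdicts if v == "benign")
        return f"run {len(verdicts)} times, {benign} benign"

mem = HuntSessions()
for _ in range(10):
    mem.record("rare parent process of powershell.exe", "benign")
print(mem.recall("rare parent process of powershell.exe"))  # run 10 times, 10 benign
```

An agent primed with that one-line summary can deprioritize a lead that has been a dead end ten times running, which is exactly the judgment a human hunter forgets to carry between quarters.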
A
Now, correct me if I'm wrong, because I'm not an AI expert, I am a podcaster. But when we look at these agents and try to prime them with context every time we want to run a query, is there sort of an upper-bound limit to how much context we can give them before they start getting confused? Because that's my basic understanding of these things: you can only really give them so much context before they get a bit overloaded and their little digital brains
D
just start freaking out. The hallucinations, yes. It's going to depend on your scope. It's like when you use ChatGPT or Claude or any tool: if you give it some enormous scope, it's going to get confused. If you scope it down properly, which I try to do with threat hunting, or when I'm using Claude Code or whatever tool, then it does better with the context and doesn't get confused. But it's always a problem, and I think that's something consumers are trying to figure out right now, and all the companies building the models are trying to figure out how to resolve. So I think we're all in it together, really.
A
Yeah. I mean, as I said earlier, I can't remember if this was before or after I hit record, but I just think of these agents as self-sourcing bash scripts instead of incredible, superintelligent beings. And I find that's a useful way
D
to think about it. A lot of it is just scripting. I mean, if you think of something like OpenClaw, it's just running a bunch of cron jobs, running reminders for you. It has a little bit of knowledge and memory, but still, it's just code on the back end.
A
So look, you've come here to talk about your agentic threat hunting framework, and we've barely talked about it. We've talked about the need for context and the different maturity levels and stuff. What are the other big things you've squeezed into this thing that you think people should know about before they go and check it out?
D
So one of the features of the framework is something called the lock pattern, which is just a pattern that a human and an AI tool can follow to do threat hunting. Right now there's not a lot out there on documentation for threat hunting, and I know documentation isn't exciting, but everyone in security knows it's incredibly important. So it gives you a path to document that both a human and an AI can understand. When you get to the point where you're using AI and maturing, you can feed it in and let your tool use it just as you would, and probably understand it even more and do better analysis and
A
be a little bit more consistent as well.
D
Exactly.
A
All right, now, where can people find this framework?
D
You can find the framework on GitHub or just check out agenticthreathuntingframework.com and it'll point you to a blog post that will then lead to our GitHub repo that has everything. Again, it's open source and vendor agnostic, so it's just a methodology. So go check it out.
A
Awesome. I will drop a link to the GitHub in this week's show notes. Sydney Maroney, thank you so much for joining me to talk about AI in threat hunting. Very interesting stuff.
D
All right, thank you.
A
That was Sydney Maroney of Nebulock there. Big thanks to her for that, and big thanks to Nebulock for being this week's Risky Business sponsor. And that's it for this week's show. I do hope you enjoyed it. I'll be back soon with more security news and analysis, but until then, I've been Patrick Gray. Thanks for listening.
Podcast Date: March 4, 2026
Host: Patrick Gray
Co-hosts: Adam Boileau, James Wilson
This episode of Risky Business delves into the cyber dimensions of the ongoing war against Iran, discussing the real-world impact of cyber operations in modern conflicts. Patrick Gray, Adam Boileau, and James Wilson cover the hacking of Tehran’s surveillance infrastructure, the current lull in Iranian cyber-ops, new frontiers in threat hunting with AI, notable recent cyber incidents, privacy and AI risks, as well as law enforcement and intelligence community shakeups. The episode features a sponsor interview with Sydney Maroney of Nebuloc, focusing on agentic threat hunting frameworks.
Starts at ~49:17