
Loading summary
A
Foreign and welcome to another edition of Risky Business. My name is Patrick Gray. We'll be chatting with Adam Boileau in just a moment about all of the week's security news. And then we'll be hearing from this week's sponsor. And this week's show is brought to you by corelight, which of course is the company that maintains Zeek, the open source network security data sensor thingy. Very much an industry standard sort of thing. But of course, there's a lot more to call it these days than just Zeek. And its head of strategy, Greg Bell, will be joining us in this week's sponsor interview to talk about a bunch of sort of AI features that corelight has now shipped, including a, you know, model context protocol and whatnot, which, you know, in the context of any security technology that's collecting and distilling a lot of data, I mean, that's really where we see some excellent use cases for AI. So do stick around for that interview, which is coming up after this week's news, which starts now. And Adam, we're going to actually start with a non cyber security story. About six months ago we had an interesting conversation when the Australian government had sanctioned Terragram, which was like a neo Nazi group of telegram channels. And I suggested at the time that maybe that was in connection to some of the anti incidents, violent incidents of anti Semitism that we've seen in Australia, including a synagogue being torched and, you know, cars vandalized and things like that. So I floated the idea that maybe this was connected in somehow in some way to those incidents. And then I had a conversation with a friend of mine after that show aired. And here's what I said the next week. Of course, last week we talked about the sanctions being imposed on Terragram and how that might be linked to a spate of sort of vandalism and, you know, torched cars and whatnot happening in Australia. This journalist, he's a bit skeptical that it's actually Terragram behind that stuff. He pointed to a similar campaign in Sweden which was actually traced back to Iran, but there was more of a sort of organised crime nexus there, I don't know. But anyway, the point is he's sceptical. He thinks the sanctions against Terragram could just be the government doing something to be seen to be doing something. Who knows? In time we will find out. Well, it turns out my journo buddy there was absolutely right. So that was Cam Wilson, who writes for Crikey, also writes a tech newsletter called the Sizzle. So well, called Cam because yesterday the Australian government ejected, Yeeted the Iranian ambassador to Australia and three other diplomats. It's the first time that we've expelled a diplomat, expelled an ambassador since World War II. Because it turns out that Iran was actually orchestrating violent anti Semitic attacks in Australia, which is just wild times. So the ASIO Director General, Mike Bird, just stood next to our Prime Minister yesterday at the press conference and, and talked about this and, you know, kudos, I think, to asio. We've got a bunch of listeners at ASIO and really, really good work there. But wow, wild times, right?
B
Yeah, it's certainly a pretty, pretty crazy story. And you know, when you see the pieces kind of laid out like this, like it, you know, it kind of lines up and makes sense. Of course, Iran is, you know, is denying it, as you'd expect. But, yeah, when we have seen in the past other attacks, you know, like ones in Sweden, you know, you can kind of see how it works, like when they end up paying local crims or whatever else to do these things without necessarily understanding what the big picture is. But, you know, I would have thought burning a synagogue, like, you don't need a lot of geopolitical nous to kind of understand what's going on.
A
Well, I remember we were talking about this, like some of the people who are being arrested in connection with some of these incidents were like just meth heads, you know what I mean? They weren't politically motivated. It was very strange. We had the Prime Minister saying that they believed there was some sort of foreign nexus with this activity. And to be clear, what it looks like Iran was doing was paying local criminals to do things like torch Israeli restaurants and a synagogue, which is just, you know, disgusting. And the expulsion of the ambassador is absolutely the correct move. So I think everybody top, top to bottom, except the Iranians, deserves, deserves a lot of credit for this. This has been a really sort of upsetting incident in Australia and it's good to see that the, the correct sort of action has been taken. I will say, though, you never know if perhaps the IRGC was recruiting through telegram and whatnot. I'm pretty sure there's going to turn out to have been some sort of online nexus with all of this. But given this is something that we mentioned on the show six months back, I just thought I'd, I'd, I'd follow it up because it was, I mean, it's big news. I mean, globally, it's big news that you've got the IRGC actually recruiting criminals to do these Sort of things in Australia, just absolutely nuts.
B
Yeah, it is pretty nuts though. And although there's no direct cyber component to this, like ASIO is following the money back through many, many degrees of cutouts and so on, and you kind of got to imagine that that probably involved a little bit of cyber. You know, you would think there would.
A
Be a process, you would think there would be some of our listeners who may have been involved in that. If that's the case, well done. But look, let's, let's roll on to some more you Cyber, cyber cyber cyber news and staying with Iran, actually we've got a report from our very own Catalan Kimparnu that a group of hacktivists has been sabotaging the comms on Iranian ships just to, I don't know, give them a hard time, which, given what we found out this week, I'm on board.
B
Yeah, yeah, exactly right. So this group who have targeted satcom for ships before, they posted on, I think it was on a telegram or somewhere that they had broken into a SATCOM provider in Iran and then had gone downstream into the actual satellite terminals on the ships and like destroyed the disks on them. So they rendered the SATCOM systems on these ships inoperable. They posted some screenshots of the Shell commander that they were using to wipe the storage on the actual terminals on the ship. So that was a nice detail. Thanks very much. This, I guess this group, when they did this last time, so when they hacked a bunch of other vessels, they crippled what, like 116 vessels back in March and this time around it was like another 60, something like 39 oil tankers, 25 cargo ships. So like, you know, pretty large scale attacks. So last time they did this, this coincided, timing wise, with the US bombing Houthi rebels. And this time around we had some like, sanctions in the US against organizations in Iran that were evading oil export restrictions. And now all of a sudden there's a bunch of oil tankers with bust Satcoms. So, I don't know, take from that what you will.
A
Sounds like totally legitimate activist behaviour to me. The timing is a complete coincidence, of course. Now just updating on something we've been talking about a little bit lately, which was this possible leak of that SharePoint exploit through Microsoft's MAP program. You know, I went and had a look at the time, at the number of Chinese organizations that Microsoft shared vulnerability data with through the MAP program. You know, and the MAP program is where participants get vulnerability data ahead of time, ahead of public release. And there were a Lot of Chinese orcs. When I, when I looked at, when I looked at who this information was being shared with. Microsoft is now scaling back the number of Chinese organizations that get this early access.
B
Yeah. And that seems like a reasonable response as to whether it's a meaningful one. We don't know. I mean we had that conversation about what was it, SharePoint on prem being maintained out of China anyway. So like maybe they don't need early access to the bugs.
A
Yeah, I mean, I'm sure that the MSS person who's moonlighting as someone, as a code maintainer for SharePoint can just pull the bugs out of the bug tracker. Right, Exactly. Oh dear.
B
But either way, Microsoft had to do something and they like, clearly they have done something. And you know, the MAP program, this is not the first time we've seen stuff leak out of it and I think last time they also did some similar kinds of things. So like big picture isn't an effective response. I don't know. But they're doing something and you know, something is better than nothing given the spanking. Microsoft is having another, you know, other aspects of its business.
A
We need to do something. This is something. So we'll do it. That old chestnut. We've also got some other Microsoft related reporting here. Another one from ProPublica. Now I spoke about this one and indeed I talked about the MAP one as well at some length with Chris Krebs and Alex Damos, because the Wide World of Cyber podcast is back. So we published that one into the feed on Monday. But it turns out Microsoft really did not appropriately disclose the fact that Chinese engineers were going to be supporting DoD private cloud instances, which I can't say is terribly surprising because it's really hard to imagine that DoD would have signed off on that. So the fact that we find out now that they said that they would use, you know, remote support agents and whatever, not so surprising they didn't mention they were Chinese because that I don't think it would have happened if they disclosed it appropriately.
B
Yeah, like you wouldn't want to put that like right up there in the executive summary. And ProPublica have seen, I think some of the documentation that Microsoft submitted, you know, when they were going through the process of getting these contracts, they submitted a system security plan 125 page document. So yes, putting in the exact summary up the front of that. Probably not what they did. Apparently what actually happened is in the middle of the document somewhere. It describes this escorted access process. Doesn't mention like kind of says they might use Staff that aren't cleared, but doesn't say, and by the way, we might just outsource this to people in mainland China. Yay.
A
I mean, it's funny because we had Alex describing how, because he's out of Sentinel One now, he's left, he's working for a startup again. And he said that at Sentinel One he had a bit of an internal bun fight as the CISO when he wouldn't let one of their Swedish staff touch certain systems just because of the passport that he held. And you know, so the idea that it turns out Microsoft is like just letting the, you know, Chinese team in is, is kind of nuts.
B
It is pretty crazy. And like that episode of Wide World Cyber is definitely worth, I think it's worth watching in the video just because Alex's facial expressions during that conversation, well, well worth the price of admission there. So. And Chris's beard too.
A
Yes, Chris, Chris. Chris has a very awesome vacation beard. So do check it out on YouTube. Now we talked about how Frack had had its 40th, had published its 40th anniversary edition and one of the articles in it. Basically the people who wrote this, they popped the workstation of an APT operator. Now they've done a write up here and it's, it's fairly brutal. So there's, I mean they've linked off to like an image of the guy's computer basically. Right. So it's, and they've got like, here's his domain name password. Like it still hasn't changed since last time we checked. Good luck to him. And it goes through and sort of looks at all of the targets in South Korea and Taiwan that this actor is hitting and makes a case that this person is a North Korean. Yeah, a North Korean threat actor working for the North Korean government. Although I gotta say I am not an expert in threat intel. But as I'm reading through this, it seemed a little thin, right? Like this, this sort of attribution seemed a little thin. So I reached out to two separate contacts in the threat intel world, one of whom does a lot of work on North Korea stuff, and they both said this is not a North Korean threat actor. Well, one of them said it's not a North Korean threat actor. They're Chinese. And indeed in this write up they say, they point out that this person seems to be a native Chinese speaker with pretty poor Korean. And there's a, there's a few, you know, they're using Chinese tools and things like that. So there's a few things in there. So the contact I spoke to who does a lot of North Korea stuff says no. And then I reached out to Intel 471 who provided me with some of their internal analysis that said it is more likely that this person is a Chinese threat actor. So I think they might have actually got it wrong. Nonetheless, still a very fun read.
B
It's still a very fun read. Yeah. And like as you said, the level of brutality is pretty great. Like they popped, it feels like they popped the Linux virtual machine that this person was using inside a VMware or whatever. They use them for virtualization on their Windows desktop, but they had their Windows C drive mounted inside the vm. So trivial to then kind of move out into the Windows VM and help themselves. It's quite funny. They also popped the virtual private server that they were using for infrastructure, but it has a bunch of interesting details of technical governance and stuff. And of course, as you said, they didn't redact any credentials or whatever. They're all live. They told some of the South Korean victims that they were going to publish these details but the, you know, the APT actor did not get a heads up. So yeah, a little bit of a scramble there to probably clean up the mess.
A
I mean, you know, just incredible. Opsec from our Chinese friends there, just amazing. Oh, there's a company called Spur which as best I can tell they like identified residential proxy networks and whatnot and they were able to use some information out of this dump to uncover a Chinese proxy and VPN service that was used in an APT campaign. So I've linked through to that blog post as well if people want to check it out. Yeah, fun times. Yeah, good hacking from whoever wrote that frack Ezine article, but I think they probably got the attribution wrong. Now we've got an awesome post. My favorite story of the week. This is some research out of Trail of Bits and it is my favorite type of prompt injection ever. Adam, walk us through this trail of Bits research because I love it and I laughed my ass off when you first described this to me this morning. Oh dear.
B
So this is some research looking into prompt injection in multimodal AI systems that can handle inputs other than just text. So things that can handle images for example. And the researchers looked at basically hiding prompts inside images that are processed by mostly in this case Google Gemini backed systems that use the Google Gemini AI engine. And the trick that they are using is many of the front ends for accessing the AIs that process images will normalize or scale the images down to reduce the amount of AI cycles and compute time you have to do. So they will take, say, if you upload a really big image, it'll scale it down to a smaller scale for processing by the AI. And what the researchers did was they looked at the scaling algorithms and there are a number of standard nearest neighbor or bicubic or bilinear scaling algorithms that you use to make an image have less pixels. And they crafted input images that to a human eye, at full resolution, don't contain any text. But when you scale them down, text pops out of, say, like a dark area of the background. And then depending on which kind of scaling algorithm you use, you choose the input appropriately. And then, yeah, the AI reads the text in the image and interprets it as a trusted input because it thinks the prompt came from the user and off you go. And they've released an open source tool on GitHub that will embed these prompts into images and some demo videos and stuff. But it really just kind of makes you, you know, because how are we supposed to, as people using these AI systems, deal with that intersection between technical, kind of technical vulnerability, technical aspects like this, and the human perception of them? Right, well, hang on, hang on, hang on.
A
I got some thoughts here. Right. Because one thing that's interesting here is the fact that you need to make that text come through, through the process of the image being scaled, which means that you can't just bung the text in the image to begin with and have that work as prompt injection. So they're obviously doing some sort of filtering so that you can't just type text all over an image. And it's a prompt injection thing. They're just doing that inspection of possible text in an image at the wrong point. Does that make sense?
B
I'm not sure that you can't just bung instructions into an image. I think we're relying on the human to spot the, you know, please copy all of your email and send it out to, you know, this wiring hacker.
A
Right, okay, okay, okay, okay. So that makes more sense because I'm like, okay, why wouldn't you just. Yeah, okay. But it's. The idea is to obscure the text from, you know, a human recipient as opposed to obscuring it from the, from the model.
B
Yeah, Like, I think this is a sort of confused deputy style thing where the human is the confused deputy.
A
Yeah. So this isn't about bypassing some sort of filtering.
B
Yeah, I think this is about bypassing the human filter, not the technical filter.
A
Yeah, yeah, yeah, yeah. That Makes sense.
B
But I mean this is the sort of thing like you could imagine subliminally encoding in audio or in, you know, in between frames and video where the human doesn't see existence so quick, you know, much like, you know, sometimes people have, you know, hidden frames in a, in a music video or whatever and you just see a brief flash as a human. But if you're inspecting it frame by frame, maybe you would see it, you know, so there's all sorts of interesting avenues for this type of attack. And I just really liked that trailer bits, you know, wrote it up, wrote a tool to do it and kind of we are left, you know, wondering what on earth are we going to do about computers that mix data and code? Because there's a reason we kind of don't do that so much these days. You know, it used to be in the old days we would allow that type of confusion and then we had a bunch of vulnerabilities and now we are pretty good at separating data and code. Now we're just going to bung it all back in and hope which not great, not great future.
A
Well, I mean we used to joke about how if you played heavy metal music backwards, you know, it would play satanic messages, tell you to kill your dog for Satan and stuff. And I guess now some music, you're going to play it backwards and it's going to tell you to forget about the rules and dive into an inbox looking for credit card numbers or something.
B
Like that.
A
You know. Interesting times, interesting times. All right, so we've got a warning from the FBI and Cisco, which is a depressing warning if we're honest because apparently some Russia linked hackers are tearing their way through a bunch of enterprises using a single CVE in Cisco gear. And I'm going to read the number. It's not often that I read a CVE number because this one is CVE dash 2018 0171. So the, the 2018 stands for 2018 Adam. So what are we even doing? Right, so the FBI is warning about these Cisco bugs being exploited. I think they're in switches, like end of life switches, but it's a, you know, it's a seven year old bug. My God, why do we even turn up?
B
Well, exactly right. And you know, the fact that the FBI has to put out a warning that says, hey, how about you patch your stuff like, and as you say it is the year 2025 A.D. we've been saying that for quite a long time. I mean probably it's not going to help the people who have not patched these devices because they probably know they're supposed to and they're just not gonna. But yeah, these devices are being turned into I think like orb or relay networks by the fsb. So yeah, nice.
A
Now you remember a while ago we spoke about how some of the big orgs like Microsoft, CrowdStrike and whatever we're going to agree on naming conventions for threat actors. Clearly hasn't happened yet because let me read you the headline and deck from this story we're going to talk about. CrowdStrike warns of uptick in silk typhoon attacks this summer. The China affiliated espionage group which CrowdStrike tracks as murky Panda has been linked to blah blah blah blah blah. So yes, clearly that hasn't happened yet. We're going off a a story written by Matt Capcoe over at cyberscoop here, but it looks like this group is seems to be in love with Citrix exploits as well. But like, why don't you walk us through what exactly CrowdStrike's talking about here.
B
Yeah, so there's two aspects to this that are interesting. The first one is this is a group that has done like cloud hopper style attacks where you compromise a service provider that maintains people's clouds and then go down into their customers. So they are specifically targeting cloud providers or cloud what solutions providers. We say like people who help people use clouds and then go down, you can go downstream into the customers and that's a thing that, you know, a works well and B gets you a lot of access. And then the time with Citrix is this is also a group that we have seen using Citrix zero day to get access. And the fact that we have a Citrix bug this week. Yay. More zero day in Citrix netscalers. These things kind of combine together where this group is very active, they're hitting targets that are quite high value. They are leveraging them by going downstream into their customers more than average. And hey, this fresh bug's great. What a great combo.
A
Fun times indeed. Yeah. So the new pre auth RCE is CVE 2025777 5. We've linked through to a bunch of resources on that one. Now this guy has been charged in Oregon. Federal prosecutors on Tuesday charged an Oregon man for allegedly running a global botnet for hire operation called Rapper Bot. I mean this is the typical sort of own, you know, Mirai style botnet for hire. We've actually got a bunch of news around these like residential proxy networks and you know, orb botnets this week, but walk us through this one.
B
Yeah, so this particular DDoS for hire botnet was something like, you know, 70 to like 95,000 compromised devices was being, you know, offered for sale for people who wanted to do it. This was capable of delivering, you know, multi terabit DDoS attacks which, and I think the biggest one that we've ever seen was 6 terabit overall. And this one, this botnet was capable of doing between 2 and 3 terabits. So like that's a pretty significant player. This guy Ethan Fault is 22 years old that ran this botnet and yeah, he is, I guess assuming that he is found guilty is probably going to end up going to jail for run for running this botnet.
A
Yeah. Now we've also got a Chinese hacker getting arrested for targeting a bunch of South Korean celebrities and, and taking off with like nearly US$30 million, including the singer in the Korean supergroup BTS. So that, that's just a fun one that I had to include this week because I hope that the Koreans will eventually make a TV show out of this.
B
I mean it would make quite good tv. I mean there's sort of, there's some good movie hacking bits where this, this guy or the group that he was part of broke into South Korean telcos and then used that access to, then set up accounts and kind of gain access to, you know, share brokerage and other kind of, you know, financial accounts that belong to their ultimate victims. So yeah, going through the telco first to get there, like that's, that's some good hack and that's kind of like you can imagine a heist, you know, a heist movie or a heist TV show about that kind of thing. So yeah, Koreans do a good job of that.
A
So yeah, they do, they do. It's high stakes hacking. It's celebrities, it's tens of millions of dollars, it's, you know, fleeing to Thailand. It's got all of the recipes there.
B
Absolutely.
A
Now look, speaking of law and order, as we are the one of the scattered spider kids, this guy got arrested, you know, quite a while ago. He pleaded guilty in April. Noah Michael Urban. Who, what was his handle?
B
Soza or King Bob?
A
Yeah, yeah, so he is, he got more jail time than the prosecutors were asking for. This is a guy, he was charged with doing sim swaps to steal something like 800k in crypto. But you read Krebs article and it feels like he was maybe involved in some of this casino stuff as well. And yeah, prosecutors were asking for eight years. He got 10. And one of the reasons the judge might have been a little bit shirty, shall we say, about all of this, is because some other com kid, like, hacked the judge's email account during the trial, which I don't think did this guy any favors. Funnily enough, he still seems to have access to his, to his X or Twitter account while he's detained because he has been posting F bombs upon his sentencing. Adam.
B
Yes. Yeah, he certainly has.
A
I don't think he was quite expecting to get more than the prosecution was asking for. Yeah, I'm, I feel okay laughing about this. I mean, it's, you know, these guys have just done so much damage and you feel like this might even send a message.
B
Yeah, well, maybe. I mean, I don't know. Kids are not super great at receiving messages. No, I did laugh, like, because, like the, the thing about the judge getting hacked, like, you really do feel like with friends like this, you know, who needs, who needs enemies? This guy was also behind a bunch of the campaigns that were breaking into, like recording artists. So like rap stars and whatever else breaking into their, you know, accounts and stealing pre release music. Like that was his jam early on. Or maybe that was the thing that kind of got him into hack and was stealing unreleased rap music through sim swapping and whatever else. And it kind of shows you that, you know, the slippery slope from doing a little bit of crime and then all of a sudden now you're in, you know, 10 years in federal jail. So sucks to be him.
A
Yeah. And federal time is like you do all the time as well. Right. So that ain't, you know, there's no, there's no getting out early because they're crowded.
B
Yeah, I don't think he's going to have Twitter access where he's going.
A
No, I think not. Now we got another one here from John Greg. I just included this one because it's funny. This guy's a Chinese bloke who was working for a company in Ohio and he had rigged things up such that if someone vaped his account out of the company directory, it would lock everybody out. So he set like this kill switch for his network and yeah, it tripped when they suspended him and caused all sorts of drama and. Yeah, four years.
B
Yeah. And that, that doesn't seem unreasonable to me. It's like leaving, you know, leaving logic bombs lying around your employer's network. Probably not a thing that you want to, you know, give a light punishment to. But. Yeah, this was, I think, back in 2019 maybe. So it's taken a few years to grind through, through the justice system, but hey, that's. He didn't get 10 years in jail, which seems more fair.
A
Yeah. Meanwhile, Nevada is having a real bad time. The state government there is suffering from some sort of cyber incident. We've got report here from Reuters. I actually tried to hit the nv.gov like Nevada state website, it is still down. So they are, you know, I don't think this one's fully picked up steam yet in terms of press coverage, but it looks pretty bad because this has been going on a couple of days now.
B
Yeah, I think when we were editing Catalan's newsletter this morning, he had a report that said like in person, like physical offices of the, the state government of Nevada are closed at the moment because the employees who work there also can't use their online systems. So it sounds like things are pretty bad. You know, if website's down, mail systems down, offices are closed. You know, that's, Nevada is a pretty big place. And of course, you know, we just had defcon and Black Hat in Nevada and now, you know, the state government of Nevada is hacked into the ground. So yeah, it's, I don't know, I think it will definitely pick up some more traction in the mainstream press if the level of disruption continues.
A
It's so hard to know where we actually are with ransomware in terms of whether or not all of this law enforcement and you know, sort of siginty style disruption has actually made much of a difference. You know, it's an impossible question to answer because even, even if you collect the stats, how do you measure impact? Right. Is that by money? I mean, it doesn't really apply in the case of a state government office being closed, does it? Like how do you measure impact? So that's one thing and the other thing, it's impossible to know how much worse it might be if those actions hadn't been taken as well. So you know, people sometimes ask me, well what should we do? What else should we be doing here? And it's like, I don't know, it's, it's, it's a really tricky problem. But the point is you got to keep doing everything, you know, you can't just, you know, this sort of suppression of ransomware with takedowns with, you know, intelligence agencies working on it as well. It's like mowing a lawn. You know, it's not a one time thing. You don't do it once and you know.
B
Yeah, yeah. And I mean all of the other bits and pieces we've done to the ecosystem with, you know, making money laundering more difficult, making cryptocurrency a bit more kind of transparent, being able to block cryptocurrency transactions because the exchanges are being forced to cooperate a little bit. You know, there's lots of little things and presumably they are making some difference. But at the same time, you know, when you see an outage like this, it's hard to argue that it's working super well.
A
I think in one of the stories that we talked about today, there was someone who wound up being in trouble because they were taking crypto and exchanging it for gold bars or something. Like the whole ecosystem around this stuff is just, is just amazing. Yeah, bitcoin to gold bars, that's a thing. Now we're going to talk about a piece by Brian Krebs about residential proxy networks. This is a really interesting feature, actually. So a little while ago there was this screen cap from Reddit that went around and I remember seeing it at the time when it was all over social media, which is this guy posted, I've been getting paid 250amonth by a residential IP network provider named DSL Route to host devices in my home. They're on a separate network than what we use for personal use, blah, blah, blah, blah. You know, is this stupid for me to do? They just sit there and I get paid for it. The company pays the Internet bill too. And then later edit. Thanks for the info. This was something I started doing as a naive 18 year old a few years ago to help pay for my college. I'll be getting rid of everything, lesson learned, blah, blah, blah, blah. And then so what Brian's done here is he's really written up like a lot about DSL root, which is the company that was paying this guy. Very shady, like very, very shady proxy network that's being used for very bad stuff. But what's really interesting is you read this story and it seems like the sun is setting on these types of residential proxy networks that actually use dedicated hardware. Because what the next generation of these companies are doing is just getting people to basically install malware on their own computers and getting the access that way. So I just found this, top to bottom, a fascinating read.
B
You missed the one extra fun bit though, that the guy that posted this on Reddit, I think in the rest of his Reddit posting history it became clear that like he works in the Air National Guard and has a TS clearance. Yeah, I mean, admittedly it sounds like he did this before he got his clearance, but either way, like you gotta wonder, like, surely at some point during the clearance process you would have thought these guys that are running computers at my house that are paying me, do you think? Nah.
A
Well, this is why you're not allowed to take top secret material home. Exactly. Right?
B
Yeah, exactly, dear. But no, Brian's write up that does kind of make it seem like this DSL root, I think it's been around for a while. It feels like the sun is setting a bit on its business model of having to actually pay people to do this kind of stuff. And yeah, just with dedicated hardware at least and just deploying malware. We've seen lots of residential proxy botnets built out of compromised machines where people download pirated software from torrents or whatever and it drops a proxy on you. But yeah, also just paying people to run malware because it's easier than you know, getting past Defender or whatever else. That's absolutely a viable business model as well.
A
Well, I mean, you know, you just see this guy, they've quoted, Brian's, quoted this guy as saying these days it's become almost the guy who runs this, this DSL root thing. Saying these days it's become almost impossible to compete in this niche as everyone is selling residential proxies and many companies want you to install a piece of software on your phone or desktop so they can resell your residential IPs on a much larger scale, so called legal botnets as we see them. So it's almost funny that this guy's like, wow, what we're doing is, you know, so much cleaner than, than that, you know what I mean? He's like, sees these new upstarts is just doing it in a dirty and wrong way. So yeah, that's interesting.
B
Exactly. One, one tiny bit of krebsing delight in this story is he doxes the guy behind DSL root, including his addresses in Moscow and I think Minsk and Belarus based on leaked data from Russian food delivery services. Like he orders quite a lot of pizza to his house from Papa John's too.
A
We even know where he gets his pizza.
B
Yeah, exactly. So yeah, Brian's got his like home address and like figured out how often he orders pizza and all that kind of thing. So that's, yeah, we, we talk a bunch about how like data breaches in Russia end up being used by open source intel people to do this kind of thing. So it's really nice to see a great example of that.
A
Yeah, I mean it's such a leaky environment. I mean you think America's Bad. And then you see the stuff that you can just torrent in Russia, which is like passport logs and you know, like which passport number crossed which border when. Like that's, that's how bellingcat were able to figure out a whole bunch of stuff around. The Sergei Skripal assassination thing was just stuff lying around in torrents. And like who torrents this stuff? Like, you know, that's the part that just boggles my mind is that people actually package this up and make it available. I'm like, well, but often it's free, often it's around, maybe it's free.
B
Now imagine when you want to get fresh access, like if you want the latest border crossings or pizza orders then probably you have to pay. But you know, six month old pizza orders, you know, who's going to pay for that? So I might as well give it away for free. It's a lost leader, brings in the customers, smart businessing path.
A
You really, you really wonder if the Russia desk at like GCHQ or at NSA actually just has bitcoin set aside to buy this sort of stuff because often it's going to be easier.
B
Why wouldn't you?
A
Why wouldn't you? Exactly. So we're going to finish up this week just with a few stories about Max messenger, which is Russia's answer to WeChat. We've spoken about it a bunch in the last month or so. Dorina Antonio over at the Record has a story about how, you know, Russia is weighing a ban on Google Meet and they've been doing the same sort of thing to Google Meet that they've been doing to WhatsApp, which is just degrading it, making it sort of crap to use. You sort of never know when it's going to work or it's not. Again, this is just a way to funnel people into Macs. We got a story from Thomas Brewster also at Forbes where he's written up, you know, some security researchers contacted him and said they through Max into Corellium to do some analysis on it and found that it was like, you know, a security horror show. It tracks your location always. It doesn't encrypt data. I think it means it doesn't encrypt data like stored data. But that part's not clear in this write up. But you know, it looks like a giant pile of you know what, I don't think we should be terribly surprised there. But did you also find this story a little bit confusing?
B
It's certainly a little bit thin. So Forbes source, you know describes their research, but it wants to remain anonymous because they're worried about retaliation from Russia. But it's very thin. There's no link to the research or more details, so there's not much there. Thomas Brewster did say that he ran the contents of this research past Patrick Wardle, who was a guy that knows a bunch about mobile devices and that kind of thing, and said that he's an Apple guy.
A
Like he's all things Apple Waddle.
B
Yes. Yeah. So they. But yeah, he knows things about mobile apps, I guess, is what. Is what I mean. And he apparently kind of, you know, confirmed the findings. So we don't have much detail and of course, you know, we want more detail because we like detail around here. But I don't think anyone be super surprised that if you're going to make a messenger that everyone in the country has to use, then, you know, as much data as you can get out of it is probably going to be useful. As to how, you know, whether it's really doing precise location tracking in real time, like it says, I don't know. But hey, I mean, if you were an oppressive government, that would be a pretty good thing to have.
A
Yes. So we're going to finish up with a funny story here. And it was sent to me by a Ukrainian listener who I've been in touch with for many years who said, look, I'll just read you the translation of the story. A representative of the Russian Orthodox Church called on Russians to pray for the National Messenger Max. I mean, you know, we often hear that the Russian Orthodox Church is sort of a little bit too close to the state in Russia, which is one of the reasons it got kicked out of Ukraine. But I mean, this, I mean, this may support that idea.
B
Maybe just a little bit. And, you know, I guess, you know, blessing technology is not that unusual. Catalan was informing me this morning when we were talking about it in Slack that, you know, this is a thing that happens sometimes in Eastern Europe. You do want to prey on the computers and, and the software and so on. But on this, in this radio interview, the monk said one should pray for Max messenger because of a person's desire to use earthly goods to achieve useful results.
A
There you go.
B
The Kremlin wants useful results. So, yes, pray.
A
Yeah. There's also a little interesting detail in this write up that said that the head of the State Duma Committee on Information Policy, Information Technology and Communications, Sergey Boyarsky, previously stated that Russia may begin checking citizens for unjustified criticism of the National Messenger Max. So things are Going great in Russia.
B
When you and I are not going to Russia in the near future anyway. So we can.
A
No, no. When they release their version of WeChat and then outlaw saying that it sucks. Right? Yeah. Fantastic. All right, mate, we're going to wrap it up there. That's it for this week's news section. Thank you so much for joining me and we'll do it all again next week.
B
Yeah, thanks much, Pat. I will see you then.
A
That was Adam Boileau there with a check of the week's security news. Big thanks to him for that. It is time for this week's sponsor review now and we're chatting with Greg Bell who heads strategy over at corelight. Corelight makes a network sensor which, you know, you put it on your network and it collects a whole bunch of metadata and you know, security related information. You can do with that what you will. They also have a cloud based NDR product that uses this sensor. They have commercial versions of the sensor because it is an open source thing. Zeke. They have commercial versions that can just handle like mind boggling amounts of collection. That's one thing that they specialize in. But now they're doing a bit of a AI push initially. We actually spoke with, with their CEO a while ago now about some of the early stuff they were doing with Gen AI in terms of getting it to explain various alerts and whatnot. You know, very baby steps into AI sort of stuff. Now they've come out with a big push, a big release involving a model context protocol server and a whole bunch of other stuff that sounds, you know, it's funny, right? Because more and more when you're hearing about people adding these sort of AI things to their, to their products, they sound actually really sensible. So here is Greg Bell explaining all of that. Enjoy.
C
The models have just natively understand our data. We're an open source company and so every foundation model, the big ones that are being integrated into products, have been trained on decades of our content on the logs, the documentation mailing list archives and Q and A Reddit conversations. And so we think we have a pretty unique ability to harness that pre existing capability in the models and to deliver it to customers in a way that's sensible. We are definitely not making outrageous claims that we're going to own the SoC or that everyone will converge on our platform. But I think the ability to combine great data with pretty thoughtful UX and thoughtful integration of the data is going to be impactful. And we've done, as you mentioned, we've delivered a gentic triage. We've announced that last year we were actually the first company to announce any genai integration in our category just a couple of years ago. There's a lot of workflow automation improvements coming over the next say six to 12 months and generally focused on removing drudgery, providing just in time context, highlighting what's really important to investigators in the heat of an investigation. So just making things go faster, taking away repetitive kind of boring work. But what we just announced is a moment in that evolution. We certainly want to participate in the emerging AI SoC ecosystem. So the black hat, the recent announcement is about an MCP server that we've developed along and this is pretty important with Playbooks and prompt books that make it a lot more useful. MCP by itself is just a protocol. It's kind of glued between the models and juicy sources of data that they might use for our benefit. But they really need guidance on how to use that data. So we've packed a lot, I would say hundreds of engineer hours worth of hard won working knowledge into Playbooks and prompt books that help guide the model to do the right thing and to work surprisingly independently when given high level tasks to perform. So that's what we're doing. And before I stop talking, one more point. We're not trying to build a little straw in front of our own platform with this NPC MCP server. We're just presuming our customers already put their data in a SIM or a data lake in our case, we're integrating with Splunk initially and we're bringing all this capability to where the customers already keep their data. We don't know if that will be the right design pattern in the future, what the future will bring. But for now it seems to resonate with our customers. So that's the announcement in a nutshell, a big nutshell.
A
Yeah, I mean, I think I've said it on the show previously where we're at the point where any vendor that is, that generates data like this, like alert data, that is not adding gen like triage and analytics capabilities to their products, they're going to get left behind. I mean it is as simple as that is. It's like it almost doesn't matter why you're doing it, you just, you just have to do it right.
C
And I think consumer apps are teaching people to expect that. Right. And it's not just cursor, it's the apps, it's confluence, it's the apps we use internally for doing surveys within the company. We're just used To I would say thoughtfully augmented data. And we're doing the same thing. I think we're doing it from the perspective of open source company committed to an open vision of ndr, open interfaces and interoperability. So that gives us a little bit of differentiation and we're trying to go as fast as we can and learn with design partners.
A
So what's the idea here? Right, so you've got this new model context protocol server and you've got a whole bunch of prompt books and investigation prompt books and whatnot. What's been the emphasis in terms you talked before about being able to abstract away repetitive dull tasks? What sort of repetitive dull tasks are we talking about? What is it that a corelight user is going to actually use this stuff to do?
C
Yeah, a good example would be to think of a multi part investigation that requires a certain methodology to it, like investigating an alert, investigating and digging up context around an alert, but something that is fairly repetitive and that can be done with a model that's sufficiently powerful and that has enough context to work with. And the user experience. It's a little bit like if you've used Gemini Deep Research or you've used GPT5, you've given it a really sort of a significant problem that requires it to analyze the intent, to break it down into a series of steps that it displays transparently to tell you that it's working and to come back with both the conclusion and the underlying data that justifies it. That's what the experience is like. And frankly it's pretty amazing that this experience is available to analysts today. I mean we couldn't have imagined that being possible a couple years ago. I couldn't have.
A
Well, there's a lot of, just like with these investigations, it's always click pivot, look this up over here, come back, plug that in, click pivot. You know, it is, it is fairly dull stuff and there's no reason you can't get a model to do it.
C
Right, right. That's exactly right. What's surprising is how given the prompt books and the playbooks and the intrinsic power of the model, just how good the experience can be. And I think we're just getting started. Eventually we'll put these same kinds of capabilities into our SaaS product. We're starting now with the customers that have already that tend to have large socks with data scientists. They have standardized on the data lake and they want help now with an MCP like solution. But eventually we'll have agents that run in pretty short order in our SaaS. Offering that are just doing this stuff behind the scenes sometimes while we sleep, and allowing analysts to focus on what is most urgently important.
A
Yeah. And I guess the point of having a. So I work with a company called Dropzone. Right. Which does a lot of tier one, tier one alert investigations in the SoC. I'd imagine though, that like this wouldn't even necessarily compete here. We sort of are heading to that future of the agents all just sort of talking to each other and figuring it out amongst themselves.
C
Right, Right. It's amazing that as we're quipping in the company, that English is the new JSON and that a lot of this interaction will be over the A2A protocol, a different protocol. And we're working on those integrations with a couple of partners and they are effectively asking us in English, what do you know about this host? What more can you tell me? Is there anything alarming about other devices that this host spoke to in the last month? And we can answer those questions pretty crisply and accurately and I'm sure, I have a suspicion we could work with dropzone efficiently as well. So that's on my to do list, is to reach out to that team too.
A
Yeah, I mean, it is a fascinating thing. I mean, what I'm more curious about than what it can do. And I guess for click pivot. Click pivot, it can do that. What I'm more curious about with this stuff is what it can't do. Where does it fall over? What was something ambitious you reached for and you couldn't get there? Because that's a conversation that people aren't having at the moment. And I think they should.
C
I think we found without the prompt books and playbooks, without all the what's called context engineering and this enormous effort that goes into the trial and error and it's becoming more scientific. But for now, a lot of the open source developments in this space are around frameworks that allow the automation of that process. Right. So we were surprised by how much better a result you get through supplementing the raw power of the model with all that context, which is effectively just a different form of distilled human experience. But without that, you'll get more hallucinations, you'll get illogic, you'll get limitations, you won't get what you ask for. You really have to apply QA and you have to automate it to get good results. And of course you need great input data. Without that, you won't have anything. And that's our sweet spot as a company, is just fantastic input. Data?
A
Well, I mean, there's a question, right? Like through this whole process, did you realize, hey, there's a type of data we're not collecting here that would be very useful to the model? And did you then write new collections, I guess, to bring in that data to provide the model with some more context? Did it actually change the way your core product operates?
C
Yeah, it hasn't yet, but we're actively exploring that question because our data is effectively programmable and so we're often adding fields and adding new parsers in response to customer or community requests. We haven't yet done that in response to a model's request, but I anticipate it happening. The other thing I'm always trying to ask our team to answer is what questions can we uniquely answer because of the data we have access to? And I'm exploring whether we can have agents help us answer that question. So the deeper you get into this stuff, if you learn LangChain, if you start coding with cursor, the more you begin to bring AI into the workflows that are involved in developing and deploying AI, which is fascinating.
A
So how much of this is going to be Zeek versus like corelight Enterprise Y stuff? Because for those who aren't familiar, Zeek is the network sensor that corelight maintains. It is open source and free. And of course corelight has traditionally made its money by selling modified, I guess, Zeek sensors that can handle insane amounts of traffic for mega corporations. They've also got like a commercial, you know, cloud, cloud SaaS NDR version of it, which I believe there's enterprise licensing around that. But with something like this, I'm guessing this is pretty strictly in the enterprise line.
C
Yeah, we had a discussion of that this morning. We're an open core company and what you, your description is accurate and I would only add that we do a lot with detection now. So that's been a big part of our story over the last few years, using supervised and unsupervised ML OG AI, you know, and lots of other techniques.
A
Old timey AI, right? Yeah, yeah, exactly.
C
Old fashioned AI still incredibly effective for certain classes of computational problem, but we use lots of other techniques besides ML to do detection. I just want to make sure that that point was made. We have a pretty structured process for deciding what to open source and what to keep commercial. And, and we'll go through that process when making this decision. We really have a bias towards open sourcing, but this is also fairly distant from the Zeek project itself. So we'll need to get input, talk with our community team and open source team and talk with our product team before we make that decision. What we're doing now is just getting it out so that design partners, and we've already got four or five of them signed up and I think we'll have more soon have the chance to play with it and give us feedback because we want to learn together with them.
A
So just as we're moving towards wrapping this up, the question is, are you an AI optimist who thinks that these models are going to progress to PhD level smarts, or do you think they are just basic probabilistic models that are never actually going to be that smart? How far do you think this is going to go? It's a question I like to ask people who are working in developing this sort of stuff.
C
Yeah, I think I'm a moderate booster in those terms. Like I think the models will get better than they are now and I'm not sure I need to have an opinion really about AGI or about the hype. I'd say as a company we kind of want to be outside that binary even. And we just, we're a company about data and it doesn't matter what our faith in AI is, our belief or non belief. What matters is whether there's demonstrated impact and we're finding repeatedly there's demonstrated impact and we'll continue to follow that. If the models get better, that's great. If they sort of plateau in the next year or two and all the attention goes into context engineering, we have a lot of work to do to integrate the capability that currently delivered or delivered over the next year for our customers benefit and we'll do that.
A
All right, we're going to wrap it up there. But just a parting anecdote which is I recently had, just as it relates to AGI, I recently had a bit of an issue with an electric vehicle that the family owns and I punched a few words into Google to sort of see if I could find forum posts about it or whatever. And the Gemini suggested suggested text at the top told me to check the fuel system on our electric car. So I'm not, I'm not a huge believer just yet in the whole AGI concept, but as you point out, a lot of this stuff is becoming very useful already. Greg Bell, thanks a lot for your time. Appreciate it. Great chatting.
C
Take care.
A
That was Greg Bell from Corelight there. Big thanks to them for that and huge thanks to Corelight for being a sponsor now for many, many years. You know I'm a big believer in Corelight's, you know, Zeek technology. It's the industry standard for network data collection. So, yeah, Nice one, corelight. That is it for this week's show. I do hope you enjoyed it. I'll be back next week with more security news and analysis, but until then, I've been Patrick Gray.
B
Thanks for listening, Sam.
Risky Business #804 — Phrack’s DPRK Hacker is Probably a Chinese APT Guy
August 27, 2025
Host: Patrick Gray
Co-host: Adam Boileau
This episode covers a wild week in global information security, blending geopolitics, hands-on hacking, and the growing synergy between crime, nation-state operations, and technological advances. Main stories include revelations about Iran’s efforts to orchestrate anti-Semitic attacks in Australia, ongoing cyber sabotage in Iranian shipping, the dark side of Microsoft’s China-related business dealings, a high-profile hacking attribution misstep involving FRACK magazine and supposed North Korean APTs, mind-blowing AI prompt injection research, persistent threat actor naming chaos, major criminal cases, ransomware’s unrelenting impact, evolving residential proxy networks, and the state-driven Russian Max Messenger app.
This episode encapsulates the intersection of geopolitics, persistent technical threats, and tech industry growing pains around AI and security—seasoned with recurring themes of opsec failures, compromised infrastructures, and the never-ending race between attackers, defenders, and regulators. The show’s blend of serious reporting, skepticism, and security community in-jokes is on full display.
Patrick and Adam’s advice, as always: patch your stuff, embrace AI carefully, expect hackers (and the state) to be one step ahead, and never underestimate the weirdness of the internet.