
B
And welcome to Risky Business. My name is Patrick Gray. We have an absolutely insanely jam-packed show for you this week. We've got a whole bunch of news to get through in just a moment with Adam Boileau and Mr. James Wilson. And then later on in this week's show, we'll be hearing from this week's sponsor. And this week's show is brought to you by Island. And Island, if you are unaware, is an enterprise browser built really with the sort of features you'd imagine an enterprise browser should have. So it's heavy on DLP. You can do things like per-domain file restrictions and whatever. You get incredible visibility into what your workforce is doing. There's a lot you can do with it. You can do pretty secure app delivery, web app delivery. So, yeah, there's a lot you can do with it. But Braden Rogers of Island joins us this week to talk about how their customers are using it to restrict the unsanctioned use of AI, you know, personal AI accounts and stuff. This is a problem for a lot of enterprises. So, you know, with Island, you can see when someone is using a personal account with, like, ChatGPT instead of the corporate account, for example, because you have that visibility in the browser. So he'll be along to chat about that a little bit later on. But let's get into the news now. And goodness, there is a crew called Team PCP that we've just watched over the week, gradually causing more and more havoc. Adam, let's start with you on this one. Tell us about Team PCP and what they've been up to.
C
Yeah, so this group has been out compromising a bunch of, like, you know, supply chain stuff on GitHub. There was an attack on a security scanner called Trivy last week sometime, and then a subsequent one on a security scanner from Checkmarx, both of whom had their GitHubs compromised. But once the attackers were in, they were dropping, you know, credential-stealing malware. And then there's sort of been this process of watching them evolving their tooling kind of in real time as they've been, you know, deploying it out to people. And it kind of feels like they're using a bunch of AI to build it, like building the plane as they're flying it. But they seem to be doing it at kind of a scale and a speed that, whilst blunt, is pretty effective. And then alongside the credential-stealing parts of it, we saw them drop a wiper that targets specifically Iranian machines as well. So that was, you know, another kind of aspect of this story that's been pretty wild.
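The supply chain pattern described here, covered in more detail later in the episode, boils down to workflows pulling in third-party GitHub Actions by mutable tag. A hypothetical sketch of the exposure (action and org names are made up, not from the actual incidents):

```yaml
# Hypothetical workflow snippet illustrating the supply-chain exposure
# discussed in this episode. Names are invented for illustration.
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      # Risky: "@v1" is a mutable tag. If the action's repo is compromised,
      # the attacker can move the tag to a commit containing a credential
      # stealer, and every workflow referencing it picks that up on its
      # next run.
      - uses: someorg/security-scan-action@v1

      # Safer: pinning to a full commit SHA keeps the workflow running the
      # exact code that was reviewed, even if the repo is later hijacked.
      - uses: someorg/security-scan-action@5f3e1a2b9c8d7e6f5a4b3c2d1e0f9a8b7c6d5e4f
```

Pinning doesn't stop an attacker who has push access from shipping new malicious commits, but it does stop existing workflows from silently picking them up.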
B
We're going to do some cyber war as well. Why not? You know, why not?
C
Why not throw that in?
D
Yeah, the wiper for Iranian machines seems to be just a bit of a weekend project of, hey, we've got all these creds, why don't we see if we can write a little script that's going to mess with the Iranians?
B
Like, it's like a side quest.
D
It totally was. And, like, I'd have a hard time calling it a wiper, because it was just literally a script that said: if in this time zone, or if running Farsi, rm -rf. Nothing special to it. So what are they doing? Look, they've heavily focused on supply chain. That's their bread and butter. That's how they get the credentials, and then what they do with them seems to be at the whim of whatever takes their interest on any given day. The thing that happened today, or at least in the last sort of 48 hours, is, as Adam mentioned, they went after Trivy recently, then they pivoted and went after Checkmarx. Not the main Checkmarx product, but one of their products, I think it's called KICS, which is essentially their infrastructure-as-code scanner, which is not as prevalent as their main code-scanning SAST sort of thing, but nevertheless it's the same pattern. Compromise an open source repo that's producing GitHub Actions, put some malicious code in that GitHub Action. GitHub Actions is such a disaster in terms of security. Millions and millions of workflows just say, grab this image and use it. And so they pick it up, the credential stealer runs, and off they go. The fascinating thing they did today was they took those credentials they'd gotten from the Checkmarx GitHub Action running somewhere and realized they'd gotten enough credentials to compromise LiteLLM, which is a heck of a package to compromise. That's like 90-plus million downloads a month. So it was only live for about an hour. But if you managed to do a pip install, or more to the point, something else you were installing grabbed it as an indirect dependency, all your credentials got snatched. But again, what are they doing with it? We don't know. Sitting on a massive pile of credentials and, you know, wrecking balls still to come.
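The "wiper" logic James describes, a timezone or Farsi-locale check gating an rm -rf, can be sketched in a few lines. This is a reconstruction from the hosts' description only, not the actual Team PCP script, and the destructive step is stubbed out with a return value:

```python
# Sketch of the locale/timezone gate described above. Reconstructed from
# the conversation, not the actual script; function names are invented.
import os


def looks_iranian(environ=None) -> bool:
    """Return True if the machine 'looks Iranian': a Tehran timezone
    setting or a Farsi (fa_IR) locale in the environment."""
    env = environ if environ is not None else os.environ
    if "Asia/Tehran" in env.get("TZ", ""):
        return True
    locale_vars = (env.get("LANG", ""), env.get("LC_ALL", ""))
    return any("fa_IR" in v for v in locale_vars)


def wiper(environ=None) -> str:
    # The real script reportedly just ran rm -rf at this point;
    # here we only report the decision.
    return "WIPE" if looks_iranian(environ) else "skip"


print(wiper({"LANG": "fa_IR.UTF-8"}))  # → WIPE
print(wiper({"LANG": "en_US.UTF-8"}))  # → skip
```

As the hosts note, there's nothing sophisticated here: the "targeting" is two environment checks and a destructive one-liner.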
B
Yeah, I mean, it seems they're grabbing crypto wallets where they can and whatever. So, I mean, I think this crew just seems to be going out there collecting as much access as they can. If some crypto wallet keys happen to fall out of whatever thing they've owned, then that's great. But there doesn't seem to be any sort of clear criminal strategy here. It's not like it's a ransomware crew trying to do XYZ.
D
Well, I didn't even mention the wallet stealing because frankly, that seems to be just par for the course for anything.
C
Right.
D
That's like your base-level activity you do now to make sure you can continue your efforts. But what they're doing that's novel on top of that, not clear. The one novel thing on this, though, and we all sort of nerded out about this because we discovered it together, was the use of this thing called the Internet Computer Protocol, which when I looked at it, I was like, how can something be called that in this day and age? And then you go and look at it and you're like, oh my God. Not only is it called the Internet Computer Protocol, it's a blockchain thing that essentially allows, like, bulletproof hosting on the blockchain, as long as you're willing to tip in a little bit of cryptocurrency along the way. See earlier comment about crypto wallets being par for the course. But this is how they ran their C2. As exciting and weird as that is, I also would not be surprised if they said to an LLM, I need a really novel way to host my C2, I don't want to pay for bulletproof hosting, what else you got? And it said, I've heard of this thing, why don't you use that?
B
Right. Like, it is a deep cut that an AI model would absolutely suggest, right?
D
Totally.
B
Yeah, yeah, 100%. Now, speaking of AI models, you know, a big thing that's come up over the last couple of months, and we've talked about it at length on the show, is OpenClaw. And in fact, OpenClaw is now used by you, James, to do various tasks for Risky Business. So, you know, we've got a little lobster on the team. That's fine. You know, we keep it away from anything sensitive, don't worry. But it looks like Anthropic got the message, and now they've released a similar sort of thing for Claude, right? So you can now control your computer with Claude. Unlike OpenClaw, though, this will actually run in the cloud, and they've got like a little agent that runs on your machine to enable it to, you know, move the mouse around, do whatever it needs to do. But I feel like this is going to provide us with a fair bit of content for the next six months. I feel like probably Anthropic is going to do a better job of wrapping some guardrails around this, but I just wanted to get your feelings on this, James, as to how this is going to go.
D
It's such a wild ride. I mean, you know, OpenClaw, yes, I use it, but I keep it on separate VMs, separate machines, all the separation, and I still feel nervous about it. And, you know, there's a couple of different ways to use OpenClaw. Use it on a separate machine: good. Use it on your main machine: not so great. Use it on your main machine and allow it to be remotely controlled through Telegram or other messaging: well, then you're really on the frontier. And then, you know, Anthropic turns around and says, oh, hold my beer, we're going to productize this for the masses. So over the last couple of days they released things like, I think they call it, Dispatches, which is essentially a way to talk back to the Claude instance running on your local machine, which I would find handy.
C
Right.
D
I might be out and about and think, hey, there's that feature I want to get going on, creating this repo, can you get a head start on it? That's fine, I love that. But then they just went to the nth degree and said, actually, you can now chat to your home computer, and that home computer will have full computer control and do things like, hey, I'm going to be late for the meeting, can you find that PDF document that Charlie's waiting for and email it to him? And I'm just like, that's not safe. That's not good.
B
I mean, yeah. So the question is, can this be secured to an acceptable degree? The answer to that is: it has to be. And it's the definition of acceptable that changes, not the actual security threshold of the product. But Adam, I mean, you know, is that the feeling you get, mate, when you look at this? That we're just going to have to tolerate these things occasionally doing unpredictable and damaging things on our endpoints? Or do you think that Anthropic has a reasonable opportunity here to make this thing behave sensibly? What do you think?
C
I'm kind of torn in a way, because on the one hand, this is kind of what we expected Microsoft to do with Copilot, right? To just go buck wild, bolt an LLM into the core OS and just let it go crazy on your stuff. And clearly Microsoft, for all of its faults, is not quite that crazy. And so then somebody else is coming along and doing it as an aftermarket product. There's a degree of, well, at least Microsoft can say, hey, we didn't do it. And then Anthropic will kind of try and do a competent job, because it is their core business in a way that it wouldn't be for Microsoft. But the whole concept is just terrifying. Think back to how we felt when MCP started to become a thing. We were like, oh my God, we're going to let the AI make HTTP requests, how terrifying. And now we're just going to let it pointy-clicky around your desktop, image-recognizing, you know, a VNC stream of your computer. What kind of mad world is this? And yet you're right, they are just going to have to make it good enough, because people are going to use it and it's going to go horribly wrong. We're going to get some great fodder for talking about terrible things, but it's just going to be wild, you know. And there's no way to make it okay, so it's just gonna have to be okay enough, right?
B
Well, that's kind of what I'm getting at. Our definition of what's okay is probably gonna have to change, and sadly, this is just how it is. It's funny, actually, James, you're working on a podcast at the moment where you are actually having a conversation with your OpenClaw instance about trying to get it to recreate the Karuna exploits. I've heard some of that. That is gonna be absolutely hilarious. When's that one getting published?
D
I would like to get that out this week, Pat. And I've got to tell you, when I sent the demo last night, I was sure you were going to absolutely hate the concept. So when you said, I think this might work, I was like, okay, let's get this going. But, yeah, just as Claude created a C compiler by saying, hey, here's a working reference and I've deliberately removed a part of it, can you recreate it? I thought, why not do that with the exploit kits? Let's take certain elements off and see if an LLM can actually reason with it and recreate it from scratch, and maybe even help us, you know, find angles to deal with parts that have been patched out of the OS. I don't know. It'll be interesting to see what we come up with.
B
Yeah, that's right. So can you cajole OpenClaw into being your exploit writer? Once you remove part of the Karuna chain, can you get it to actually fix it? So, interesting experiment. Can't wait to hear the whole thing. It's sort of very strange hearing you have a conversation with a clanker that you have, you know, animated with an ElevenLabs voice. The whole thing's very weird, and sort of oddly compelling. Kind of creepy. Everyone will see what I mean when we publish that one. Real quick, too: just make sure, everybody, that if you do want to rush out and install the latest Claude tools, you are actually getting them from the correct Anthropic website. Because, people, this is something we've spoken about on the show with the team at Push Security, and now there's a 404 Media piece about it as well: there's a lot of malicious SEO, malicious Google ads for Claude that aren't Claude. So you go to what you think is a Claude download page and it's actually an install-fix thing that gets you owned. I'm guessing most people listening to this podcast are not going to get tricked by that, but the attackers are spending the money on the malicious ads because they are getting shells with it. So that's something to keep in mind. We've got a report here from Cybersecurity Dive saying that Lockheed Martin was targeted in an alleged breach by pro-Iran hacktivists. Eh, you read this story, it doesn't seem that compelling. These people claim to have stolen 375 terabytes of data from Lockheed Martin, including what they're calling the blueprints for the F-35 aircraft. Look, I'm guessing most of the sensitive blueprints for an F-35 are not going to be lying around on an internet-accessible LAN. This whole thing smells highly suspect. Adam, you had a look at this this morning and felt the same way that I do.
C
Yeah, the story that Cybersecurity Dive links to as kind of the upstream source has almost no detail. Basically, we have a Telegram post claiming some stuff, and that's about it. The only response from Lockheed Martin that anyone's got so far is that they are aware of the claims. But it just doesn't really pass the sniff test. And, you know, maybe they got some related documentation or something, but the idea of there just being some blueprints in an S3 bucket lying around doesn't seem super credible.
B
Yeah, yeah. We do have a fair bit to get through on the Iran stuff, though. First one of them is that CISA is urging companies to maybe have, like, dual-key controls on sensitive operations in Intune after this Stryker breach. And this seems like good advice. We actually heard from a listener, Matt Flanagan, who's an Australian security guy, he's been kicking around the industry for a while, and, you know, he wrote to us and said, look, CISA's announcement here is on point, because if you're an admin and you haven't done this, it's like a nine-click operation to go and just wipe everything. And that doesn't seem real smart. It's funny, actually, James, because when we first spoke about this, your immediate reaction was, why is it possible to do this so easily with Intune? You know, maybe they should time-lock it or something. But, you know, there is a dual-admin control for operations like this. Makes sense. It's just no one seems to know that it exists.
D
Yeah, like I said, it's really good advice from CISA, and Brad Arkin and I sat down and talked about this on the most recent Risky Business Features episode. Even if you do turn on this dual-key approver, or some sort of extra step in the way, like as not, they're still going to find a way around that. There might be an API way to do it, a PowerShell way to do it. Maybe it's just another phish they've got to do to get the second key to turn. The point is, though, don't think about turning things like this on as a complete mitigation. Think about it as just one of many things you can do to make it harder for the attackers. If it's harder for the attackers, they've got to spend longer in your environment, they'll be emitting more signals, and that's how you can catch them. Because that's where Stryker failed here: nine clicks, bang, gone. So fast, no hope of catching it. So, you know, good advice, but just remember it's not a complete resolution, it's just yet another way to make your environment a harder place for attackers to actually get stuff done.
B
Yeah, and just a note, too: we keep talking about these podcasts James is doing in Risky Business Features. You can go and find that feed in your podcatcher. Just search for Risky Business Features. That's our new podcast channel. Do subscribe to it. It would be great for us. You would be really supporting us and supporting James's work if you just go out right now to your podcatcher, type in Risky Business Features, or head to risky.biz, find that feed and subscribe to it. Adam, I'm guessing your take here is going to be broadly similar.
C
Yeah, much the same. I mean, ideally you would have as good a set of controls as you can possibly have around it, and multiple people involved is obviously a useful thing. The question is, if you're already an admin in Intune and the only thing standing between you and wiping everything is another Intune admin, then the path to get one is probably not that dissimilar from the path to get a second one, right? And being able to leverage, you know, between Active Directory and Azure AD to follow that path doesn't seem super unlikely. And I would have said that in this case the attackers maybe landed on Intune, got lucky, and were able to pointy-clicky their way through, and maybe they didn't have the expertise to go around these controls. But with LLMs these days, for navigating through the Microsoft ecosystem, you just ask Claude to help you out and it'll probably find a way.
B
You know, it's funny, man, I'm remembering an IRGC-linked attack against the Australian Parliament years ago. They actually managed to get directory access of some kind, like read-only directory access or something, but they got a bunch of stuff that was not good for them to have. And sources inside the Australian government actually said that they just happened to get creds for a misconfigured account, and they were really lucky, because there weren't many accounts with that misconfiguration. They just absolutely got so lucky. And I think, you know, James and I did an interview with the team at SpecterOps actually, about attack path management and AI and all this sort of stuff, and it just really is the case that there's always going to be those misconfigured accounts, right? There's always going to be that one you can hit that gives you that path to something like Intune. Stryker, meanwhile, has lodged a bunch of documents with the SEC. The 8-K filing says they're not sure if it's material yet, and they're rebuilding. They've described the attack as being contained, which I'm guessing means that they have evicted the attackers from their environment. James's joke this morning was: what environment? Their environment is gone. Their environment was deleted. So, you know, they evicted themselves. But, yeah, it looks like rebuilding has begun, and hopefully they are back up and running soon, because they're a very important company to the medical supply chain. Now, as Tom Uren and the Grugq put it in this week's Between Two Nerds, again, one of our other podcast feeds, you can subscribe to the Risky Bulletin feed if you want to listen to that podcast, but they did an episode this week about how it is currently raining iOS exploit kits, which is not something I think we've ever said before. I'm thinking of like a George W. Bush meme here, which is:
Sir, a second iOS exploit kit has hit GitHub. But, you know, James, you've obviously been following this one very, very closely. This one is called Dark Sword. It looks like it is being used by the same crew who were using the Karuna exploit kit, but as for the origin of the kit itself, it doesn't look like it was also L3Harris Trenchant. Looks like it came from somewhere else. But I mean, seeing one of these kits in the wild is pretty crazy. Seeing two of them in a couple of months is insane.
D
Yeah, it's just like the most incredible coincidence, I guess. But, you know, in saying that, I guess not surprising, because you're right, it does have the same hallmarks as Karuna, insofar as this was clearly once a very prized set of exploits that then went on to a secondhand market for some shady work with Russians targeting Ukrainians, and then ends up in, like, almost the dollar store of exploit kits, where it just gets snapped up and used to look for crypto wallets. You know, it's like the saddest end for the most advanced code.
B
Yeah, I think in a previous podcast, Tom and the Grugq described it, or was it Tom and Amberleigh in Seriously Risky Business? Tom described it as, like, you know, what was once a mint mid-1980s BMW M3 is now being used as a paddock basher, you know, driven around on a farm by a bogan. It's just a really sad end for something that was really beautiful once upon a time.
D
Yeah, exactly. And the great thing from that was it spawned an argument between you and I as to whether that was the E46 BMW or the E30 BMW. And I think we can probably say the Karuna was the E30 and this is more like the E46.
C
Not a car guy, so I'm just going to sit here looking blank, because I got no idea about any of that.
B
Yeah, about like 10% of the audience got that, but I'll allow it. Go on.
D
So, just in terms of the tech behind this: it's almost like the same playbook, and I found that fascinating, insofar as, if you zoom out of what Karuna did, Dark Sword does the same things. It lands in WebKit, gets you your read-write primitives, escalates up into your sandbox escapes, then you've got access to talk to the kernel, get your privesc. The one thing in this that I'm still pulling the thread on, that I'm curious about: the really fascinating thing about Karuna was its use of undocumented registers in the ARM processor to get basically an arbitrary read into memory that was otherwise protected by the page protection layer. There's no mention of that in this Dark Sword kit. So I'm kind of curious as to what they did that didn't require that. It's kind of interesting.
C
Yeah, there were some notes in it about some kind of user-mode PAC bypass thing, but no real specifics. Although I guess we've got the code now, so we can go dig in. But yeah, that also struck me: how did this work? Is it similar? Because it did feel like some work has been done on this kit after it was sold or shared around. I think Google's write-up has a bit of detail about where different users have bolted on extra stuff to meet their own particular needs or to solve bugs. And there are some examples of maybe the code got fixed at some point, and then you can sort of see the lineage, like the family tree of users of this code, starting to diverge a bit. There's some interesting nuance in here. And I guess as people dig through it more, we'll discover more of the nuggets of juicy detail that really only tragics like you and I would care about, but still super interesting.
B
I think a nice thing here, though, the silver lining, is that for people who are interested in learning a bit about exploit development, now there's a couple of repos with some really good stuff in them that they can use to just get an idea of how all of this works. And I don't think that's necessarily a bad thing. I think it's good for more people to understand the way this sort of stuff works. I should also clarify, too, because I used a term there that is extremely Australian, which is paddock basher. A paddock being a field on a farm, and a paddock basher is typically a car that is no longer suitable to be registered on public roads, so you just drive it around the farm and bash the crap out of it. That's why we call it a paddock basher. And yeah, these are the paddock bashers of exploits at this point. It's so sad. Being used to target Ukrainians, and then onwards to target Chinese cryptocurrency users and whatever. Just very, very sad. So we've linked a bunch of stories about that in this week's run sheet. This one is very relevant to you, James. Apple, according to TechCrunch, has rolled out its first background security update for iPhones, iPads and Macs to fix a Safari bug. Apple has previously rolled out one of these background fixes to macOS. And I guess the reason it's relevant to you is that one of the first projects you ever worked on at Apple was actually writing this bit of macOS to do these updates.
D
Yeah, I managed the team at Apple that looked after the Mac App Store, amongst a few other things. And this was back in the day when software updates were done through the Mac App Store, right? It feels quaint to think back to that time when you used to launch the Mac App Store to do your OS updates. We were basically the UI that was driving the software update code behind the scenes. And one of the features that I personally wrote was this thing called Code Red, which allowed Apple, with a specially crafted and very highly guarded software update configuration, to push something out that would be silently deployed and installed on macOS machines. Now, at the time, that was only for macOS, and I'd left Apple before it was first used. It was used in 2019 to basically patch Zoom, when Zoom released a version that had an open web server that could arbitrarily launch apps. So it was fun to see my work get used eventually after I'd left. Seeing this story this week is interesting to me, because it sounds like that simple little thing I created has been evolved significantly since then. The original version I did, I don't think it would have gracefully handled installing stuff that needed a restart. And this is something that piqued my interest in this story. I think the original write-up says that when the update had been installed, it only did a quick device restart rather than a longer reboot, and I'm really interested to know what exactly that means. When you install a software update on macOS or iOS, there's two, sometimes three reboots involved: boot the system into something that can write to the OS, deploy the fix, rebuild the caches, come back up in the right security environment.
So I just wonder if Apple has sort of ring-fenced stuff that they know they need to update often, looking at you, WebKit, and made it such that you can actually update those and there's a lighter method to restart the system. Maybe it's as simple as, like, a killall and the system comes back up without a full reboot.
B
Yeah, it's evolved. It's interesting. It's funny, man, because having you working with us is sort of like hiring a North Korean defector or something, right? Because Apple do not talk about any of their engineering.
D
For all you know, I am a North Korean defector given the state of AI.
B
Yeah, I mean, let's see if they send a team of ninjas after you to neutralize the threat. But yeah, we've linked through to the TechCrunch piece on that one. Look at update two on the mobile internet situation in Russia: St. Petersburg has had its mobile internet fully cut as well, which the War Translated account on Bluesky has called a Russian-style digital detox, which is not a bad gag. Just wanted to put that out there because it is strange. Is this a drone defense measure? Is this Kremlin paranoia? We don't know. Things are getting a little bit interesting in the war over there as well. The Ukrainians have started targeting soldiers instead of materiel, and that is actually proving to be quite successful; they are succeeding in attriting Russian forces at the moment. So it just feels like the pendulum is swinging a little bit back towards Ukraine's side, and things are a little bit politically sketchy in Russia. So, you know, always careful about being too hopeful about anything happening there, but there are some interesting signs, let's just put it that way. But we've got a piece here from Matt Burgess and Lily Hay Newman over at Wired, and this story is about Moxie Marlinspike. Apparently he's going to start applying cryptography to AI, and we are all somewhat confused as to what end. Adam, walk us through this story, because none of us can really make heads or tails of it, but let's start with you.
C
Yes. I mean, most people would know Moxie from his background with the Signal project. He did a lot of work on the underlying protocol and then led the project early on in its life. One of the things he's done later on is this thing called Confer, which is his work on integrating, I guess, some ideas from the Signal end-to-end model into the AI world. And one of the things they're trying to do there is make chatting with an LLM that is hosted in somebody else's environment feel more like a private chat. The current model of bolting LLMs into existing chat applications has a security model that doesn't really match people's mental one. So the idea, I think, is he's trying to come up with a way to bring some of the privacy and security guarantees that Signal provides into a world where you're chatting with a computer. And obviously the idea of end-to-end crypto when one of the ends is Meta is kind of arse-backwards. And I think that's the thing we've all been struggling with: even if you had it, how is this different from SSL between you and Meta? I mean, you're protected against in-transit communications interception by the underlying SSL. I think what they're going for here is to try and anchor it into some kind of trusted execution environment. So, like Apple's Private Cloud Compute, you have an LLM conversation where the operator of the hardware and the operator of the LLM stack doesn't just by default get access to everything. They would have to do some extra work to, for example, train their models on the contents of your chat or whatever else. So I think they're trying to build a system where the client can, to at least some degree, guarantee that you are only talking to some GPU somewhere in a cloud and that there isn't something else involved.
And Apple tried to do that by making Private Cloud Compute sort of publicly auditable, and having some kind of guarantee that they're only running combinations of hardware and software that have already been approved, or that have some manner of inspectability. But bodging all of that together into something that really means something to the average person? Kind of difficult. Doing it inside Meta, which is not exactly the world's most trusted technology provider? Also kind of difficult. So, yeah, feels like Moxie's got his work cut out for him here, you know.
B
Yeah, it's funny you mentioned Apple's AI there, because, like, what AI? I'd had it disabled, but I had to re-enable it recently when I went on my trip to Brazil because I wanted to use the Live Translate stuff, which is pretty cool. But now I keep getting those Apple notification summaries that are just comically indecipherable. Like, AI is supposed to be really good at summarizing stuff, but you never know what your notifications actually say when you get a summary from your iPhone telling you what your notifications are. James, is this your understanding of this as well?
D
Yeah, 100%. For all the points that Adam just mentioned, I just don't get it. You know, you have to trust both sides of the conversation for this to be worth anything. And as Adam said, if they could do what Apple did with private cloud, great. To be honest, this just feels like Meta really struggling to get a good headline out there: hey, look at us, we're still relevant with AI, and hey look, we got that Signal guy to come along, and yay, privacy will result.
B
I mean, I get a different vibe off it, to be honest. I don't think anyone working in comms at a company like Meta thinks that this is a good headline, because they just don't think about stuff like this. Right? I think this is more likely some project manager or someone in Meta trying to do the right thing. And that's how you wind up with a lot of good stuff, right? It's always one champion in an organization trying to do something. I just don't think it's been clearly articulated here, and we have to wait and see what he actually comes up with. But at the moment, yeah, we're all just a little bit confused. Okay, moving on. We've got a piece here from Raphael Satter over at Reuters about an online platform. It's a Crime Stoppers-style deal where you can route law enforcement tips to various law enforcement agencies, and that company got owned. Eight million confidential tips have been stolen, and that ain't good if you are dropping a dime on a Mexican drug cartel. And in fact we have the original write-up, which sent me down a rabbit hole today, from a news outlet called Straight Arrow News, which was founded by a Republican mega-donor billionaire in the US who just wanted straight-arrow, unbiased news. The report's actually pretty good; they've got a really comprehensive write-up on this incident. But yeah, look, I think this is just one of those cases where we've got a whole bunch of information in the hands of God knows who, which could put people's lives at risk. You would really hope that police agencies would do better due diligence on organizations like this to prevent their, you know, exposed S3 bucket or whatever from leaking this sort of info.
Adam, do we have any idea how this breach may have gone down or is it still a big mystery at this stage?
C
So the hacker group that was responsible for doing it, and that subsequently leaked it to Distributed Denial of Secrets, they've got some text files that are not particularly detailed. They mention insecure direct object reference as a particular weakness, and they mention making something like 8 million requests for those 8 million records. So it feels like a straightforward direct object reference bug.
B
Gee, their WAF did a really good job of stopping that, right? Like, 8 million records, 8 million queries, from one IP presumably, and nary a peep.
C
Yeah, so I mean, that's kind of what it feels like. Which, you know, I guess law enforcement agencies doing due diligence on their suppliers are unlikely to go try incrementing the tip ID by one and seeing what happens. They ought to, but that seems unlikely. But yeah, that's as best we've got. The actual data itself, I think Distributed Denial of Secrets are attempting to constrain who gets access to it. But once the stuff is out, it'll be bad, and as you say, the kind of people who are giving tips to law enforcement about organised crime or whatever else... it's just not a pretty story for anyone involved.
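[Editor's note: for listeners unfamiliar with the bug class being described, an insecure direct object reference boils down to a handler that trusts a caller-supplied record ID and never checks authorization, so an attacker can just count upwards. This is a minimal hypothetical sketch of that pattern, not the actual tip-line system; the data store and function names are invented.]

```python
# Hypothetical IDOR sketch: the handler looks records up by the
# caller-supplied ID and never checks whether the requester is
# authorized to read that record.

TIPS = {i: {"id": i, "tip": f"confidential tip #{i}"} for i in range(1, 6)}

def get_tip_insecure(requester, tip_id):
    # BUG: `requester` is ignored -- no ownership/authorization check.
    return TIPS.get(tip_id)

def enumerate_tips(requester, max_id):
    # The attack: increment the ID and collect every record that exists.
    found = []
    for tip_id in range(1, max_id + 1):
        record = get_tip_insecure(requester, tip_id)
        if record is not None:
            found.append(record)
    return found

if __name__ == "__main__":
    stolen = enumerate_tips("attacker", 10)
    print(len(stolen))  # the attacker recovers every tip in the store
```

The fix is to make the lookup conditional on who is asking (or to use non-guessable identifiers plus server-side authorization), which is exactly the check the sketch above omits.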
B
And James, you know, we were talking about this, and your point on this one is that deanonymisation tech has come a long way thanks to AI, particularly this year. And even though these tips are supposed to be anonymized, in some cases it's going to be very clear who someone is. There are even tips that have been pulled out by this Straight Arrow News website where they're saying don't tell the person, because just this information being exposed means they're going to know who it is. But your point more broadly is that deanonymization now is a lot easier than it used to be.
D
Heaps easier. You know, think back to the state of the art at the time of the Netflix Prize. That was the big "oh wow" moment, where something like three data points is all that's required for a really high-accuracy deanonymization of someone. And now you've just got more and more of this data coming out. You can marry this up with the private data that's available about location and other things, tracked through marketing and ad tech, et cetera. You know, for all the work going into Section 702, it just feels like we're almost at the point where being anonymous is kind of not possible anymore, unless you've got an extremely high level of opsec. Which is a sad way to be.
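[Editor's note: the "three data points" point above refers to linkage attacks, where an "anonymized" dataset is joined against an auxiliary dataset that still has names attached. This toy sketch shows the mechanic with invented data; real attacks like the Netflix Prize deanonymization work the same way at scale, with fuzzier matching.]

```python
# Toy linkage attack: a few quasi-identifiers (ZIP, birth date, sex)
# are enough to single a person out when joined against auxiliary
# data that still carries names. All records here are made up.

anonymised = [  # names stripped, sensitive field retained
    {"zip": "90210", "dob": "1980-01-01", "sex": "F", "tip": "cartel sighting"},
    {"zip": "10001", "dob": "1975-06-15", "sex": "M", "tip": "fraud report"},
]

auxiliary = [  # e.g. voter roll or ad-tech data, names attached
    {"name": "Alice", "zip": "90210", "dob": "1980-01-01", "sex": "F"},
    {"name": "Bob", "zip": "10001", "dob": "1975-06-15", "sex": "M"},
]

def reidentify(anon_rows, aux_rows, keys=("zip", "dob", "sex")):
    # Join the two datasets on the quasi-identifier columns.
    matches = []
    for a in anon_rows:
        for p in aux_rows:
            if all(a[k] == p[k] for k in keys):
                matches.append({"name": p["name"], "tip": a["tip"]})
    return matches

if __name__ == "__main__":
    for m in reidentify(anonymised, auxiliary):
        print(m["name"], "->", m["tip"])
```

Stripping names alone doesn't help when the remaining columns are unique enough to act as a fingerprint, which is why "anonymized" tip data plus commercial location and ad-tech data is so dangerous in combination.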
B
Yeah. Speaking of commercially available information, look, there's an unverified and probably not true report from some sort of military journal or podcast spreading over X, saying that the way the Iranians were able to locate the American troops who were killed in Kuwait was through ad tech tracking. Which, I don't know, man, they were sitting in a double-wide that was probably quite visible, so I don't know how true that is. But we've also seen news that Kash Patel's FBI has resumed the purchase of commercially available information. This is a big issue, one that we've been talking about, particularly in Seriously Risky Business, for years now, and it has a big impact on anonymity. Now look, let's move on to our next piece, and this one is from Krebs on Security — Brian Krebs has written it up. The Aisuru, Kim Wolf, Jack Skid and Mossad botnets have been disrupted by the Justice Department in conjunction with authorities in Canada and Germany. Now, what's funny about this is we've talked a bunch about Kim Wolf and whatnot over the last month or so, because Brian Krebs has been writing a series of articles about them. Elements of it shipped with Android set-top boxes. Another way it was deployed was through people's residential proxy networks: people would rent access to the residential proxy network, and the operator of that network would basically go onto the local IPs of the user's network and start infecting their Android set-top boxes or whatever. End result is you wound up with a residential proxy network inside another residential proxy network, and you also wound up with a lot of DDoS capacity.
But my main takeaway from this whole thing is: when Brian Krebs starts writing a series of articles about your botnet — when he's like, this is part one of a series we're doing on this botnet — look, it is time to move to Belize, fellas. It's time to pack it up and get the hell out of there. I mean, Adam, what more can we say?
C
I think you summarized it pretty neatly there. When Brian Krebs starts doxing you, it's just going to end badly. Either you're going to end up in jail, or all your infrastructure is going to be disrupted, or he's going to phone your mum. It's just not going to end well for you, and it's time to find a new hobby, or a new way to do business, or just get out and move on. Because Krebs on your case? That's just end times. Move on.
B
Yeah. Now, moving on to another story: noted whack job and FCC chairman Brendan Carr has made an announcement — the FCC has made an announcement — that they're going to make home routers great again. And the way they're going to do this is by banning the import of new models of consumer-grade routers. So basically, if a model of router does not yet have an FCC mark on it, that's it: you are not able to import it into the United States if it's foreign. Which seems a little bit bonkers. There's a process here for getting some sort of exemption — I think it involves the DOD and the FCC, or DHS or something, it's not really clear. But I think the idea here is, if you want to start importing routers, you need to go through some sort of bureaucratic process, which presumably involves buying a lot of Trump Coin or Melania Coin, or a series of Mar-a-Lago memberships, in order to be allowed to import your home routers into the United States. I mean, Adam, you looked into the FAQ on this this morning. That's about the long and short of it, right?
C
Yeah, pretty much. New devices are going to have to go through some kind of approval process; everyone is disallowed by default. And I think you're right that this is just going to involve either bribing someone or doing some kind of administrative process to get on the allow list to be permitted to sell or import routers into the US. The thing that seems strange, though, is the incentives are just all out of whack. I mean, there was this Cyber Trust Mark program that we talked about a while ago that ended up being abortive, where they were going to get Underwriters Laboratories to test the security of devices and have some kind of cybersecurity rating for things. This feels like the crazy Trump-world alternative to that program, since that one kind of failed. So we've now got this, where, as you say, you buy Trump Coin or whatever to be allowed in. But the thing is, it's all foreign routers. Like, what domestically manufactured American home routers are there? Even the American ones are still made in Taiwan or Indonesia or Vietnam.
B
So James, you looked into that part of this, right? Which is like, Netgear's stock went up, and it's like... but their stuff is foreign-made too, right? What?
D
Yeah, yeah, but, but why?
B
Like, it's just.
D
The only thing here that I think could be successful — we've got to remember, folks, that for us as tech guys, yes, we go down to the Best Buy or whatever store and find the best router we can possibly get, with the latest 28 Wi-Fi antennas on it. But that's not what your typical cable or Internet subscriber does. They take whatever router comes from the carrier. So I'm trying to imagine a positive outcome for this, and if this forces a carrier to adopt a router by default that's more secure, even if it costs them more, maybe that's good. But I'm really out on a limb here, trying to add some logic to this.
B
Yeah. And I'm just checking. And for anyone who's curious, Melania Coin is trading at about 12 cents at the moment. So get cracking, everyone.
D
Hold on, hold on.
B
Yeah, that's right. Might be a good buy opportunity, you know what I mean? Just ride the back of corruption coin. That's, you know, go long corruption. There's an easy way to do it now.
D
Well, the Trump router. The Trump router is surely what comes out of this. We've got the T1 mobile — I want to see the Trump router.
B
Yeah, that's it, that's it. Now, moving on to another story: the White House, through statements from a couple of different officials — National Cyber Director Sean Cairncross, and Thomas Lind, a senior advisor at the Office of the National Cyber Director — has come out and rubbished this idea that the United States government is going to start issuing letters of marque allowing private operators to go out and just hack back and whatever. Our colleague Tom Uren predicted this in his Seriously Risky Business newsletter over the last six months. He said this is a bit of a distraction, really, and it's not where things are likely to go. When companies like Google announced they were going to launch threat disruption units, everyone thought, oh, they're going to get their letters of marque. And no, it just hasn't worked out that way. So we've had those announcements from the White House, and Google has launched its threat disruption unit — doesn't use the word offensive anywhere. And their most recent takedown, which I think was just over the last week or so, was mostly legal takedowns of domains and whatnot. But I think the point is companies like Google are going to get more aggressive; I just don't think they're going to be popping shells under colour of law. Right? I don't think that's the way it's going to go. Adam, thoughts here?
C
Yeah, I think you're right. I mean, we've enjoyed talking about the idea of letters of marque, but it's just so complicated and impractical to do in real life, so I'm not surprised to see them pouring cold water on it. And the current disruption process — taking down domain names, taking down C2 infrastructure, taking down hosting — does basically work; it's just clunky and takes a long time. I think Google was talking at RSA this week about basically smoothing that over and integrating it, so that all the players in the industry are working together better. And maybe that gets us a bit closer to being able to do the kind of stuff that Krebs is getting done to the botnets, without it taking a five-part Krebs story to make it actually happen.
B
Now, this next story we're going to talk about is my favorite of the week. It's not a cybersecurity story, but you're going to have to listen to me talk about it anyway, because it is just too funny. A co-founder of Super Micro has been arrested for smuggling $2.5 billion worth of Nvidia GPUs to China. And what's really funny about this — I mean, this is a billionaire co-founder of Super Micro — is the way the scheme worked: he would get Southeast Asian companies to order a whole bunch of Nvidia kit, which would arrive in warehouses in Southeast Asia. Then he would swap the labels showing it had all the Nvidia stuff inside onto replica hardware, which would stay in those warehouses, and the real stuff would be sent across to China. So, you know, we've talked about how it's funny that China always claims DeepSeek is being trained on a series of rack-mounted potatoes, right, that they don't have Nvidia stuff. But yes they do, and this is how they're getting it. And the detail in the indictment is just absolutely hilarious. The feds, or NSA, or whoever, were all up in his messages, and they were also all up in the security cameras of the facilities where the labels were being swapped. And here's this guy, a billionaire co-founder of Super Micro, using a hairdryer with his wife to swap the labels personally. And you know, say what you want about violating these sorts of trade restrictions, but this is a win for customer service. This is a billionaire who's happy to get his hands dirty and do all of this. The whole thing is just absolutely crazy. Adam.
C
I mean, it's funny, because Super Micro gear is actually pretty good. And Super Micro the company has had all sorts of financial reporting problems — they haven't done themselves many favors. So in that respect, I'm kind of not surprised that C-levels or execs at Super Micro were involved in kind of shady sanctions busting. But it does seem weird, for all of the things they have screwed up over the years. I mean, we had that story about grain-of-rice chips hidden on motherboards, which kind of turned out to be bunkum, but there's just been a lot of weird smells around Super Micro over the years. And for this to be the thing that sees some of them go down is just kind of weird. On the other hand, if you want to buy x86 server kit and you don't want to buy expensive big names like Dell or HP, the Super Micro stuff, as generic rack-mount server gear goes, was pretty good. And a lot of their gear is still used in some of the big cloud operators — their custom hardware is built by Super Micro under contract. So they are deeply embedded, and very American.
B
Right?
C
I mean, despite all of the Taiwan integration, if you were going to name an American hardware company, Super Micro is kind of more American than any of the router vendors we imagine the FCC are talking about. So, yeah, it's a wild ride. When you posted the story I was like, what? What do you mean he was doing it himself? It's crazy.
B
But that's where we are. Moving on — we've got a couple of skateboarding dogs to get through this week. The first one is that there has been an attack against one of those companies that makes the breathalyzer interlocks for people who've been done for drunk driving — they have them installed in the car and need to blow into them before they can drive. And there was some attack that bricked them, so people couldn't drive. I mean, probably a net win for public safety, right, if the sort of people who drive drunk can't drive their cars? I don't know. Probably not the end of the world. What do you think, Adam?
C
Yeah, probably, probably. I think this was the back-end server that supported calibration going away. So they kept working for a while, but there was a regular recalibration process that wasn't happening, which is why the cars then stopped working. And I think, yeah, net result: still probably a win for everybody.
B
And now we've got our two favorite AI scams of the week. A man has pleaded guilty to netting 8 million bucks in an AI-generated music scheme, where he would load up Spotify and streaming services with thousands of AI-generated slop songs and then get bots to actually listen to them and get money that way. I mean, that's pretty straight-up bot fraud, so he's going to be in a bit of trouble. But this dovetails nicely with our final item, our final skateboarding dog of the week, which is an indictment in Israel of two guys who basically approached Iranian intelligence and said, hey, I'm totally an Israeli spy and I'm going to sell information to you — and they were just making stuff up with AI. And it's really funny. They've been charged; I personally think they need, like, 10 bucks and a sun hat for having done this. But it's full of all these amazing details. Like, one of them was using a stolen identity of an Israeli citizen that they got out of Telegram, and then when the Iranians said, I need proof that you're real, send me a photo of you doing the okay hand gesture, he just Grokked it — used Grok to generate an image of the person in the licence throwing an okay sign. Just really, really wild stuff. I'm guessing you had a look at this one, James? Any thoughts?
D
It's just beautiful, right? This is our future now: nothing is real, no one is who they say they are. And I just comically imagine the moment when this scheme came together. I sort of imagine two people sitting at the pub, like, hey, reckon we can make some fake intel? Yeah, I reckon we can make some fake intel — let's see if we can sell it. And then it just gets wildly out of hand: they do manage to sell it, then they've got to verify who they are. But who would have seen the turn this takes? Which is: yes, you successfully sold it to the Iranians, good job. You sent them fake intel, you drained their coffers, you took money off the regime. Good job. Here's your espionage indictment.
B
It does seem right. Yeah. It's funny, too, because an Israeli fellow sent this one to me, and he pointed out that someone who actually had done real espionage for the Iranians wound up getting paid a thousand bucks, and these guys got away with tens of thousands in IRGC crypto, which is just the cherry on top, really. But guys, we're gonna wrap it up there. Thank you so much for joining me for this week's news segment — really fun one this week. Adam, James, great stuff. Can't wait to do it again next week. Cheers.
C
Yeah, thanks, Pat. We'll see you then.
D
Yeah, perhaps. See you then.
B
That was Adam Boileau and James Wilson there with this week's news segment — big thanks to both of them. It is time for this week's sponsor interview now, and we're talking with Braden Rogers, who works over at Island. Island is the maker of the Island browser, which is an enterprise browser that can do all sorts of really interesting things. You know, the browser is kind of the OS these days, so it makes sense that we would want better instrumentation and visibility there. Island is really popular in heavily regulated industries, where people need to keep an eye on where company data is going and what people are doing with it. And in that spirit, today's conversation with Braden is about how customers are using Island to make sure that people are only using the sort of corpo-sanctioned AI tools. So if they're trying to go off and do stuff in their personal OpenAI account, blocking that is one option. But with Island, you can get a little bit more granular. You might say, hey, they can go over there, but they can't cut and paste into that, or they can't grab something from the file system and just throw it into OpenAI. So, yeah, here's Braden Rogers talking about how Island is helping customers deal with that problem. Enjoy.
A
It does vary a lot, but I'll tell you, it's interesting. There's pressure from a lot of different angles to use AI. Number one, the users feel the pressure: they're intellectually curious, but everybody's also telling them that if they don't learn AI, if they don't figure out AI, their job could be at risk. So they're all experimenting and trying stuff. You've got the lines of business pushing tools specific to their business into the org — how do I bring this thing into the call center and make my call center folks more capable? And you've got the executive pressures as well: the boards and all the others are looking at the competitors, going, how do we stay competitively capable? So it's just pressure from all over. I walk in all the time and it's all over the map, but we see a couple of common factors. I see the old-school traditional block page — and I'm sure you know these things, the upstream proxy or SASE provider, whatever it is, which at the end of the day is some block page. You try to go to Claude, you're blocked; you go to wherever else, you're blocked, until you go to the sanctioned destination. And then it starts getting tricky. We start seeing challenges with tenancy: being able to actually discern tenant A versus tenant B, because they're all using the same URLs and they all look the same to each other. So a lot of the time they're struggling.
B
Well, I mean, this is why, in the intro that I will no doubt put on this interview later, I mentioned that this is actually a tricky problem — because it's not like you can just block a domain. As you say, there's a tenancy problem there, which is: how do you know they're using the corporate account versus a personal account? The only way you're going to do that is via something like Island — or, you know, a plugin-based product can also surface these sorts of issues. But I guess it's hard to know how prevalent it is generally. I'd be stunned if there's any enterprise of scale out there where this isn't happening, at least to some degree. Right?
A
Yeah, 100%. I had one recently where they had an executive in the company — a very powerful executive — who insisted: it's not the company standard, but you're going to give me access to Gemini. And they're literally all screaming about it, but they can't do anything about it, and they're looking for an answer. What they really want is an answer that's not a block page. They want an answer that lets that person have access to Gemini, but the company data doesn't go there. So how do we balance that? It's almost impossible physics. And that was one we did a really good job of solving with the thing we've talked about in the past called "say yes", where you can just say yes to personal stuff, and corporate data won't spill into the personal stuff. But that was an interesting one, because again, it was pressure from a powerful person in the organization that just goes against everything the policies of the org wanted. They just preferred Gemini for themselves.
B
Yeah. Now, when it comes to these DLP use cases, I find Island actually much more compelling than a lot of these plugin-based products, because you've got things like the ability to apply file system restrictions to different domains. Right? So you can say someone can go to OpenAI — they can go to a chatbot there — but they can't hit the file system at all; they can't upload any files to it. So I guess that's the thinking there, right? Being able to apply those DLP-like controls to chatbots, AI bots, everywhere.
A
Yeah. The important part is balancing the control with context. So, to the thing we talked about a moment ago — tenancy, for example. If someone's going to the corporate Gemini tenant, well, cool, those things are freed up. If they're going to the personal tenant, they have access to it, but that stuff's not available in that particular situation. So: a different policy over the different tenants in the process. But certainly, to your point, the governance lives locally in the browser, because the natural habitat of all this stuff — where the majority of user work starts — is the browser. And whether it's Chrome or Edge with an extension control, or whether it's the full enterprise browser, controlling at that presentation layer is most important.
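[Editor's note: a rough sketch of the kind of tenancy-aware, per-domain, per-action decision being described here. The policy keys, domains, and the tenant-detection heuristic are all invented for illustration — this is not Island's actual policy engine, just the general shape of the idea.]

```python
# Hypothetical tenancy-aware DLP policy: the same domain gets different
# allowed actions depending on whether the signed-in account is the
# corporate tenant or a personal one.

POLICY = {
    # (domain, tenant kind) -> set of allowed actions
    ("gemini.google.com", "corporate"): {"chat", "paste", "upload"},
    ("gemini.google.com", "personal"): {"chat"},  # "say yes", but no data out
    ("chat.openai.com", "personal"): set(),       # blocked outright
}

def tenant_kind(account, corp_domain="example.com"):
    # Toy heuristic: same sign-in URL for both tenants, so distinguish
    # them by the account the user actually authenticated with.
    return "corporate" if account.endswith("@" + corp_domain) else "personal"

def allowed(domain, account, action):
    actions = POLICY.get((domain, tenant_kind(account)), set())
    return action in actions

if __name__ == "__main__":
    print(allowed("gemini.google.com", "ceo@example.com", "upload"))  # corporate tenant
    print(allowed("gemini.google.com", "ceo@gmail.com", "paste"))     # personal tenant
```

The point of doing this at the browser layer is that this is the only place both signals are reliably visible at once: which tenant the user is signed into, and which action (paste, upload, chat) they're attempting.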
B
Now, normally we're used to hearing examples of this sort of stuff going wrong in, you know, an SEC filing, where there was a data exposure because a staff member accidentally pasted a whole bunch of sensitive information into a personal account, or blah, blah, blah. You weren't even aware of this one, because it only happened a couple of weeks ago, but it's very funny: a guy at the Chinese Ministry of State Security actually got in trouble recently because he had been using ChatGPT to summarize internal documents. If there's one example of this sort of stuff that belongs on the slide deck, that's got to be it, right? An actual Chinese intelligence operative pasting stuff into an American chatbot. But why don't we pull it back for a second: I understand there are rules and governance concerns about corporate data going into these sorts of things, but what are the practical concerns about where this information can wind up if it is pasted into a personal AI chatbot? What's the actual concern — what's going to happen there that's bad?
A
Well, I certainly think there's the fear of the unknown, because you don't know how it could be used. Is it being used by the AI provider for their own purposes, for their own organization? It's one thing to make their models better; it's quite another to make the intelligence of the model better, because now your intelligence is feeding your competitors as they use the same models. So you could be inadvertently in it — especially when you start getting into purpose-built models or purpose-built providers. Legal providers, for example: you're feeding your legal documents in there, and then all of a sudden competing legal firms — they're not seeing your data specifically, but the model is certainly better at its job and more in tune with the legal system because of the work that you've put in.
B
But wouldn't that assume that the data that you're submitting to those, you know, queries is actually then being used on the next training run for those models? I mean, is that how they do it?
A
I think in some cases that does exist. When you go into the settings of the providers, there are core settings — I know in some of the OpenAI universe there are checkboxes: do not use my data for training, for training the broader ecosystem of learning on that particular model or that particular provider. So at the end of the day, they do use the data on the back end. I'm certainly not an expert on how they're doing all that and how it ultimately carries over to how other clients may use that data, but those are the kinds of fears people have. And obviously the example you mentioned with China — okay, cool, they used that. Now I'm sure there's somebody sitting in a government somewhere going, how does that get used against us somewhere down the road? Maybe other nation-state stuff, or something else for the provider — who knows what it is.
B
So, yeah, it's funny, man. That Chinese MSS example — that's like someone from the Office of the Director of National Intelligence doing the same thing but using DeepSeek, right? So there's the national security concern there. I mean, look, when we started off this interview, you were talking about how there's a lot of pressure on people to use AI, and everyone's worried that if they're not using AI they're going to get left behind. I'm guessing in most situations, though, there is going to be a sanctioned corporate chatbot included with the productivity suite. Like, we use Gemini because we're a Google shop; other people are going to use M365. I'm guessing where this is most a problem, though, is in some of these regulated industries where they aren't using AI in a sanctioned way yet. Is that where this pops up as a serious concern? And I understand, too, that those regulated industries are where Island finds a lot of its customers, because they're likely to be most cautious about AI. So is this a thing that's popping up mostly in those regulated verticals?
A
Yeah, the example I used earlier was exactly that. It was a financial services example where the executive wanted to use a specific preferred tool, maybe because there were no other options. And at the end of the day, the sanctioned option may not be what someone prefers. In the past, I've joined calls where the other end brought recording tools to the table, and someone on the call goes, hey, wait a minute, we have a corporate standard recording tool, are you using that one? And they preferred the other one because it gives them a better transcript or whatever. So the user's preferences start weighing into some of this stuff, which creates the Wild West, the same thing we saw with shadow IT years ago. It's just that on steroids, because of the intellectual stimulation for the user. Everybody's curious about it and everybody has their preferences. Not everybody wants to use Copilot. Somebody wants to use Anthropic because maybe they're a developer, they lean more in that direction, and they hear more about the capabilities of Copilot or Claude in the development world. They may want to experiment with Claude Cowork, and the company doesn't provide that with their Copilot environment or their ChatGPT environment. So as a result, they start experimenting with those tools on their own.
B
Yeah, I'm with you, I'm with you, I get it. So what's the desired end state for most of these orgs? Because you mentioned earlier that you've got your sort of block page thing, right? That's one way to do it: you can just block it. And then you've got different controls on the file system for different tenants, getting a bit more granular. I'm guessing these features have been around long enough now that you've got a sense of what the average org is doing with these controls. Are they going more the block route, or more the fine-grained access control route?
A
Well, the challenge is they're going the block page route quite often, because that's the tooling, that's the status quo. When they need to get into the granular stuff, the tenancy thing is a real challenge. I see that all over the place. They just struggle to gain tenancy control.
B
And how are you doing that? Are you doing that by sensing, like, logging when someone's logged in with their corporate domain or something? Is that how you're doing that?
A
Yeah, you can do it a number of different ways. That is the advantage of sitting at the presentation layer. I can see the input of the credential and know, oh, they just plugged in the corporate credential, that's this policy; they plugged in this credential, that's this policy. I can see attributes that are passed by the provider. Sometimes they throw attributes in the HTTP headers, and sometimes you can filter those on firewalls, et cetera. The challenge is there's just no standardized way to present tenancy. One provider does it this way, another provider does it that way in the SaaS universe, and it's the same in the AI universe. And as a result you have to have a number of different angles to hit it from. Sometimes it's attributes on the screen. There are a couple of SaaS providers we work with where nothing presents the tenancy except the login username, and then they actually throw the tenancy into the DOM of the application on the screen to show you what tenant you're working in. Those are things we can see because we're living at the presentation layer and in the process, too. So a combination of these things usually gives us a pretty foolproof way to identify tenancy. I've not yet run into anything that's been presented to us where we couldn't find our way through it. Again, the status quo is really hard.
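The multi-signal approach described here can be sketched roughly as follows. This is a minimal illustration, not Island's implementation: the corporate domains, tenant IDs, and the `X-Tenant-Id` header name are all hypothetical placeholders, since as noted there is no standardized way providers present tenancy.

```python
# Hypothetical sketch: classifying a session as corporate vs. personal
# from the three signals described above (login identity, response
# headers, and tenancy rendered in the application's DOM).
# All header/field names and IDs here are illustrative assumptions.

CORP_DOMAINS = {"example.com"}       # sanctioned corporate email domains
CORP_TENANT_IDS = {"tenant-1234"}    # known corporate tenant identifiers


def classify_tenancy(login_user, response_headers, dom_tenant_label):
    """Return 'corporate', 'personal', or 'unknown' from available signals."""
    # Signal 1: the credential typed at the presentation layer.
    if login_user and "@" in login_user:
        domain = login_user.rsplit("@", 1)[1].lower()
        return "corporate" if domain in CORP_DOMAINS else "personal"

    # Signal 2: tenant attributes some providers expose in HTTP headers.
    tenant = response_headers.get("X-Tenant-Id")  # illustrative header name
    if tenant:
        return "corporate" if tenant in CORP_TENANT_IDS else "personal"

    # Signal 3: tenancy a provider renders into the application's DOM.
    if dom_tenant_label:
        return "corporate" if dom_tenant_label in CORP_TENANT_IDS else "personal"

    return "unknown"
```

A real enterprise browser would feed these signals from the live session; the point of the sketch is that no single signal is reliable across providers, so the classifier falls through them in order.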
B
Yeah, I mean, I guess the question was: are people going for that sort of tenancy identification thing and just blocking everything else, or are they dialing back what people can do in those non-corporate sanctioned tenants? Like, are they still allowing some access to the unsanctioned stuff, I guess, was the question.
A
More often than not, I'm not seeing people allow the personal stuff. I see the block page for the personal stuff, and even more often for the non-sanctioned providers. What they would love is this kind of holy grail: you can go to your personal stuff, we don't care, but the company data is not at risk and everybody wins. The challenge is that's just very difficult to pull off. And to the point you made earlier, it's a lot easier to pull off when you're controlling the mechanics of the browser, and we find a lot of value in that. As for the end state you asked about earlier: orgs are going to be multi-provider and multi-model. Every large org is going to have a Microsoft license, and Anthropic licenses, and Gemini licenses, and OpenAI licenses in different parts of the business. The legal team is going to use Harvey and the medical practitioner is going to want to use ChatGPT Health. So you're going to have different environments, and you're not going to want 10 different front doors to each one of those. You're going to want a homogenized front door, and you want to orchestrate the right provider to the right user at the right time, based on context. This user's a developer? Let's orchestrate Claude to be part of their ecosystem and part of their workflows. But this user is a designer on our marketing team? Let's make that Grok, because that's our preferred image creation tool.
B
Yeah, I mean, this just becomes like a directory thing, ultimately. It's about permissioning, provisioning. It's provisioning and directory at the end of the day. But you get to enforce it because you've got the presentation control.
A
Yeah, ultimately you're going to need two things in that process. You're going to need context about what the user is, and directory services doesn't tell you enough about that. Directory services tells you somebody's title, but it doesn't tell you the day-to-day work they do. It doesn't observe their workflows and see that one minute they're working in a system over here doing ticket triage, and the next minute they're writing a summary of some root cause analysis. If you can observe those things, you can build a profile and learn what the user does, and then you can make recommendations on the right model to be used at the right time, and on the right provider. And when you bring content to the table too, you can say: the context is this user doing this, the content they're engaging with is code, so we should probably use Claude for that. But the user's asking about the weather in Seattle? Let's use free ChatGPT for that query. So content and context, orchestrating the proper provider at the right time, that's the ultimate end state I see a lot of organizations going toward, rather than a fixed state of one provider for this user and a different thing tied into a different provider. You're not going to want the Atlas browser just for OpenAI's world, and then the Comet browser for Perplexity, and then the Gemini browser with Google, and the Copilot browser. You don't want different doors. You're going to want something that orchestrates those things, and that's what we're doing with the enterprise browser.
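The "content plus context" routing described above can be sketched as a simple policy function. This is a hedged illustration of the idea, not any vendor's logic: the role names, provider choices, and the crude code detector are all assumptions made up for the example.

```python
# Hypothetical sketch of context-plus-content provider orchestration:
# pick a model provider based on who the user is (context) and what
# they're working with (content). Roles and providers are illustrative.

def looks_like_code(prompt):
    """Crude content classifier: does the prompt look like source code?"""
    markers = ("def ", "class ", "import ", "#include", "();")
    return any(m in prompt for m in markers)


def route_provider(user_role, prompt):
    """Choose a provider from user context and prompt content."""
    if looks_like_code(prompt):
        return "claude"      # route code work to a coding-focused model
    if user_role == "designer":
        return "grok"        # example: preferred image-creation tool
    if user_role == "legal":
        return "harvey"      # example: domain-specific legal assistant
    return "chatgpt"         # general-purpose default for everything else
```

In practice the "context" side would come from observed workflows rather than a static role string, which is the gap the guest says directory services can't fill.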
B
No, it makes a lot of sense. Makes a lot of sense. All right, Braden Rogers, great to see you again. It's been a little while. Good to chat to you about all of that, and we'll catch you again soon. Thank you very much.
A
Thanks, Patrick.
B
That was Braden Rogers there, chatting about how you can use an enterprise browser to get a better handle on the use of unsanctioned AI in your enterprise. You can find them at island.io, and big thanks to them for being this week's Risky Business sponsor. But that is it for this week's show. I do hope you enjoyed it. It was a fun one this week. And yeah, I'll be back in a couple of days with more security news and analysis. But until then, I've been Patrick Gray. Thanks for listening.
This episode dives deep into an exceptionally turbulent week in cybersecurity, focusing on a series of aggressive supply chain attacks by the threat group "Team PCP," which led to compromises in key projects like LiteLLM and prominent security scanners (Trivy, Checkmarx tools). Discussions extend to the fast-evolving risks and realities of AI-assisted attack tooling, cloud controls in enterprise IT, high-profile takedowns, privacy breaches, and a wild series of "skateboarding dog" stories—all delivered with the show's signature blend of banter and industry insight.
Guest: Braden Rogers, Island
| Segment | Topic | Timestamps (MM:SS) |
|---------|-------|--------------------|
| Team PCP supply chain attacks | Trivy, Checkmarx, LiteLLM compromised | 01:35–05:53 |
| Blockchain C2 via Internet Computer Protocol | Novel threat TTP | 04:53–05:45 |
| AI agent risk (OpenClaw, Claude) | Security & endpoints | 05:53–10:46 |
| Lockheed breach claims / Intune attack | Supply chain/Fake news | 12:33–16:19 |
| iOS exploit kits (Karuna, DarkSword) | Exploit commoditization | 18:41–21:36 |
| Apple silent security updates | Secure, quick patches | 22:57–24:49 |
| Russian mobile internet blackout | Geopolitics | 25:01–26:40 |
| Moxie, Meta, AI crypto | Privacy in LLMs | 26:40–29:56 |
| Law enforcement tip platform breach | Privacy, de-anonymization | 31:48–34:06 |
| Botnet disruption/Krebs | Defensive wins | 34:06–36:46 |
| US router import ban | Policy, protectionism | 36:46–39:59 |
| Letters of marque, Google disruption unit | Policy | 40:06–42:02 |
| Supermicro Nvidia smuggling | Espionage, hardware | 42:02–45:00 |
| “Skateboarding dog” round-up | Light/humor news | 45:00–47:37 |
| Sponsor: Island browser | Controlling AI shadow IT | 49:28–63:08 |
This week’s Risky Business offers a vivid snapshot of a threat landscape defined by rampant supply chain risk, AI-powered mayhem, and a world where security, privacy, and trust boundaries are rapidly shifting—often at the whims of users, attackers, and regulators. The episode delivers a blend of practical advice, technical curiosity, and sardonic observation: essential listening for keeping pace in today’s cyber domain.