
B
And welcome to another edition of Risky Business. My name's Patrick Gray. Adam Boileau and James Wilson will join me in a moment to have a chat through the week's security news. And of course, the big news this week is that Anthropic is putting its Mythos model into, like, a limited preview because it's too powerful, an 0day discovery machine and whatnot. So we'll be talking through that in just a moment, plus all of the other news over the last week. And then we'll be hearing from this week's sponsor. And this week's show is brought to you by Persona, which is an identity verification company, I guess you would call them. I mean, they started off more in that sort of KYC space. But these days, as you'll hear when we are joined by Benjamin Chait, who works in product over at Persona, that sort of technology is actually very useful in the enterprise, particularly with, like, remote workers who you may have never met IRL. Like, making sure they're not North Koreans, maybe, kind of useful, or making sure that the person you hired is the person who's still doing the work. So that is this week's sponsor interview coming up later. And for a first in Risky Business history, it's not actually me who did the sponsor interview. James Wilson filed that one, because I'm on light duties this week, because it is school holidays in my state. So yeah, that's one for the history books. All right, let's get into the news now. And yeah, as I mentioned in the intro there, Anthropic is releasing its so-called Mythos model, but it's not releasing it to the public just yet because, as per Anthropic, oh my God, it's too powerful. The world cannot contain its power. Adam, walk us through the rough shape of this story, if you would, please.
A
So we've been seeing LLMs getting better and better at interpreting code and reasoning about it, you know, over the last couple of years. And this, Anthropic says, is kind of their work at building a model that is specifically well suited to thinking and reasoning about code. And they have found that it is very good at doing specifically cybersecurity things: understanding code, finding bugs, reasoning about them, writing exploits, or at least proving that the bugs are real things. They appear to have hooked this Mythos LLM up to a whole bunch of code bases out of the open source world and let it loose. And it's found a heap ton of bugs so far. They are documenting a few of them in this particular blog post talking about it. They say there's plenty more, like thousands, to come. They've managed to put some hashes in here of things they're going to release in the future once they've been through the patching and disclosure process. But this thing, per their write up, is terrifying and amazing and as good as high end humans at finding bugs. And I guess this is not unexpected given the trajectory that we are on, but it feels like pretty interesting times.
B
Yeah, I mean, it feels real, right, I guess is what you're saying there. And the fact that we woke up this morning here in Australia and New Zealand and it's all over the New York Times and Wired and here and there... kudos to Anthropic's PR people, because they've certainly managed to make a splash with this as, like, a cybersecurity story. You know, it's been an interesting few weeks for this stuff, right? Because, as you say, we're seeing more and more LLMs being used to do this stuff. I'm seeing a lot of cope on social media, frankly, from security researchers. I'm seeing them say, oh well, you know, there's going to be a very limited pool of one-shottable bugs that you could just ask the model to produce. And I'm thinking, okay, that might be true, but then, you know, what happens when the next model comes out? Right? So, I mean, even Anthropic says that, you know, keep in mind that today's best model is tomorrow's worst model, eventually. Right? So you do sort of wonder how this is going to shake out. I kind of feel like, and I want to get your opinion on this, James, I kind of feel like the job for vulnerability researchers in the future might be leveraging these models into finding stuff that is not one-shottable until the next model. Right? So that's going to be the job: actually trying to stay ahead with the models that are available now before the next one comes along and, like, automagically discovers those sorts of bugs. I kind of feel like that might be where we're heading. What do you think?
C
Possibly. It's interesting. You know, the concept of prompt engineering seems to be all but gone, right? If you were to say the goal of the human now is to work out how to craft the prompt to stay one step ahead of these one-shot finds, I don't think they even need to do that. You know, the prompts that they're sending to Mythos are things like, please find a security vulnerability in this program. So, you know, what do you do? Append, and I'd like to stay ahead of the next model, please? I just don't really know what you can do to stay ahead, other than just really be on the forefront of adopting this, pointing it at as many repos as you can that you're concerned about. But it still hasn't solved the problem of how the defenders actually go about triaging this and making meaningful changes to their repos. And we've also got to remember, pointing a model at hundreds and thousands of repos and producing heaps and heaps of bugs and getting those all fixed is wonderful, but the number of times I've seen a simple bug fix turn into yet another exploit, like, you know, it's a common thing that happens. So all of a sudden there's going to be a massive perturbation throughout the open source community of all these fixes landing, and that is probably going to give rise to the next wave of bugs anyway. So it might not be that the next model has to come along, Pat. It might be just that you throw the model at all these fixes that are getting generated in rapid time, and that's where you'll find your next set of exploits.
B
I mean, maybe, but I would point to a tweet actually from FFmpeg, and we covered this months ago, like last year sometime, that FFmpeg were getting really salty when people kept submitting bugs to them without patches. And they've posted: thank you to Anthropic AI for sending FFmpeg patches. Now, I did talk about the coverage in Wired and the New York Times, and this coverage is looking at the fact that Anthropic has launched this thing called Project Glasswing, where they're giving preview access to this model to all of the major vendors, right? Some security vendors as well, like CrowdStrike and whatnot. But, you know, Microsoft, Apple, whatnot. The idea is they're going to get a head start on finding bugs. But again, I do feel like this means that every time they update their model, we're going to have to wind up with something similar, right? Because otherwise it's going to be like a mini catastrophe every time they do a point release, which is probably not an ideal way for things to go. But I just want to mention one thing there, James, which is you spoke about the hype factor here. This is something you've got to really keep in mind with Anthropic: Anthropic decided early on that its brand is very much around safety, right? So they do this whole thing about, like, oh no, you know, AI is big and scary and can cause so many societal harms, and this is why you really need to regulate strongly to prevent people from using Chinese models, right? Like, this seems to be a big part of their brand, and it is sort of self-serving, where they want to ban, I guess, models that have been distilled out of theirs and whatever. It kind of makes sense. But look, another factor in all of this is, as we'll see from the news this week, it's not like we're short on bugs already, right? Like, Adam, do you think an infinite 0day machine really actually changes much? I mean, I think it does, but maybe not in the way people are expecting.
A
I mean, it definitely changes some things. And like, I mean, I'm thinking about, you know, all the people I've worked with over the years who have been very good bug hunters, the sorts of work that we did. Like, I think when we first saw LLMs and AI models starting to rise over the last couple of years, some of the people that felt particularly victimized were artists and creatives, people who felt like all of a sudden anyone could generate art and post it on social media, and they felt like their work was being undermined and undervalued, and that the human touch in art was really important. And I think the funny thing for being a vuln hunter is the human touch doesn't matter here. What matters is the shell out the other end. Like, the artists have something to point to, like, you need us because we bring the humanity.
B
Exploits don't need to have soul and suffering, right?
A
They don't need to have suffering. Like Nick Cave said, when people were having AI write fake Nick Cave songs, he's like, yeah, but where's the suffering? That's an important part of goth music. We don't have that in the exploit dev world. And so AI is actually going to be pretty good at replacing us. And that's a crazy time. And, you know, the Anthropic blog post has a line, what was it? Ultimately, it's about to become very difficult for the security community: after navigating the transition to the Internet in the early 2000s, we've spent the last 20 years in a relatively stable security equilibrium. It's like, excuse me, did we work in the same industry? It's been bonkers since the Internet. And it's going to continue to be utterly bonkers. And like, as you and I have said on the show before, we are here for that. That's what we're here for. That's what this podcast is all about, is the bonkersness. And so, in that respect, it's just going to be fun, it's going to be wild, it's going to be messy. And, you know, even if they keep this model secret for a little bit, what's that going to buy us? Like, three months of Microsoft getting preview access?
B
Okay, okay, so what's that going to do? So let's take a little tour, shall we, of this week's horror show bugs, and explain, I guess, why Anthropic's preview... you know, are they giving this technology in preview to F5? Because there is apparently a critical flaw in their BIG-IP stuff which is facing widespread exploitation risk, according to Cybersecurity Dive, this piece here. There's people out there still exploiting React2Shell, according to Cisco; this has turned into a bit of a drama, and, you know, there's a whole bunch of stuff getting stolen with that one. And a critical flaw in FortiClient EMS is under exploitation, Adam. I mean, so I sort of think, okay, we're already in trouble with these sorts of bugs. When Mythos goes GA, what happens to Fortinet, right? Like, what's your feeling there, Adam? I'll get your feeling on that first.
A
Well, I guess step one, they can fire all three of their QA staff, because clearly they don't have very many, and replace them with an LLM, and that will hopefully produce much better results. But the hard part, I think, is hooking up test environments and test harnesses to the LLMs. If you're Fortinet, you have to get an LLM into a position where it can introspect your products and test them, and for some people that's going to be more difficult than for others. And I think some vendors, obviously, Fortinet should run all of their products through Anthropic's stuff, and hopefully that will produce a significant improvement in their things. And maybe this is what Fortinet needs. Whether everybody, Cisco, and what else do we have, like, Progress Software? All the people, F5, who are in our list of terrible vendors this week, are they going to be able to survive this much change? Like, organizational change. The tech side of it, like throwing AI at your products, probably good. But can they be like, what's Dan Guido's, Trail of Bits, reorganizing their whole company around AI? Can you see a Fortinet doing that? Probably not, right?
B
Yeah. So I want to bring you into this part of it now, James, because I had a really interesting conversation with Adam Pointon the other day, CEO of Knocknoc. And one of the things that they've done is they have worked on getting their whole code base into a state where it's Claude-friendly. The way Adam described it, I thought, actually was really interesting. He's like, you know, this product has been around for a while, a bunch of people have contributed to it, and it's sort of like an apartment block: every apartment has a different style and, you know, has been decorated according to different tastes. And so if you throw an AI agent at it and say, hey, you know, we would like to build another wing onto this apartment building, it gets a little bit confused, because it doesn't know what style to use and whatever. So you sort of have to refactor your code base a little bit and get it in the right shape. But it feels like that's what everyone's going to have to do now: work towards being an AI-first code generation shop. Right? And, you know, you're the one with the experience in software engineering, James. What do you think of that idea, that now the job is about figuring out how to make your code base friendly for Claude to work on? Because that's, like, a sign of success for Anthropic. They've done well.
C
Yeah, absolutely. And, you know, it even challenges some of the just fundamental, I guess, paradigms we were operating in in software engineering. You know, for the longest time, there was this debate of, do we go monorepo or do we have lots of repos? And the argument was, well, a monorepo's got all the code in one place, it's just easier, but that repo gets massive. And so people would break it out into, you know, here's the repo for our iOS app, and here's different repos for our web services and whatever else. And that made it easier for a human to reason with and to sort of keep separation of concerns. But certainly when you've got Claude coming along and you want Claude to reason with your entire software suite, you kind of have to go back to putting all of these things into one repo. And actually, I don't think it takes a lot of human work these days to think about how to structure it. Like, as long as you get all the software in one place as the starting point, and then prompt Claude and say, okay, this is a long-standing product, here's the general run of the mill of what it's got, how would you structure this to be more accessible? And then write an AGENTS.md along the way, in partnership with the human. I think you can, you know, probably get a long way towards that. And that is, I mean, that is literally table stakes at the moment for any business that wants to remain relevant in this emerging landscape.
B
Well, I mean, I can't think of a bigger change in software development like ever. Like it makes like the conversation around something like, oh, DevOps, should we or shouldn't we? DevOps versus Waterfall. I mean, that was the big topic for a while and now you just look at that and you're like, how is this even a conversation?
C
Yeah, I haven't thought about DevOps for a long time. And I'm still amazed that, for someone that loves writing code, I haven't written a line of code since probably November last year. I was amused last night: I'm popping open a code editor just to compare two .env files to make sure the secrets are there. And that's the only time I'm opening a code editor anymore. Wow.
B
Yeah, yeah. And you're working on some stuff for Risky Business as well, which could be a lot of fun. We'll stay mum on that for now. But yeah, it's going to be cool. Now look, you know those bugs I mentioned before? They're not the only ones. We've also got bugs in something called Progress ShareFile. Is that a file transfer appliance, Adam?
A
It most certainly is. And actually this one is a, like, cloud control plane, but you can store your data on-prem kind of situation. And this bug is super funny. Like, you hit the admin page and it redirects you off to the login page if you're not logged in, but it still renders and runs the page. So, like, the page is still there; your browser just redirected you away before you could see it. Which is the kind of thing that happened in PHP apps in the early 2000s, and they've managed to reinvent that bug class on modern .NET. And yeah, literally you just go to an unauthed admin page and the forms are there, and you can fill them in, edit the config, and onwards to... I think watchTowr worked that up into code exec. So yeah, it's just a very funny throwback-from-the-old-days kind of bug. So, yeah, fun.
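For anyone who wants the shape of that bug class, here's a minimal sketch in Python (purely illustrative, not the actual ShareFile code): the handler queues up the redirect but never stops executing, so the privileged page body still gets written into the response. A browser obediently follows the 302 to the login page, but anything that ignores the redirect reads the admin page just fine.

```python
def handle_admin(session_cookie):
    """Sketch of the redirect-without-halt bug class (hypothetical handler)."""
    status, headers = 200, {}
    if session_cookie is None:
        # Bug: set up the redirect to /login...
        status, headers = 302, {"Location": "/login"}
        # ...but fall through instead of returning early, so the
        # privileged page below is still rendered and executed.
    body = "<h1>Admin console</h1><form action='/config'>...</form>"
    return status, headers, body

# A browser follows the 302 and shows /login, but the admin page
# body travels along with the redirect response anyway.
status, headers, body = handle_admin(None)
```

The fix, of course, is halting the request as soon as the auth check fails, which is what a framework's authorization filter is supposed to do for you before the page ever executes.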
B
Yeah, let's see if Cl0p goes nuts with that one. What else have we got here? We've got CISA warning government agencies that they need to patch a bug that is being exploited by Chinese intelligence services, and they've got two weeks to fix it. This is in software that I'd never heard of. What is a TrueConf? What is TrueConf? I've never heard of it, Adam.
A
Yeah, so it's like an audio/video conferencing kind of platform thing, so think, like, Zoom, I suppose. The funny thing is it's kind of not really so much a bug as a sort of missed feature, where there's, like, client software that the server will drop on you, and if you're an admin, you can upload updated versions of the client software to this thing, which will then push down to end users. And so this is the Chinese getting access, probably through cred stealing, to these servers, putting backdoored updates on them, and then initiating video conferences with people, which will push down the modified client to them and then shell them. So it's quite a, you know, quite a fun campaign in that respect. I don't know how widely used this is, but CISA seems to think that it is being used by government organizations across Europe and Asia and America, obviously.
B
Yeah, I mean, this is what struck me about it as well, which is, like, oh, here is CISA warning government agencies to urgently stop using this piece of obscure teleconferencing software that I've never heard of. Then again, you know, someone might pop out of the woodwork and say, obscure? It's everywhere. But yeah. Anyway, we've also got some Rowhammer news, where someone's figured out how to apply Rowhammer against, like, Nvidia GPUs. Now, Nvidia GPUs are a big thing in the news, obviously, at the moment, Adam. But is this research actually interesting? Is it novel?
A
It's actually pretty interesting, I thought. I mean, obviously Rowhammer has developed a lot over the years. There's been a lot of interesting techniques and different types of memory and work that you have to do, and a lot of it honestly isn't super exciting for us to talk about. This one, there's three separate sets of research that all basically did the same thing, which is to Rowhammer modern Nvidia GPUs. Some of the researchers leveraged that to bit flip the contents of memory on the GPU and degrade the performance of LLMs or other AI models running on the GPU, which is not very exciting. A couple of the researchers leveraged it to modify memory page tables in the GPU and then turn that into an arbitrary memory write. And one set of researchers turned that into manipulating the Nvidia driver back over in CPU memory, and then got code exec from that. And so you can go into the GPU and then across from there, back over into the driver, and pop out in a privileged context in the kernel. So, like, privilege escalation via the GPU, even in the case where the IOMMU controls are turned on, which is, you know, kind of the best case security setup at the moment. So yeah, that's pretty cool research, actually weaponizing it like that. So yeah, pretty cool.
B
Yeah. Nice. Do that, Claude. I mean, it probably will, right? So I shouldn't joke about that. Now, James, let's have a chat about the postmortem on the Axios supply chain compromise. We talked about that last week. Jason Seymon, who maintains Axios, has published a write up, a bit of a postmortem there, and Zack Whittaker wrote it up for TechCrunch. The gist here, I think, is that the North Koreans did some very on point social engineering, which enabled them to do this. But when you read this, you're thinking, geez, it didn't take much, right, to do something so high profile. So walk us through this, if you would.
C
Yeah, I remember when we talked about it last week, I said, you know, we weren't sure what happened, and I think you said it was probably just, you know, they managed to get the credentials out of the browser. But it was kind of more interesting, and also kind of sad, the way it ended up. But yes, I think this is yet another data point of the North Koreans getting more and more crafty and actually taking the time to really plan this out. This one was a couple of weeks in the making, essentially. You know, they put together a very real looking Slack workspace and invited him to come along and join it. Very, very convincing. They scheduled a meeting to connect; this was basically people that were going to help and contribute to the project and the effort. But the thing that got me is that, yes, they took the time to build up rapport, and they took the time to build up trust, and they took the time to look very convincing. Then when he joined the meeting, it popped up an error and said, oh, you need to update something. Don't worry, here's the update. Can you just go and install that? And he went and installed it. And it's just like, buddy... like, I don't want to throw too much shade on the guy. But why, why would you have just taken that link?
B
I mean, I think it's because, as you say, they made the whole thing very believable. And I think when you're in that situation, you're not going to be thinking, all right, so this whole thing is just a ruse, is it, to get me to run this executable? You're not going to be thinking that, right? And I think that's the whole point of spending weeks on faking websites and Slack workspaces and all of these conversations over a long period: so that you don't have that suspicion. It's not like this was someone they met five minutes ago in a forum, you know what I mean? So I think we've got to be careful not to throw shade at people on this. I think it's the same with a lot of these people who get victimized by scammers. I remember at one point, like 10 years ago, there was evidence that there were psychologists working with some of these criminals to help build the scenarios that were most likely to get people to respond. That said, I can't see myself doing that, but I'm a professionally paranoid person. This guy isn't, you know what I mean? So, I don't know, cut him some slack, I guess, is what I'm saying.
C
As I said, I'm trying not to throw too much shade, because I do feel for the guy. But maybe I should frame this better and say, look, it is fascinating that, despite the fact that the whole concept of a ClickFix attack, the install-a-fix attack, is a known vector, you almost wonder, did they even have to go to this much effort to be that convincing, when someone is still willing to just install an update on the fly when they can't join a meeting? Right? There's something about that concept of, I joined a meeting, there's a problem, and they gave me an update to install, that should be ringing bells. I've heard about this before; this is a relatively known concept. And yet people still fall for it. So maybe we're just too close to it and I'm being unfair.
B
Maybe, maybe. Who are we to judge, et cetera. Moving on, though, and staying with the pesky North Koreans: they pinched $280 million from a DeFi platform called Drift. Is there anything particularly novel about this one, apart from the fact that it's $280 million?
A
I think the novel thing here was, like, the previous one James was talking about, that was a couple of weeks of prep. This one was months. Like, they went to conferences with this Drift platform's people. They met them IRL, they shook hands, they made some deals, they invested, like, a million bucks of their own money, the North Koreans' money, into building rapport with these guys, to get to the point where they could compromise enough systems to bypass the, like, multi-signature controls and eventually loot the entire platform. So it kind of raised the level...
B
This is the one where they got the multi-signature approvals done months ago and then waited for a chance to use them, right? Yeah, I had read about this one.
A
Yeah. So, like, it was pretty good work. And I think, you know, the amount of work that North Korea is willing to put in clearly suggests they're making enough money out of this for it to be worthwhile. So, like, months' worth of prep, that's a pretty high bar, and, you know, people are going to fall for it, as clearly happened here and in the previous one.
B
James, you had some thoughts on this one. You said there's some interesting tradecraft here.
C
Yeah, it's like a one-liner in this article, but it really stood out to me, obviously because of my knowledge of the Apple ecosystem. Two of the attack vectors are kind of interesting. The one that really stood out to me was they deployed TestFlight apps to the victims. Now, TestFlight, of course, is a way for developers to get unfinished apps that aren't yet in the App Store into the hands of people that are testing them. The other bit that was just kind of amazing is they put hooks in the repos so that if you even point your VS Code editor at it and say, yes, I trust this directory, which is something we all do whenever we open something in VS Code, there's a ton of scripts that actually run with zero user interaction at all. And that's how they deployed some of that malware in other cases. So they're really stepping up their game.
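The mechanism James is describing is most likely VS Code's automatic task feature: a repo can ship a `.vscode/tasks.json` whose task is configured to run on folder open, and once you mark the folder as trusted (and depending on the `task.allowAutomaticTasks` setting), the task executes with no further prompt. A minimal illustration of the shape of such a file, with a hypothetical benign command standing in for the payload:

```jsonc
// .vscode/tasks.json -- tasks.json is JSON-with-comments (JSONC)
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "bootstrap",
      "type": "shell",
      // This command runs as soon as the trusted folder is opened.
      // A real attack would put a downloader or stager here.
      "command": "echo 'ran with zero clicks beyond folder trust'",
      "runOptions": { "runOn": "folderOpen" }
    }
  ]
}
```

Workspace Trust is the only gate in front of this, which is exactly why bundling the trigger with a repo you've been socially engineered into trusting works so well.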
B
And speaking of, you actually have a podcast going out today in the Risky Business Features channel. For those who are not yet subscribed, just load up your podcatcher and look for Risky Business Features, which is James's channel. You've done an interview with Geoff White, who is the guy who put together the Lazarus Heist podcast. He's got another season of that coming, and you did a big deep dive with him on the entire ecosystem around North Korean fake IT workers and whatnot.
C
Yeah, I've always been sort of intrigued as to what's beyond the headlines of, North Korean IT worker was found and evicted, or, laptop farm was found and torn down. You know, what actually goes on behind the scenes? How did they get to the point of even assembling a laptop farm? How do they produce enough credentials to get a job? How do they build up credibility to move from a small-time job to a medium-time job to a big-time job at an enterprise? And so Geoff and I sat down. I think the podcast episode's about an hour. It's a thorough deep dive and an end-to-end look at everything from the moment when they apply for the job, how they get the job. But then what's really interesting is they don't just show up to work and wreck things. They work really well. And it seems to be a farm of people delegating work out there as one person. But interestingly, there's a lot of triage that goes on. It really does seem to be a well-oiled machine, where if there's a target that seems like it's got interesting IP, that gets moved to a different department and they handle it differently, and you might suddenly get the A players behind the scenes working on it. And then it culminates in, how does the money get to Pyongyang? And actually, the most interesting thing is a lot of it doesn't need to get to Pyongyang to be super valuable for the regime. It was an incredible chat, it was so much fun.
B
All right, people can check that one out in Risky Business Features. Meanwhile, CISA's century of humiliation continues. Apparently the 2027 financial year budget looks to be cutting, like, hundreds of millions of dollars out of their budget. Yeah, they're going to reduce CISA's budget by $707 million. That's like 30% of its budget. It looks like a whole bunch of stuff is going to get chopped, according to this piece from Cybersecurity Dive, written by Eric Geller: their vuln scans, field support, like, a whole bunch of stuff is getting chopped. I mean, you really do get the impression that there are two reasons this is happening. First of all, CISA did disinformation takedown coordination with a bunch of the social media platforms, which was, you know, controversial among MAGA types. And I guess the other reason is because Chris Krebs said that the 2020 election was secure, and Donald Trump really didn't like that. So you really get the impression they're trying to make it as dead as possible. I mean, Trump's only been in power, like, just over a year in this term, so you get the feeling that by the time he's done with CISA a few years from now, they're going to be a shell of what they were. I mean, James, what are your thoughts here?
C
Yeah, exactly. And you know what's the most just galling about this? The capacity that they're cutting is exactly what they need right now. It's the scanning, it's the relationships with industry. Like, I don't know, it's just a massive own goal. But I agree. Like, there won't even be a CISA to talk about, I would imagine, before long.
B
Yeah, I mean, and we've seen, like, the FBI doing some CISA-related stuff as well lately. So it really does feel like they're trying to move some stuff away from CISA and onto other agencies while cutting their funding. And just, like... anyway.
C
Also, they said here that they're going to move it to the states, and the states have just all thrown their hands in the air and said, with what resources?
B
Yeah, yeah. So depressing for those who are left at CISA. And as you pointed out, James, it is the case that the sort of stuff that they're cutting is what they need right now. Like, if we look at this next story from NBC News, apparently Iranian hackers are doing their best to break into US industrial control systems, and, you know, this is a warning from federal agencies, and they're cutting CISA as this happens. This is also happening at the same time that the FBI has just labelled China's hack of their, like, CALEA system, again, as a quote unquote major cyber incident, which means there's some sort of material impact or threat to life or, you know, demonstrable harm. So all of this is happening and they're cutting back CISA. I mean, Adam, thoughts?
A
Yeah, I mean, it makes no sense whatsoever. And as soon as the Trump presidency is over and whoever comes in and replaces him, the first thing they're going to do is build something to replace the functions that they're just cutting out of CISA. This stuff is absolutely necessary, and it's just so pointless to throw it all out and then start again for no reason other than partisanship. So yeah, it's frustrating, and, like, as James said, this is what they need right now, and it's the exact thing they're throwing out, and it's so dumb. But I guess that's where we are, right?
B
Yeah, yeah. Now, Brian Krebs has got a report on this one. Apparently Russia's military intelligence is, like, hacking home routers, fiddling with the DNS entries and sending people to fake, like, Microsoft login pages, which is generating a zillion cert warnings, and just hoping people click through them so that they can capture a whole bunch of creds. I guess my question, Adam, is: but why?
A
I mean, it is a very good question, and I don't know that the "but why" is particularly well answered here. Like, they really do just seem to be putting themselves in the middle and stealing auth tokens to Microsoft services for we don't know exactly what. And like, the idea that somewhere in Russian military intelligence there is a group of people whose job is breaking into residential TP-Links and changing the DNS settings... at a time when Russia is at war with Ukraine and there's all this stuff going on in the world, and your contribution to it is compromising someone's TP-Link on their cable modem in Florida somewhere? That doesn't feel particularly satisfying. We don't know exactly why they're doing it. The same group at the GRU were doing slightly more stealthy things with their compromises of lots of TP-Links and MikroTiks and other routers like that. They got called out by the NCSC, and they've kind of pivoted now to more overt hacking of Microsoft infrastructure. But yes, to answer your question, I don't know why. I don't know what they're doing. Who knows?
B
Yeah, yeah, who can tell? Meanwhile, there's this piece here from the Record, which on the surface of it is not that interesting, but then you read it and it is. Kudos to Jonathan Greig, who wrote this one up for the Record. There's a hospital in Massachusetts that turned ambulances away in the midst of a cyber attack, and it looks like they kind of got it under control. But they interviewed this guy, Errol Weiss, who's the chief security officer at Health-ISAC, about basically what's going on in healthcare with attacks and whatnot. And he said what they're seeing is a sustained high level of malicious activity targeting the healthcare sector. There's been a bunch of incidents, but in many cases organizations were able to contain the activity before it reached the level of a major public outage, which is why you haven't seen them disclosed: these incidents are still being worked through with law enforcement, and regulators aren't in a position to share specifics or name organizations. Now, I'm going to infer a couple of things from that. I'm going to infer that perhaps, given the disruption to ransomware-as-a-service operations and whatnot, the attackers these days have got a little worse and the defenders have maybe got a little bit better. That's what I take away from this. Adam, what's your take on this story?
A
I wonder, reading this, whether some of the outages we saw in the past were because of overzealous responses. Like, how many times when you see a cybersecurity event that's led to a bunch of interruption to service is it because the company had to turn everything off, because they didn't have a plan for how to deal with it in a more controlled manner? And I wonder if part of the reason these disruptions or incidents are smaller and aren't necessarily being publicly disclosed is that they've gotten better at coping with them without going scorched earth: turning off the Internet, turning off the network, unplugging everything because we didn't understand. So if that's the case, then that's also a good improvement. It may be that the attackers are constrained by some of the disruption that's been happening over the last couple of years, but I feel like we are probably getting better at more managed, more controlled, probably more playbook-driven responses to bad stuff happening.
B
Yeah, I mean, I get that impression too. Defenders a little better, a little bit more organized; attackers maybe the opposite. Now look, Discombobulator, eat your heart out, because we've got a new Trump-disclosed mystery device to talk about, via the very reputable masthead of the New York Post. And we've got to talk about this one because it is just too interesting. And what I love about Donald Trump is that occasionally he just gets up and starts talking about stuff that other presidents absolutely would not have. And we've got all these sources apparently talking to the New York Post about a device called Ghost Murmur, which they used to find the heartbeat of a downed, you know, F-15 pilot or whatever in the desert of Iran by, I don't know, detecting his heartbeat electromagnetically with some sort of quantum device. Now, could this be someone making stuff up and telling New York Post people about it after Trump spoke about a mystery device? It absolutely could. Could it be real? It also absolutely could be real. So that's why I just thought we have to talk about it, because it's right up there with the discombobulator. What's your feeling on this one, James?
C
Yeah, I loved reading this one. Just some of the details: it's, you know, specifically built sensors built around microscopic defects in synthetic diamonds. Like, oh, okay, wow. But also it's such a weird story, because it sounds fanciful, as you said, and the article kind of gives a blueprint to anyone that wants to defend against this working in future. It basically spells out that this is super sensitive electromagnetic sensing technology that was able to pick up a heartbeat, and it works really well in the desert for two reasons: it's an area without a lot of electromagnetic interference, and the heat signature difference is, you know, quite wild, so it gives you a very good secondary indicator. So you can bet that the IRGC is now readying a whole bunch of electromagnetic interference things to go and deploy whenever there's a rescue mission going on. If it's true, it's a wonderful use of technology. But, you know, think of an 0day being burnt. This is a fantastic technology, if it's real, that's just been burnt, because it's exposed its fundamental weakness.
B
And who knows, like, this could just be CIA disinformation, which is one thing. Apparently, where one of these pilots went down, there's some talk that some OSINT people kind of got it wrong and thought the person went down in this particular area, and then, like, CIA-controlled accounts were like, yeah, totally, that's where the guy is. So there's a lot of fun disinformation happening, I guess, throughout this whole thing. Anyway, fun stuff. All right, so we're going to wrap it up with this one. You found this one. It's from 404 Media, and you wanted to talk through it because it is extremely funny. There is a secure chat app, and Joseph Cox has put the headline on it thusly: a secure chat app's encryption is so bad it is, quote unquote, meaningless. And you've looked at this and you're like, yeah, wow, this is really something.
A
Yeah, this is some proper comedy here. So there's this app called TeleGuard, which comes from a Swiss, you know, kind of privacy-centric technology provider with the very reputable name of Swisscows. And they have built this end-to-end encrypted messaging app. It's kind of like Signal: when you install it, you set up an account and it generates some crypto keys to then use to end-to-end encrypt your data. Someone looked into it and discovered that in the process of, you know, signing up your account, it generates a private key on your device and then sends that private key to the server, to the TeleGuard centralized service, sort of toy-encrypted, like, not properly encrypted. Sends them the private key! And then when other people want to chat with you, there's some kind of API that lets you get the public key for the people you want to talk to, and so on and so forth. But the server has the private keys, and you can just kind of ask the server for those private keys, and at that point you, as well as the operator of the service, can both just decrypt other people's messages if you're on the wire between them. The whole point of end-to-end crypto is somewhat defeated by sending the private key to the server to start with, but then they also have bugs that just expose those private keys to everybody. It's deeply, deeply funny. The 404 Media folks talked to Dan Guido over at Trail of Bits, and they, you know, had a look at TeleGuard, and Dan sent back a very fine meme, which 404 Media have now been running with as the artwork for the story. And yeah, if you want a brief smile in these otherwise very bleak times, the story is definitely worth a read, because the details of the fail are a most entertaining and rewarding way to spend your lunchtime.
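To make the failure concrete, here is a toy Diffie-Hellman sketch in Python. This is illustrative only: it is not TeleGuard's actual protocol, and the parameters are deliberately tiny and unsafe. The point it shows is the one from the discussion above: end-to-end encryption only means something if private keys never leave each device; the moment the server holds a copy of a private key, it can derive the exact same shared secret as either endpoint.

```python
# Toy Diffie-Hellman sketch: why uploading the private key defeats E2E.
# Illustrative only -- not TeleGuard's real protocol, and these toy
# parameters (a small Mersenne prime modulus) are far too weak for
# real-world use.
import secrets

P = 2**127 - 1  # a known prime, fine for demonstration arithmetic
G = 3

def keygen():
    """Generate a keypair; in a sane design, priv never leaves the device."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

alice_priv, alice_pub = keygen()
bob_priv, bob_pub = keygen()

# Correct end-to-end flow: only the *public* halves cross the server.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)
assert alice_secret == bob_secret  # both ends agree on the session secret

# TeleGuard-style failure: the client uploads its private key at signup,
# so the server (or anyone who can coax keys out of its API) derives
# the very same secret and can read everything on the wire.
server_escrowed_priv = alice_priv
server_secret = pow(bob_pub, server_escrowed_priv, P)
assert server_secret == alice_secret  # "end-to-end" in name only
```

The second half is the whole joke: no cryptanalysis is needed, because escrowing the private key turns the "end-to-end" scheme into ordinary server-readable messaging.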
B
All right, guys, we are going to wrap it up there. Big thanks to both of you for joining me to talk through the week's news and yeah, I'll catch you both next week.
A
Yeah, thanks so much. We will see you next week, Pat.
C
Thanks, Pat. See you next week.
B
That was Adam Boileau and James Wilson there with a look at the week's security news. It is time for this week's sponsor interview now, in which James Wilson interviews Benjamin Chait, who works in product at Persona. So Persona is an identity verification company, and you know, this is the sort of company that would do a lot of stuff around KYC, right? So you might need to do a Persona check if you're trying to open a bank account or whatever it may be. There's a few companies doing that sort of stuff these days. But where it gets interesting for our purposes is that these days there's a need in enterprise security programs for similar sorts of technology. So, as you'll hear in this interview, James posits, well, you know, maybe at Stryker it would have been useful to have an identity check before someone could, like, remotely wipe everything via Intune. You know, maybe you want to insert a check there, or, you know, if you're hiring someone for the first time, maybe you want to actually do a bit of an identity check and make sure that the person who does the interview is the same person who turns up for the job. So Benjamin joined James to talk through these more modern and contemporary enterprise use cases for KYC technology, and here's what he had to say. Enjoy.
D
KYC is not a new concept. Many, many businesses have handled some form of government ID and selfie verification for years. I think a couple of things have changed in 2026. One is that we are building tools to connect directly to platforms that companies already deploy. So whether it's their identity and access management platform or their applicant tracking system (we can talk about candidates in a few minutes), we've got those types of integrations. We connect with productivity tools like Microsoft Teams or Slack, so that when someone fails a verification, you're not waiting for that employee to contact the help desk; you get an immediate notification, or your help desk can proactively reach out. We've also got some integrations where we're able to go get the employee's legal name and date of birth from a system of record, because many companies actually aren't putting that information into their IAM. And if you don't have that information in the verification flow, you're limited to: well, was this a legitimate human who seemed to go through the verification? Yes, but that doesn't actually tell you that that human is meant to access this system. And so where we've been building and spending a lot of our time is making sure we can connect to the tools and ecosystems that already exist. And as any of us who've worked in any enterprise know, every company is a little bit different and a little bit unique, so you've got many different combinations and permutations of: here's where we're going to get this name, here's where we're going to get the date of birth, and here's the tool that's going to trigger this or that's going to lock someone out.
C
The plumbing is super complicated, right? In any given enterprise, there's such a fractured set of data between the HR database, the authentication database, the IT systems, the who-knows-where payroll stored that copy of the driver's license that you sent through. So, you know, I think what I'm hearing from you is that the strength of Persona here is that you're not necessarily trying to aggregate this all together, I don't think, but you're connecting into a lot of different surfaces. I think what you're doing there is making "is this person really who they say they are" a check that any point in the system can do, right, so that it can be integrated at whatever points are required. Is that a good way to look at this?
D
I think that's right. The thing I want to push back slightly on is that it's not just about that point-in-time check. So we've got to do all that work and get connected to all the right places in the plumbing, but another place we add value is that you're not doing just one check when you bring on a Benjamin on day one. You're also able to do a check 90 days in, when I get a new phone, or when I'm getting elevated access. And what becomes really interesting or powerful is that we can have consistency of that identity across so many different touch points. We're then able to find out, you know, is it truly the same person? But we're not just looking at the photo that's being submitted; we're also looking at whether this came from a similar device, whether this came from a similar network range. And I think that's where all of those other signals can also be really interesting from a security standpoint, because then you can devise an experience that applies the friction you want at that moment, based on how risky that interaction might be.
C
Right. So, I mean, you know, not to throw further shade on these poor folks, but imagine if Stryker had Persona sitting in front of their Intune instance, and right before that person clicked the button saying, yeah, go wipe those 200,000 devices and 12 petabytes, if only something had actually made sure that person is who they really are. And so you guys can actually integrate into those points in time. And as you say, it's not just about the day-one new employee, it's not just about that initial establishment of identity; you can actually integrate this into really critical step-ups in authentication. But also, I think I heard you say that maybe it's even just a periodic thing: it might be good to make sure that in three months' time I'm still who I say I am. Because for all you know, I was a highly paid actor who showed up for the first three months and collected my paycheck, and then my farm of other folks doing the work showed up from that point on.
D
Well, and I think, yes, that's part of our philosophy: being able to tie in not just at these moments of onboarding or when someone's being interviewed. I really think the value is when that person, as an employee, is interacting with any of your critical systems. But we also work with enterprises that have a once-a-year, you know, you're-going-to-go-through-compliance-training moment. So at that same time they want to send out a verification, or maybe even gate compliance training, kind of combining them together, so that they know Benjamin went through this, and they also have a recency record of that human. And I think that becomes really interesting.
C
Yeah, but you know, along those veins, why don't you tell me about some of the cool things that customers are doing with this, like what does great look like beyond some of those examples that we've just talked about?
D
Yeah, I mean, I think the challenge of describing great is that a lot of this space is so new that there's a wide range of what we see and how we've helped customers. I think the most exciting ones are where they're starting to leverage identity as part of their actual authentication processes; they're not just using it as a one-time check. But I want to go back to those one-time checks, because they do become really critical and really powerful. Suddenly you're taking something where, when we talk with customers, you know, in order to reset someone's password, they're going to get me, they're going to get my manager, and they're going to get it all in a Zoom call, and then my manager is going to assert that, yeah, that looks like and sounds like Benjamin. And I have this personal fear that the tools that bad actors have are getting so good that maybe even after this podcast they're going to be able to create a likeness of me that could fool a lot of folks. And so I think that's where even adding a simple check right before a help desk performs an action can become really impactful and valuable for a company. But again, going back to what does great look like: I think it's employing identity not as an additional burden, but finding a way to weave it into your everyday business processes. So maybe that's in hiring, maybe that's in onboarding. Maybe it's not meant to make compliance training harder; it's just, take a quick selfie, it's still you, and keep on going. Right?
C
Well, that's the bit I wanted to delve into, because I think it's important to say there are different levels of, like, validation here, right? We're not talking about, before I do the compliance training, you're going to make me go to the closet and dig out my passport and go find my birth certificate and whatever else. There are different forms of validation that you can do at different points in time. So am I right that there's a really light-touch way you can do that revalidation at that point in time? How does that work?
D
Absolutely. So when we think about a suite of products, we start with being able to collect just device signals. We can run a background JavaScript that helps us understand: is this a known device? Is this something that we've seen before? It tells us something about the network.
C
So I might not see anything at all. You can just sort of.
D
Exactly, yeah. So there's that lowest-friction end, up to maybe an active collection where we're going to say we need you to take a selfie or take a picture of your government ID. And depending on the riskiness of what a company is trying to protect, we can do, we'll call it a high-friction flow, a high-assurance flow, where we might be asking for, you know, a stronger form of verification, maybe using a digital ID or reading the NFC off a passport. We don't want to do that for everyday use cases because, A, most people don't carry their passport all the time, and B, it just does take a little bit more work, but trying to kind of...
C
And interesting that you mentioned there that you can do this, but is this so much baked into sort of canned ways that Persona operates, or is this a modular thing where actually it's the customer that chooses: for this workflow, I want this, this, and this? Like, who's got the control here over how they model their flows?
D
So I will say we do this; this is not just a capability that we could do, we have this deployed for many of our customers. And what we've built is a platform. It's not just a government ID and selfie; it's a platform where a customer can say, we want to accept documents X, Y, and Z from these geographies, and we want to perform these types of verification checks. So it's very customizable for workforce settings. There's this interesting tension that I think about in cybersecurity: you have usability versus security. When we talk about workforce solutions, we have the complexity that we can infinitely customize our platform, but many of our partners, many of our customers, might not have deep expertise in identity. And so we come to them with: here's our baseline, here are our recommendations, here are the checks that we want to perform. And from that, you can either increase the checks or change them based on your workflow or your tolerances.
C
Oh, that's cool. So you've got basically a catalog of "this is what you should be doing", but ultimately it's up to the customer: if they want to fine-tune it, they can. So they're not starting off with this gigantic, you know, I guess, toolbox and having to assemble everything themselves; there are some ready-made, sort of good-to-go options.
D
Yeah. And I think we've learned that, you know, going back to that KYC example, we've been in business for years, and I think one of the hypotheses that started Persona is that KYC flows vary from institution to institution. No two verifications are necessarily the same, because every business has different requirements. And we see that play out especially in the workforce setting, because you have different user segmentations or user populations: full-time employees, contract employees, vendors. And you also have different geographies, so we're really, really aware of the different compliance and privacy needs that you might have working in Europe versus other locations.
C
Right. And this is probably a good opportunity to take a little bit of a detour, I guess, into the other end of the spectrum where this is useful, which is candidates and candidate management and validation. Now, you know, we were talking before we hit record that you and I have seen this firsthand, right? I've worked in startups where people are calling me for a sort of backchannel reference check on someone that I've never worked with, or never even heard of, but there's the LinkedIn page and they claim they've worked with me. But the particular angle I want to explore here with you is that this is where it feels like there's a bit of a disparity between what's going to be readily available to an enterprise, because they've got the budget, versus where a lot of the actual threat and attack surface exists: in small and medium enterprises and even startups.
A
Right.
C
Because, you know, by and large, they're still the ones that are really openly adopting remote and hybrid. They're the ones that are open to hiring in lots of different geographies around the world. Those are the kinds of things that an enterprise, and maybe I'm being unfair, tends to have a limited appetite for, which gives them sort of built-in safeguards. But talk me through, you know, Persona's role here in that sort of candidate validation, before you've even made the hiring decision, or maybe when you're doing the loop. And is it fair to say that this is more of a small-to-medium enterprise, startup, scale-up sort of problem? How does that sit with you?
D
Well, let me start with what we can do, or what we are deploying for customers in the candidate space. We have a philosophy, again going back to this idea, that we want to help build identity solutions for the entire life cycle. So we recommend to our customers: at some point in your hiring process, probably between the recruiter and hiring manager stages, you probably want to build confidence that you're actually talking with a real human. And so we start with a government ID and selfie verification, just a very simple link that they can send to the candidate's personal email or text to them. And then you start that record, that identity profile, and as that individual comes back, maybe for a panel interview or an on-site, you could choose to do selfie re-verifications, just a quick, simple "is this the same human?" And then when you get to the point of an offer or wanting to run background checks, we can help with background checks through a partnership we have. And then, say you offer someone the job and they accept: on day one, now we're granting them access to the actual tools and services you have internally. So that's where we bridge the gap from candidate Benjamin to employee Benjamin. We can simply take that same profile in our system and maintain that record of verifications, so we know that on day one it's the same human that you started talking to two or three months ago.
C
All right. Well, Benjamin, listen, this has been a great, great chat. I understand, I think, so much better now what you guys are doing. And I want to thank you for taking the time to have a chat with me today.
D
Absolutely. Thanks so much. And nice to hang out and take care.
B
That was James Wilson interviewing Benjamin Chait from Persona there. Big thanks to both of them for that, and thanks to Persona for sponsoring this week's episode of Risky Business. And that is it for this week's show. I do hope you enjoyed it. We'll be back next week with more security news and analysis. But until then, I've been Patrick Gray. Thanks for listening.
April 8, 2026 | Host: Patrick Gray | Co-hosts: Adam Boileau & James Wilson
This week’s Risky Business dives deep into the cybersecurity bombshell of the week: Anthropic’s unveiling of its Mythos AI model, an LLM built for unparalleled code analysis and vulnerability discovery. The hosts dissect what Mythos means for the security research field, implications for open-source software, and AI’s role in defending (and potentially breaking) modern systems. The episode also rounds up a wild week of critical vulnerabilities, relentless nation-state attackers (with a particularly close look at North Korean tradecraft), US government cyber policy missteps, and some much-needed comic relief at the expense of ill-conceived “secure” messaging apps.
LLMs Are Now Vulnerability Researchers
The End of Human-Centric Bug Hunting?
Hype, Safety, and the Disruption to Security Work
Exploit Dev Has No ‘Soul’: The Field Is Ripe for Automation
What Next for Vendors?
Related — James interviews Geoff White (Lazarus Heist podcaster) on “fake IT workers” infiltrating the Western workforce ([24:48])
James Wilson interviews Benjamin Chait (Persona Product) ([39:30]-[52:49])
Enterprise Use Cases for KYC
Integration Is Critical
Small Business Relevance
A landmark episode marking the arrival of “AI-powered 0day apocalypse” — with Mythos, the security community faces a paradigm shift that may, paradoxically, benefit defenders and attackers in unpredictable ways. Alongside, the relentless pressure from bugs, espionage campaigns, and the (self-inflicted) wounds of cyber bureaucracy proved that things will remain, in the words of the hosts, “utterly bonkers.”