
A
Welcome to Risky Business. My name's Patrick Gray. We've got a great show for you this week. We'll be checking in with Adam Boileau in just a moment to talk through all the week's security news. James Wilson, our newest team member, will also be dropping in during the news segment to talk about Anthropic's new C compiler, the one Claude wrote, which has made a bunch of headlines throughout the week. You know, is it impressive? Is it not so much? Is it somewhere in between? James will drop by with some answers in this week's news segment. And this week's show is brought to you by a very, very new company, ENT AI. Brandon Dixon, a co-founder of ENT, is coming along in this week's sponsor interview to talk generally about, I guess, what the security opportunities in AI are beyond the sort of copilot model. Right? So, you know, if you're thinking wide open sky, what would you do with AI? He's got some thoughts. He's not really sharing exactly what it is they're building just yet, but you will hear more about that on Risky Business in a couple of months. But let's get into the news now, Adam. And first up, man, you know that meme on social media, the "fell for it again" award? I feel like we need to hand some of them out this week, because Microsoft is excited, Adam. Microsoft is very excited. Have you noticed that senior executives at these sorts of companies love to be excited? So we've got a blog post from the chief executive of Microsoft here, and it starts with "I am excited". Satya Nadella is excited to share a couple of updates on two of their core priorities, security and quality. What they're announcing is that Charlie Bell, who was basically the executive vice president for security, responsible for most of the serious, product-related Secure Future Initiative stuff at Microsoft. This is a man with a terrific reputation.
He is taking on a new role focused on engineering quality, reporting directly to the chief executive. And now they are cycling in Hayet Galot, who spent 15 years at Microsoft in senior leadership roles across engineering and sales. They're putting her in to run security, but it certainly looks more like that role is about figuring out how to sell more security products than actually trying to make Azure suck less, which is what Charlie Bell's role seemed to be.
B
Yeah, I mean, Microsoft has such an outsized role when it comes to security in our industry, and so any changes, especially at the top like this, are things that we have to pay attention to. You know, Charlie Bell, as you said, did have a reputation of being a technical, engineering-focused kind of guy, and you got the impression that he understood it. The big question, of course, is what does this mean for the Secure Future Initiative? Is Microsoft going to just do enough to keep the regulators at bay? Are they still going to be serious about it? What does this actually mean for that? The proof will be in the pudding. We don't really know yet. But we have been through Microsoft's boom and bust cycle of taking security seriously or not, you know, a few times now in our career at Risky Biz, and I think being skeptical is well warranted.
A
Yeah. So additionally, Holosek will take on a new role as chief architect for security, reporting to Hayet. Right, so you've now got a chief security architect reporting into an EVP from a sales background. I don't know, but Satya is excited. So, you know, that's the main thing. Yeah, I mean, I've always been skeptical about this Secure Future Initiative, SFI, or, as I may have jokingly referred to it a few times, SFA. I'm not sure if that's an acronym that everybody uses. But yeah, I don't know. I know a few people who've got some proximity to Microsoft who are like, oh no, they've done some amazing stuff. But you see stuff like this and you just think, no, it's not enduring. I mean, it sort of feels like they had a few bad headlines, they had a bad CSRB report that made them look like clowns, so they did what they needed to do to make it look like it was all serious, and now they just get to revert to whatever. You know, that's what it feels like to me.
C
Yeah.
B
And they've done it before. You know, Trustworthy Computing is still an echo of Microsoft's past, and here we are going through the cycle again. I don't know. Like, they have done some things right; it's not entirely without positive movement. I think they put out a press release this week talking about some changes they're bringing into Windows that will introduce mobile-operating-system-style app consenting for Windows, so you'll be able to approve access to devices or resources or files or whatever else. And retrofitting that into a general purpose operating system like Windows is kind of difficult, and if they've done that properly, then that's a good move in the right direction. But it may end up being UAC in a trench coat and not actually being that effective. So there's a lot of difficult problems to solve there, both ecosystem-wide and specific Windows engineering things, and it just needs really strong leadership to do a good job of that. And, you know, yeah, I guess we're gonna find out, right?
A
Yeah, exactly, exactly. Meanwhile, you know, there's heaps of Microsoft news this week, but what's going on with secure boot and very old certificates?
B
So the Secure Boot thing, which was brought in, I wanna say, Windows 8 era, back in the early 2010s. There are certificates involved that are baked into all computer firmware, into the BIOS, that allow it to verify the boot process cryptographically. The CA certificates that were originally minted for that, the ones Microsoft used to sign stuff, are scheduled to expire this year. That's kind of a big deal, but also, practically, kind of not much.
A
I mean, they were issued in 2011, right? And this article sort of points out that if you're on reasonably modern hardware, you can actually update these certificates, and that's not really a biggie. But there is going to be some older hardware out there that won't be able to update these things, and that's going to cause all sorts of fun things to break, right?
B
Yeah, it's not as bad as it sounds. We're not talking CrowdStrike, your-computers-won't-boot kind of thing. We're talking about people who run not-Windows, who aren't getting updates from Microsoft, because updates from Microsoft will have shipped new certificates a while ago, and manufacturers have been shipping new certificates with their devices for the last year or two. But for everybody else that isn't Windows, so Linux users, servers, embedded systems, their boot-up process might be a little more complicated. The other side of this is that Secure Boot reference implementations like the TianoCore reference implementation disable time checking on those certificates anyway, because validating certificates during boot without the Internet is already kind of hard, and relying on time when you might be mid boot-up, or in a machine that's factory fresh and doesn't yet have its BIOS clock set... there's a bunch of reasons why time checking for certificates was never a great plan in embedded systems or at early boot time. So that time check may actually never be a problem for most people, and at worst maybe you have to get a manufacturer update there. So it's not the disaster that it could be. Not the disaster I would love, because I am all about things burning horribly down. But still, crypto is hard, and doing it in the real world is harder. And, you know, I guess Secure Boot is a thing we do all rely on quite a bit.
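The time-check carve-out Adam describes can be sketched like this. This is a toy Python model, not the actual TianoCore code; the function, field names and dates are made up for illustration. The point is just that with the validity-period check disabled, an expired signing certificate still verifies.

```python
from datetime import datetime, timezone

# Toy model of Secure Boot certificate checking (not real firmware code;
# field names and dates are illustrative). With check_time=False, as in
# the reference implementation's early-boot path, expiry doesn't matter.

def verify_cert(cert, now, check_time=True):
    if check_time and not (cert["not_before"] <= now <= cert["not_after"]):
        return False  # expired or not yet valid
    return cert["signature_ok"]  # stand-in for the real signature check

ca_2011 = {
    "not_before": datetime(2011, 6, 27, tzinfo=timezone.utc),
    "not_after": datetime(2026, 6, 27, tzinfo=timezone.utc),
    "signature_ok": True,
}

boot_time = datetime(2027, 1, 1, tzinfo=timezone.utc)  # after expiry
print(verify_cert(ca_2011, boot_time))                    # False: strict check rejects
print(verify_cert(ca_2011, boot_time, check_time=False))  # True: time check skipped
```

So a strict verifier with a working clock rejects the expired CA, while the boot-time path, which skips the date comparison entirely, accepts it, which is why the expiry is less of a practical problem than it sounds.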
A
So I'm having flashbacks to the news cycle for when all of that stuff was first proposed, and the Linux people had a meltdown because they thought this was Microsoft conspiring with hardware manufacturers so that only Windows would be able to boot on modern hardware. Which, no, that's not what they were doing. But it was quite funny having to survive that as a tech journalist at the time. Let's see, what else have we got here? Yeah, we've got a whole bunch of Patch Tuesday stuff that's just dropped, including stuff being exploited in the wild. There's some Office bugs that are being exploited by Russian APTs, and yeah, there's just a whole bunch of horrible stuff. Horrible, horrible. Dirty, unclean. Unclean.
B
Yeah, that's a pretty good roundup of bugs this past Tuesday, and none of the Windows ones are super exciting, but six being exploited in the wild, I guess, is notable. That's slightly up on the previous few rounds we've seen. The one you mentioned, the Microsoft Office bug that's being exploited by Russian hackers in Ukraine, is actually interesting, because it's only a bug in ancient Office, like Office 2016. So unsupported, end-of-life, no-security-updates-available versions of Office, which is already kind of a bad place. Microsoft has patched them anyway, which is interesting. And it's kind of a bypass for the controls that let you turn off embedded OLE objects in Office documents, so you end up with documents that result in code exec. The Russian hacking crews jumped on that very, very quickly, turned it into a working exploit and so on. So if you're running ancient Office, then yes, Microsoft has actually deigned to patch it for you even though they said they wouldn't. So that's nice, I guess.
A
I think, though, I'm having a look at this: if you look at the NIST stuff, it is Office 2016, Office 2019, and Office Long Term Servicing Channel 2021 and Long Term Servicing Channel 2024. But I think it goes from 2016 up to, like, 2019, and then into the you're-paying-megabucks-just-to-get-patches streams, right?
B
Yeah. So I think the nuance there is that for the later versions of Office, Microsoft were able to fix this kind of server side. There was some knob they could twiddle that basically killed it. So in terms of exploitation now it's only, I think, 2016.
A
Okay, right, got it.
B
2019. I don't remember off the top of my head which category that was in, but basically it's people running ancient Office versus everyone else. Microsoft pulled some trick that meant it didn't work without you actually having to patch your stuff.
A
Makes sense. Makes sense. All right, Daryna Antoniuk over at the Record has a write-up on Russia-linked hackers conducting cyber attacks ahead of the Winter Olympics. I mean, we've seen some physical sabotage happening in Italy, and now apparently a whole bunch of cyber attacks targeting things like consulates in Sydney, Toronto and Paris. But, you know, the Italians are like, yeah, we were able to repel these attacks, and whatnot. But you just sort of wonder why Russia bothers with this stuff, right? Because every time they're hacking some Olympic committee or something, it doesn't really achieve anything. They've got other things going on at the moment. Why do they bother with this? Seriously?
B
Yeah, it's a great question. And so far they don't seem to have achieved really anything in this particular Olympics. In some of the other Olympics, like the one in South Korea, they did actually do some pretty good hacking, like there was actual good compromise there, but even then it still really achieved nothing. And they've got other things on their plate at the moment. Why do they even bother? What struck me was, a couple of weeks ago, maybe three weeks ago, on Between Two Nerds, Tom and The Grugq were talking about how Russian hacking often lines up with internal reporting cycles, so that right before review time, a whole bunch of flashy, high-profile stuff gets hacked that they can then point their superiors at: hey, look, we've been causing cyber effects, even though there was no actual effect other than something happening that gets their name in the press or whatever. And maybe the Olympics fit into that category. It's just: look at us, look how cyber we are, please justify our budget for next year kind of thing. So maybe that's what it is.
A
So much cybersecurity news, when it comes to state groups, is driven by KPIs, man. Even if you think about the Snowden documents, like the infamous PRISM slide: it made it look way cooler than it actually was, you know.
B
Anyway, yeah, American management techniques have a lot to answer for.
A
Oh, they do. They really do. We've got another one here from Jonathan Greig and Martin Matishak, also at the Record. The headline here is "Researchers uncover vast cyber espionage operation targeting dozens of governments worldwide". This is some research out of Unit 42, which is, you know, Palo Alto Networks. What's the go here?
B
I mean, essentially, it's just as you described it: they've rolled up a campaign that's been very active around the world in a lot of places. And it's a big campaign: countries all over the world, telcos, firewalls, all the things that you would expect. But the thing I think is most interesting is that this isn't Salt Typhoon, and it isn't the one we're about to talk about in Singapore. There's, like, five or six of these global-scale campaigns, all of which are China-nexus, all of which seem to be roughly independent. China, I guess, is very big, and they have quite a lot of hackers doing quite a lot of things. And that's the bit of this story I thought was interesting: there are so many Chinese crews doing so many things.
A
Yeah, I mean, they've got scale, right? You talk to the people who track this stuff and they're just like, man, it is crazy how many of these people there are, and just the number of simultaneous things they can actually do. And speaking of, here is a story about a state-linked phishing campaign targeting journalists, government officials and whatnot in Germany. Daryna has a write-up there.
B
Yeah, this is a Signal phishing campaign targeting the device-linking feature, so techniques we've seen before, and the targeting in this case, I guess, is the German military and German politicians. It's kind of interesting, and good, in a way, in that Signal is apparently being widely used enough that it's worth targeting rather than, you know, WhatsApp or whatever. But from a technical point of view it's basically pretty straightforward: socially engineering people into linking their account with attacker-controlled devices.
A
Yeah, yeah. And there's been a disclosure from Norway, where they have also been hit by Salt Typhoon. I feel like it was last week we were talking about someone who had disclosed being Salt Typhooned. It was the Brits, was it?
B
I think it was, yes.
A
Yeah, so. So I mean, you know, talking about the Chinese being everywhere, doing all of the things all at once. I mean there you go, the Norwegians.
B
Yeah, yeah. And I mean, this, you know, the fact that Norway...
A
The dagger pointed at the heart of Beijing.
B
Yeah. I mean, they're running Ivanti Endpoint Manager Mobile, which, I guess, is an Internet "kick me" sign. So not particularly surprising, but the attackers moved pretty quickly once this bug came out. Although, I think we talked last week about the fact that it was basically the same bug as last time but in the next function over, or something. So kind of what you'd expect. But yes, real people being actually hacked. The Dutch also said that they had their data protection authority, ironically, hacked via it, and the judicial council. So definitely a bunch of people in Europe being hit who are still running the software.
A
The EU also had some drama, right? I seem to recall something from a headline in the bulletin.
B
Yeah, it all kind of blurs into one. It's all, you know, lots of enterprise, governmenty things being hacked by lots of China.
A
Yeah, well. And meanwhile, Singapore says China-linked hackers targeted telecom providers in a major spying campaign. The Cyber Security Agency of Singapore said that UNC3886, so an uncategorized cluster of activity, was behind this campaign. I mean, this feels a little bit Salt Typhoon-y. Who are UNC3886, Adam?
B
I assumed that this was Salt Typhoon, but the Singaporean authorities said it was this particular UNC number, which is a China-nexus group that I think Mandiant, now part of Google, attributed a few years back. It does many of the same sorts of things: focuses on firewall devices, complicated environments, telcos, and has been seen active, I think, all over the world, not just Southeast Asia. But yeah, just China going large at this stuff. And I mean, all four major telcos in Singapore. And I guess, you know, Singtel owns Optus in Australia as well, so they're the 100% owner of a telco in another country. So yeah, they're very busy, lots of telco hacking. The Singaporeans pointed the finger pretty clearly. And, you know, Singapore is in a particularly interesting kind of place, between the east and the west, and having China all up in their stuff probably is not really a surprise.
A
No, it's not. I mean, you know, Singapore is a pretty important place in Asia, right? So no surprises there at all. We've got a late-breaking one here, which is a blog post from Bob Rudis, harbormaster over at GreyNoise Labs. This is, I think, fascinating. Basically, in January GreyNoise just saw that Telnet traffic went away, which is weird because they see so much traffic, so many probes, and they saw it drop by something like two thirds, right? And then a few days later out comes a security advisory for Telnet, and then Telnet starts getting hit with this. You know, I had Andrew Morris on the show late last year talking about this, about how you can actually tell when there's a bad bug coming by watching, just randomly, what's happening on the Internet. It looks like what happened here is some of the major telcos, backbone providers, whatever, were actually just convinced to start blocking Telnet, because the powers that be knew that this bug was going to drop. I mean, that's sort of what you would infer from this, right?
B
Yeah, that seems to be their supposition. There's no real finger pointing; they haven't figured out exactly where it happened. The blog writes up their working theory, which is that at least one, maybe more, major US backbone provider decided to filter it, and that then has effects on Telnet traffic coming into the US or transiting through the US from other places. And they can kind of map where they see the changes onto the structure of the Internet. But I mean, it could be as simple as one hero somewhere in some tier one ISP who saw this particular bug coming on whatever information sharing platform and went, you know what, we should probably do our civic duty and just drop port 23 globally, and has done so. And, you know, ISPs have a long history of filtering stuff on the network, like back in the early Windows worm days, ports 135 through 139.
A
Well, I was actually going to bring that up as well, but that's when it started, because initially they were very reluctant. This all goes back to how I wound up getting kick-banned from NANOG when they figured out I was a journalist. But yeah, basically there was stuff like Slammer and Blaster, right, where they're like, oh, it's a 376-byte UDP packet that has some very specific characteristics and is easy to filter. Initially it was a huge bunfight among the network operators, because, you know, their job is to get packet from point A to point B. Their job is not to do anything to packet. Drop packet? No, no drop packet. Job is deliver packet. Even if packet bad, packet must be delivered. Right? So it is a sign of how much things have changed, where this sort of thing happens regularly, especially when they're going to drop an entire protocol.
B
Yeah, I remember. I worked at an ISP in the late 90s, early 2000s, and there was this idea that we shouldn't interfere with the communications; we are a carrier. Like the postal service: it's not the postman's job to read your postcard and decide not to deliver it because it has bad news.
A
But I mean, I gotta say, being around at the time as well, there was also the network operators taking themselves ever so slightly too seriously. You know: it is our duty! Neither rain nor hail nor sleet shall stop us! You know, it's a little bit like, no, get your hand off it, guys. Come on, just help out.
B
Yeah. But I think overall this is good news. I mean, bare Telnet across the Internet is probably not a thing that very many people need. This particular bug in GNU telnetd that we talked about, it was a straight-up remote no-auth bypass, log-in-as-root-remotely kind of thing, which is clearly not ideal. And, you know, telcos also have skin in the game by virtue of having a great many embedded systems, modems, routers, customer premises equipment and whatever else, so it's also in their interest to filter this stuff. But anyway, if you know why a whole bunch of US backbone providers started dropping Telnet, then I'm sure Bob Rudis would like to know, and so would we. So feel free to let GreyNoise know, and I'm sure they will share it with us if you were the source of the tip. So, dear listeners, do your duty.
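The blanket filtering being discussed here amounts to a one-line ACL decision at the network edge. As a toy sketch, in Python rather than real router configuration, and with made-up function and field names, the forwarding decision looks something like this:

```python
# Toy sketch of the kind of edge filter a backbone operator might apply:
# forward everything except traffic to TCP port 23 (telnet). A real
# operator would express this in router/ACL config, not Python; the
# names here are made up for illustration only.

TELNET_PORT = 23

def should_forward(proto: str, dst_port: int) -> bool:
    if proto == "tcp" and dst_port == TELNET_PORT:
        return False  # drop bare telnet at the edge
    return True       # deliver everything else, good packet or bad

print(should_forward("tcp", 23))   # False: telnet gets dropped
print(should_forward("tcp", 443))  # True: HTTPS still flows
print(should_forward("udp", 53))   # True: only TCP/23 is filtered
```

The point GreyNoise's data makes is that even one operator applying a rule this simple at sufficient scale shows up as a measurable, Internet-wide drop in observed Telnet traffic.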
A
There you go. Now, we've got a blog post from Intel. Intel and Google did a joint security review of Intel TDX 1.5, which is this technology that lets people run, sort of, properly separated virtual machines that even the hypervisor can't see into. This stuff has been around forever; various attempts at this sort of technology have been around forever, and it always seems to have problems. I've never seen much about this stuff really being a hard requirement for procurement from a bunch of people, so I don't know if this has crossed over from being something that's just academically interesting into being something that's business-necessary. I haven't heard about that yet. And you pointed out to me that it's interesting when Google and Intel get together to do something.
B
Yeah. So Intel's security team and Google's security team cooperated on an earlier review of the TDX extensions, which, from Google's point of view, they are interested in because they use it to implement trusted, confidential computing for cloud stuff. And to your point about enterprise: very few enterprises care about this stuff, but the cloud service providers want to be able to say, when you buy computing off us, there is some mechanism that stops us snooping on your things, so as to build trust and encourage uptake of cloud computing and so on. In AI world, obviously, that's important; general cloud computing, also kind of important. So I think all the major cloud providers want to be able to point to someone else and say: Intel stops us from doing this, therefore you can trust us, because it's not just us, you also have to trust Intel or ARM or AMD or whoever else. And so cooperating on this research makes a whole bunch of sense. And, you know, Google's research teams are very well equipped, very well resourced, and they'd worked with Intel before, and they've turned up some pretty good bugs. There were a couple of new features Intel added, one for doing live migration of trusted VMs, which is a hard problem to get right, and one of the core bugs they found was that you could turn on debugging during migration and from that gain access to the confidential environment. Good research. The technical write-up is quite interesting. I just think it's cool that they are working together, and I want to give them big ups for making that happen, despite it probably being quite complicated, organizationally.
A
Now, from these very interesting bugs turned up by Intel and Google, to something a little more run-of-the-mill: there's a bunch of SolarWinds Web Help Desk bugs being exploited in the wild, Adam.
B
Yes, I mean, SolarWinds has had many, many bugs, quite famously, in its stuff, and this particular bug is not particularly exciting, I suppose. But the thing I did like about this is that the group using this bug in the wild is actually kind of hip and cool and very cloud native. Huntress had a write-up of the adversary and the toolchain they use. So they use this SolarWinds bug to get in, and then after that they drop a Zoho Assist agent for remote access. And then they use Velociraptor, the incident response tool, for their command and control channel, which, if you're an IR person, is pretty on the nose. And then they also have dynamic failover, where they can switch to another command and control channel. They exfiltrate data into Elastic Cloud: they land on the system and just bung it up into Elastic instead of exfilling it to their own systems. So all very, very cloud native, which, you know, I think is pretty fun.
A
I think the funniest weird, modern, hip thing I've encountered recently is Knocknoc. They have a customer who wants to use Knocknoc on their mainframes, so they needed a mainframe client written, and the guys had a first stab at it in Go. They tried to write a Go client for a mainframe, and the mainframe didn't like it, it didn't work well, basically, so they had to redo it in C. But, you know, that was pretty funny. Go on mainframe, you know, hey, why not?
B
It's a funny old world, innit.
A
And look, you mentioned this one earlier, but yeah, that Ivanti bug that we first spoke about as being terrible and something that should not be there, because it was related to previous bugs. That one is out there being exploited in the wild. That's the one that's hitting the EU and various bits of the Dutch government. Yeah.
B
I mean, at this point you would think that running Ivanti gear was a bad idea, but, you know, there are still some people who are stuck with it for whatever reason. Procurement is hard, I guess.
A
Well, and compliance is hard as well.
B
Right.
A
So there's parts of that as well. Meanwhile, North Korean hackers targeted a cryptocurrency executive with a deepfake Zoom meeting and a ClickFix payload. It doesn't look like they actually got away with any money or anything at the moment. But the point is, I guess, these deepfake Zoom calls and whatever are going to be standard operating procedure real soon, and it's going to be very, very difficult for people to tell the difference between the true person and a deepfake. I don't think people quite understand how bad this is going to get, and how rapidly it's going to get bad.
B
Yeah, I mean, identity verification is hard already, but there are also a great many situations in business where you are interacting with new people and you don't have some anchor on which to decide whether you trust them: it's a new customer, it's a new inquiry, it's a new customer sign-up. There's plenty of reasons why you don't have a grounding to decide whether or not a deepfake is a real person. And that kind of doesn't matter in this case, when you're chaining it with ClickFix. The lure here was: you get into the video call, the audio doesn't work, so then they ClickFix you to fix it or whatever else, and then drop malware on you. But yeah, with the prevalence of video deepfakes versus conference calls versus identity checking or whatever else, it's going to be a wild ride for a few years whilst we figure out how to do distributed network identity.
A
Well, because it's the last thing we've been relying on, right? Like, if someone SIM swaps me and rings one of my friends, and they're not me... I don't know. It's a bit like, if someone's texting from my phone, hey, can you send me some money or whatever, my friends are going to ring me and say, hey, Pat, did you need me to send you this? That's not going to work anymore. And then, oh, okay, you're a bit worried about the audio? We might do video. That's not going to work anymore either. So I think this stuff is going to get bad, and we don't really have too many solutions. Like, in last year's Snake Oilers we had Persona, right, who are a KYC identity verification company that does remote identity verification for banks and whatever, and they're finding that they're selling licenses to enterprises just to cope with that issue of verifying the identities of staff. So that's a whole different business line for them. I've been chatting with some people who are founding a company trying to tackle this problem. It's hard. It's a really hard problem to tackle, because how do you do identity attestation remotely? You bind it to the device, and then you're just identifying whoever has control of that device. It's hard, anyway. And I think it's going to be bad. It's going to be bad. Let's see: BeyondTrust, a remote code execution flaw in their remote support software. Hooray.
B
Yeah, exactly what you want in a security product on the Internet for remote access. You know, there's a lot of BeyondTrust out there, and I really hope people are patching. Do you remember, last week, or was it the week before, we talked about a particular command injection bug in Bash that watchTowr had written up? It was actually really cunning, and I was like, oh my God, I wouldn't have thought of that if I was auditing that particular piece of software. Anyway, this is basically the same Bash trick, except in BeyondTrust. Somebody looked at that watchTowr post that we talked about and went, huh, I bet other people have exactly that thing, and then had their AI go look and found this particular bug in BeyondTrust. So it's kind of funny. It's a funny world. And yeah, if you have BeyondTrust, get patching. If you are a Unix hacker, this particular Bash trick is absolutely worth reading about.
A
Okay, so at this point I want to bring James Wilson into this conversation. James, of course, is the newest staff member and podcaster here at Risky Biz HQ. We will be launching a new podcast channel filled with Jamesy goodness soon enough, once we've actually set up the feed. But James, you've been taking a look at some stuff for us this week around AI again, and in particular the news that Anthropic's Claude was able to basically write a compiler capable of compiling the Linux kernel. Now, this was really, really big news, and opinions seem to be split, which is why we were so keen to have you look at this, because the hot takes were either, oh my God, this is incredible, this is the most amazing thing that ever happened in computing, ever; or, lol, this compiler sucks. Right? So I'm guessing the truth of the matter is going to be somewhere in between those two. But you tell me: is it a compiler?
D
Insofar as it can take C source code and compile it down, or translate it down, to the next layer of instructions as a compiler would do. But it's far from a general purpose compiler that you can throw any properly formatted code at and expect to work. And that was sort of the first.
A
Well, I did see people trying to compile hello world and getting errors, but I didn't know if they were trolling or not, right?
D
Yeah, I mean, look, it's a little bit of an unfortunate foot gun moment to release this and have issue number one show up on GitHub as: yeah, great, but it doesn't compile hello world. And this is where I think Anthropic really hasn't done themselves any favors. Stating that you've built something that can compile the Linux kernel is a bold claim, but it misses the point of the article. The article's not about the compiler being the long term goal here. We don't need another compiler, and we certainly don't need one that's written by AI. But this is a demonstration of what happens when you get multiple agents working together in parallel, and it exposes the fact that there are still quite a few bottlenecks and problems around the way you orchestrate those agents and make them work together.
A
Okay, so what are those bottlenecks? Right, because if anything, like I've been surprised, right, with how quickly agents, particularly Claude, have just been able to blast past existing bottlenecks. Like, are these more of those same bottlenecks that we're expecting these models to just blow through or is this a little bit different?
D
Yeah, I don't think it's a fundamental problem that requires a deep architectural change. It's more just an interesting write up of the fact that when you get multiple agents working together in parallel, the emergent properties are not too dissimilar from what happens when you get a bunch of undomesticated developers working together. You know, people commit code that stomps all over each other, you get merge conflicts, you get problems, you get people working on the same stuff. It happens, right? And the fact that agents do this as well, yeah, that's somewhat expected.
A
Now Adam, I wanted to get your take on this other thing we've got here in the run sheet this week, which is a write up from a company called Aisle, all about, as they've titled it, what AI security research looks like when it works. And it's a very interesting, nuanced write up about AI based security research. But I think this is similar to the compiler stuff in that every time someone drops AI bugs, there is one of two reactions. Which is: oh my God, this research is incredible, game changing, it's going to nuke every single research job forever. Or: that ain't real hacking, that's a lucky find, and the way that it found that bug was really, really dumb. So what do you think of this? And walk us through it.
B
Yeah, I think it calls out the important distinction between, as you say, these two different responses, but there are also so many different ways you can use these tools. And this particular blog post comes from a company that has been building tooling to do pretty real security work. You know, finding bugs in OpenSSL, for example, patching those bugs, getting those patches accepted upstream, interacting with the maintainers of things like curl and OpenSSL. Those developers are, I guess, pretty opinionated and absolutely willing to tell you to go pound sand if they don't like your contributions, be you human or AI. And especially skeptical in the case of curl, like, quite famously, Daniel Stenberg has been complaining about the quality of the work that they get on bug bounty programs, for example. So both of these things are true, right? There are people doing real, interesting, kind of frontier research, and there are people just pasting stuff into ChatGPT, copying it into GitHub and asking for a bounty. Both of those are true, and it's interesting seeing people writing up both of those bits. And then the compiler story for Anthropic also kind of dovetails with, you know, they've been releasing work about their security research and the ability of their models to go find real bugs and patch real code. And everything is moving very, very quickly. And even opinions from last week, you know, kind of need to be reevaluated. And that's, I think, the real takeaway. Immediately dismissing it is wrong. Immediately saying this is amazing and going to solve all our problems is also wrong. But we do need to be constantly reevaluating the state of the art.
A
I think you're right. I think the speed thing that you just hit on, though, is very, very real. Because I'm finding that, you know, the sand is shifting under my feet so quickly when I'm looking at this stuff. I mean, James, you've been zeroed in on the AI stuff a lot longer. Like, is it playing out how you expect? Where do you think it's going with security research? I'm curious, because you are so zeroed in on AI stuff. I'm just curious to know what you think there.
D
Yeah, I think the pace is definitely quickening. I think there are two step functions that have happened. The introduction of tool use was the first big step function, and then more and more of these agent workflows, where there's just an endless iteration loop where stuff gets done, is what's making this move really fast. The security research bit, look, to me, the fact that people looked at this thing as, hey, it's a compiler that compiles the Linux kernel but can't compile hello world. Aside from the lulz, the deeper story there is the same parallel as what we're seeing, which is these models know how to create something that works, but they won't create something that works and isn't susceptible to attack unless you actually go a whole lot of extra yards to bake that in and make sure that's actually the case. And so the work that Aisle is doing is great because it actually goes beyond just find, exploit, get bounty. The same needs to be done with the way these models generate code, to make sure that we move beyond just: it works, ship it. It's got to be: it works, and it works safely. And I am not yet seeing a sufficient amount of work and effort being put into that side of the equation.
A
Well, I mean, it doesn't look as good on the PowerPoint, right? Like in the meeting with the investors where you've got to convince them to put another $10 billion into your money destroying business. But yeah, all right, that'll wrap up the AI conversation for the week. We do have a couple of funny stories just to round out this week's news segment, Adam. A South Korean crypto exchange called, I think they're called Bithumb. Yeah, Bithumb. Butter thumbs, more like. They accidentally transferred $40 billion worth of Bitcoin to their customers. Bitcoin that they didn't have, mind you.
B
They were doing like a promotion, a loot box promotion thing where you got a freebie. And it was meant to be that they'd send some of their customers the chance to win, what was it, 2,000 won, Korean won. And they missed the currency units and actually set that to 2,000 Bitcoin, which is quite a lot of money. And so, yeah, $44 billion later. They managed to reverse the balances of people on their platform, but a number of people managed to actually get their Bitcoin out of the system fast enough to go cash it out. And I think the company said, oh, we've recovered 99 point blah blah blah percent. But when you go look at the numbers, they still lost something like 1,800 Bitcoin, which is like $120 million, because they screwed up the currency units. So that's just deeply funny.
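[Editor's note: the unit mix-up Adam describes, the same number denominated in won versus Bitcoin, is a classic class of bug. A minimal illustrative sketch of one common defence, tagging amounts with an explicit currency unit and checking it before paying out. The `Amount` and `payout` names are invented for illustration and are not from any real exchange's code.]

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Amount:
    value: float
    unit: str  # e.g. "KRW" or "BTC"

def payout(promo: Amount, expected_unit: str) -> Amount:
    # Refuse to pay out if the promotion is denominated in the wrong unit.
    if promo.unit != expected_unit:
        raise ValueError(f"promotion is in {promo.unit}, expected {expected_unit}")
    return promo

# Intended giveaway: 2,000 Korean won (roughly a dollar and a half).
payout(Amount(2_000, "KRW"), expected_unit="KRW")  # passes the check

# The reported mistake: the same number, wrong unit.
try:
    payout(Amount(2_000, "BTC"), expected_unit="KRW")
except ValueError as err:
    print(err)  # the unit check catches the mix-up before any coins move
```

A bare `2_000` floating through the system carries no unit, so nothing can catch the confusion; attaching the unit makes the mistake fail loudly instead of silently.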
A
I mean, are these people going to have to. Are they going to have to give back the money?
B
Yeah, they're asking people nicely to give it back. The ones who are on the platform, where the exchange could just take it back, they've taken it. The people who've moved it out of the platform, they are in the process of asking nicely. But Korean law is kind of funny on the subject as well, because crypto is not.
A
I think you'd find that it'd be the same in a lot of places. Right. Which is. I don't know, though, if they. If a bank accidentally puts money in your bank account, you can't just take it. Right. Like, we know that in Australia because it happens every now and then, and sometimes people, they cash it out, man, they're on the next plane to the Philippines, you know what I mean? But, yeah, I don't know. I don't know. I don't think it's the same everywhere.
B
Yeah. So it'll be interesting to see how much they manage to get back. And mostly I'm just imagining the employee who did it, the person whose day on the job it was to screw up that particular $40 billion. Like, we've all made mistakes at work, but $40 billion worth of Bitcoin? Whoopsie. Like, that's a whoopsie.
A
So that's our comedy story of the week. And I should just mention we've got a tragedy story to pair with it as well, which is this story from Martin Matishak, which says: White House to meet with GOP lawmakers on FISA Section 702 renewal. This is the story that just never will die. You know, it feels like every few months we're talking about 702 renewal these days. So it's back. It's back. And then of course, it's going to get close to the date and everyone's going to start freaking out, and then there'll be some last minute emergency authorization that'll kick the can down the road for another three months or something. Right? Like, that's going to be how this.
B
Thing's going to go. They've been doing that for years now. It feels like it's been years, right?
A
I don't know. I don't know. It's all sort of blurred into one in my head. But yeah, all right. That's it. We're going to wrap it up there. Adam Boileau, thank you so much for joining me. And James Wilson, thank you for coming along to share your expertise on the AI stuff with us this week. Thanks to you both.
B
Thanks, Pat. I'll see you next week. Thanks, Pat.
A
That was Adam Boileau and James Wilson there with a check of the week's security news. Before we kick on to this week's sponsor interview, here is Tom Uren telling us all what he's been up to this week, both in the Between Two Nerds podcast and what he's going to write up tomorrow in Seriously Risky Business.
E
On this week's Between Two Nerds, the Grugq and I spoke about the dynamic where security has been bad, is bad, probably always will be bad, but maybe that's okay. In Seriously Risky Business this week I'm writing about changes at the top of Microsoft. They make me worry that the company is reverting to form, and that it'll prioritize selling security products over making products secure. I'm also writing about Chinese cyber ranges. Apparently there are reports that they're replicating regional critical infrastructure networks, and the only reason you'd want to do that is to figure out how to attack and disrupt them. So, bad news. Finally, I'm also talking about news that the US disrupted Iranian air defence networks using a cyber operation. That's like the wet dream of cyber operations when it comes to the military. I think it's fascinating that that news has come out, but that operation, supporting the bombing of nuclear facilities in Iran, is the type that really suits cyber operations. So I'm not convinced that this is the sort of standard thing that will happen in a long drawn out war.
A
If you would like to listen to those podcasts, please do subscribe to the Risky Bulletin RSS feed. You can head to Risky Biz to find that, and you can also subscribe to our newsletters there. It is time for this week's sponsor interview now with Brandon Dixon, who is a co-founder of ENT AI. Now, Brandon was the founder of PassiveTotal, which wound up with RiskIQ, and then RiskIQ went over to Microsoft, and I think Brandon actually wound up being responsible for Security Copilot when it launched. But he's out of Microsoft now and he's building ENT AI. So ENT is not really talking all that much about what they do, but they are talking about how copilot related stuff in AI is kind of just the first stage in what we can do in terms of using AI to improve security. So this interview is really Brandon talking about what the bigger possibilities are, and I personally found it very, very interesting. I'm going to be working with these guys, I'm going to be doing some advisory with ENT. They're a Decibel company, so I just need to disclose that. But this was a very interesting interview about where AI could go in security, beyond just using AI to interpret the sources of information that we already have access to. Here's Brandon Dixon. Enjoy.
C
You know, cars have been questing towards automation for the longest period of time, like since the 1970s, trying to make them self driving. And it was only with the introduction of world models and updated technology that that has become more possible. It's not perfect, but it's become more possible. The car is not trying to drive better. It is trying to anticipate what another car might be doing, or what a person might be doing, within that visual space. So it's trying to be predictive, trying to anticipate what that next thing looks like. When you think about security, security is about trying to do this detection, do everything right, assume that everything's going to be right. And what it lacks is this, what we call an organization work model. This understanding of how the business actually operates, in order to create an understanding and predict what somebody might be doing. Is that normal? Is it not? Is it part of what the business typically does, or does it seemingly stick out? Those are the things that you want to be able to capture.
A
So how do you go about actually capturing that is the question, right?
C
Of course. So the way that you do this is that you have to be at the endpoint, where you have the most feature rich context, and you want to create a layer between the system and the user. And it's that telemetry, the same telemetry that feeds into the automated self driving cars that exist today, that helps build up that world model, the understanding of what that person is doing. And what AI has given us today, recent advancements, is the ability to scale understanding that context without humans. Because we have a lot of semantically rich information at the endpoint, we're capable of now understanding what the user is actually doing. We don't have to guess, we don't have to represent it in sparse signal. We get exactly what happened. And so for us, we see the endpoint as the holy grail of context, but also the greatest opportunity to intervene and stop somebody from doing something before that bad thing can occur, before the mistake can occur. And it's a combination of the advancements in AI, but also your deterministic rules as well.
A
So where does AI actually bolt into this? Right? Because if you're talking about being able to do a statistical analysis of telemetry sources to understand context, this is a game we've been playing for a while, mostly around operating system behaviors and things like that. That's how the endpoint protection stuff on something like CrowdStrike works, right? There's some wacky event that just hasn't happened before, it pops up in some console where they're doing their MDR, and they say, hang on, this strange thing happened, and then away they go. So I guess what you're talking about, though, is a more flexible, human centric context. How do you construct, you know, where does contemporary AI come into actually constructing that world model?
C
So there are statistical approaches that are tried and true, and they're going to be cheap and somewhat effective, but they're going to be noisy and riddled with false positives. I think that's why those systems didn't work well.
A
This is why people couldn't do it when machine learning was all the hype, right? This is why we haven't seen someone come out and solve the DLP problem with machine learning. It hasn't happened.
C
Yeah, I mean, at its core, it's embeddings. Embeddings are the big advancement that recent updates in models have given us. Language models excel at language because they embed that natural language. They backed into understanding it, and they make sense semantically of those concepts. They attend to words to understand the meaning, to then try and predict what the next token, the next word, will be. So where you have statistical approaches, which are fine, they give you some signal. What happens all of a sudden if you have a way to represent behavior in natural language, and they can be.
A
Formed into embeddings. Okay, so you've got some room for AI to help you not just understand the context and collect information about it, but also to continuously evaluate what's happening against that context. Is that kind of the idea? Is this all stuff that's been enabled by recent reasoning models?
C
I think it's advancements in how the current language models have been able to scale to understand language itself. They've backed into those concepts. The way they've done that is that they've taken all the data from the Internet and formed embeddings to transform that semantically rich content and context into vectors that computers can then process to find similarities. So if you're capable of building a layer that understands what's happening, and you're capable of modeling that in language, then that allows you to go and identify behavior that's similar to normal versus not normal, baseline versus not baseline, anomalous versus not. And it also allows you to express it in natural language, where you don't have to be an expert to understand what actually occurred. I'm not.
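[Editor's note: the similarity comparison Dixon describes, embedding behavior descriptions and comparing them to a baseline, can be sketched minimally with cosine similarity. The vectors and behavior labels below are invented for illustration; real embedding models produce vectors with hundreds or thousands of dimensions.]

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: near 1.0 means very similar direction, near 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional embeddings of natural-language behavior descriptions.
baseline   = [0.9, 0.1, 0.0, 0.2]   # "developer runs build scripts in a terminal"
observed_a = [0.8, 0.2, 0.1, 0.3]   # "developer runs a PowerShell build task"
observed_b = [0.0, 0.1, 0.9, 0.1]   # "process mass-deletes user documents"

print(cosine_similarity(baseline, observed_a))  # close to 1: looks like the baseline
print(cosine_similarity(baseline, observed_b))  # close to 0: anomalous
```

The point of the approach is that "normal versus not normal" becomes a distance computation over embedded descriptions rather than a hand-written signature.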
A
Well, I mean, I was about to say, what you're talking about is the skeleton of a system that can actually explain to you why it has flagged something, why it is complaining. Right? And that is a very new sort of thing. Instead of seeing some weird signature based alert where you have to hit Google to find out why it tripped, this will actually sit down and pretty much make you a cup of tea and explain it to you.
C
Yeah. Well, think about it this way. When you go and join an organization, one of the first things you experience is the policy of the organization. This is your data handling policy, these are the acceptable applications you can use, these are the norms for the business. They're not expressed in signatures, they're expressed in natural language. We read these things as humans and try our best to understand them. That is the opportunity that's available to us: can you consume those policies and then represent them, effectively, as rules in natural language? The analogous term in these language model systems today is prompting. We're effectively prompting these systems to go do a task on our behalf. Generative systems are wonderful at taking complex topics, like the outputs of a machine learning clustering algorithm, and explaining in human language what the hell they actually mean. Right? I'm not seeing cluster one versus cluster two with some feature set. I'm seeing cluster one is doing productivity documents and cluster two is deleting a bunch of information off my system. And when you're capable of representing information that way, it creates a new paradigm for how detection can be performed. But it also opens up, again, the opportunity, since you're at the endpoint, to intervene in near real time, to stop the bad thing or the mistake from occurring before it actually happens. You save by not having to create all this downstream work that security practitioners have to be experts on today. Today, as a developer, if I go and run PowerShell on my system, it's going to get flagged as suspicious command line usage. Somebody in the SOC is going to either ignore that alert, or they're going to, unfortunately, have to run it down. They're going to go talk to the developer and say, what were you doing? Why were you running PowerShell?
And the developer's going to be like, what are you talking about? It's part of my job.
A
I'm a developer. I was doing developer things.
C
And they're going to put it somewhere in a ServiceNow ticket, and it's going to fire the next day and the next day, and it's just going to go unaddressed. That context is missed. It doesn't get preserved in ServiceNow. It needs to be preserved in a system, that world model of the organization. Something that's capable of expressing: these are workflows unique to your business, these are the unique applications you go to, the unique URLs you visit, these are considered normal behaviors, and when people deviate from that, these are considered risky.
A
I mean, it's interesting, right? Because I can think of a dozen companies that are doing AI SOC at the moment, right? The idea is: we're drowning in information already, we're drowning in telemetry, so let's plug an AI agent into the SOC and bing, bing, bing, fantastic, we've just saved ourselves a whole bunch of time, increased the fidelity, decreased false positives, whatever. I guess what's interesting here, though, is that you're saying the SOC is kind of yesterday's news, and we've got an opportunity to rebuild that model around data that has much richer context, data that's much closer to the user, data that's much more complete. That seems to be the thinking here, right?
C
Yeah, I mean, I do believe that the AI agents taking hold in the SOC have a fundamental flaw. There's good and bad. The good is that we need help scaling in the moment, and I think services and automation in the SOC as it exists today are a net positive, even if it's another layer on top of existing solutions. We're not going to have traditional threats just magically go away. We're still going to be plagued with EDR problems, and we're going to have to have some coverage there.
A
But we need something else now. We have an opportunity to have something else now, which is an additional control, right? Well, additional insight, visibility, context, whatever.
C
Yeah, there's a first principles approach you could take, which is: if you can model that behavior, that behavior information becomes applicable to the SOC. It becomes applicable to security awareness training as well, right? It becomes applicable to modeling insider risk, seeing how data flows across.
A
Yeah, but it's high quality. It's high quality data pumped into the SOC, which we need. We can always use more of that, right? And it's additional context. I mean, if you have good information from endpoints, and you're correlating that using agents in the SOC, you're just going to get better detections.
C
You will get better detections, and you'll get real explanations as to what happened. Because now, with that developer who ran the PowerShell, there won't be a SOC analyst looking at it, because it'll be automatically resolved. This was normal for the person, we've seen it as part of their baseline over the last three months, the command was innocuous, squelch it. Right? And when you see something that looks similar in the future, just silence it. And maybe here's a suggestion on how to go and tune that traditional AI SOC system, your traditional detection mechanism. But eventually you want to squelch that stuff at the source: being directly on the endpoint, that alert should never fire in the first place. So I think that's the opportunity in this endpoint movement. There's a V3, effectively, a gen three of endpoint. You had AV, you had EDR, and now you're going to have this autonomous endpoint, where we eventually get to a point where we do trust the systems enough. The non deterministic working with the deterministic, i.e. this neurosymbolic system that is capable of achieving this security defense. It can actually go and manipulate your system, but it's going to do so with the appropriate guardrails and safety, so that you feel confident. Unlike a Moltbot or Clawdbot, where you basically give it full rein over your machine and hope that it doesn't go and delete your file system or send all your files out through your personal Gmail or something. Right? That feels like the direction we need to go in. And I think the way you achieve that is you have to start from a first principles approach. If you try to retrofit it, like the copilots, simply bolting AI on, and it's not a knock on the copilots, I think they service something.
But when you bolt this stuff on, you don't get the benefit of thinking about it from a brand new vantage point. You have this technical debt that you're trying to walk around or retrofit. Why not just start fresh, go big, try to do something bold that's worth doing in security.
A
All right, Brandon Dixon, fascinating stuff. You're going to be back in April, or someone from ENT AI is going to be back in April, to give a more detailed pitch on exactly what it is that you're building. That was a fascinating conversation. Thank you.
C
Thank you.
A
That was Brandon Dixon there from ENT AI, and those guys will be back in a couple of months to talk in more detail about what it is they've built. That is it for this week's show. I do hope you've enjoyed it. I'll be back soon with more security news and analysis, but until then, I've been Patrick Gray. Thanks for listening.
Date: February 11, 2026
Host: Patrick Gray
Panel: Adam Boileau, James Wilson
This episode unpacks another week dense with cybersecurity news, focusing largely on major changes at Microsoft’s security leadership and the broader implications for the company's "Secure Future Initiative." Adam Boileau brings his trademark skepticism to bear on Microsoft’s new moves, while James Wilson joins to dissect the buzz around Anthropic’s AI-generated C compiler and broader trends in AI security research. The show rounds up a flurry of global cyberattacks, ongoing vulnerabilities, state threat actors, deepfake scams, and operational shifts in defensive infrastructure.
“Being skeptical is well-warranted... We have been through Microsoft’s boom and bust cycle of taking security seriously or not, you know, a few times now.”
— Adam Boileau (02:39)
“It’s not as bad as it sounds... not a disaster that I would love because we’re all about things burning horribly down. But it’s still—crypto is hard.”
— Adam Boileau (06:31)
“...it’s just, look at us, look how cyber we are, you know, please justify our budget for next year kind of thing.”
— Adam Boileau (11:12)
“Job is deliver packet. Right. Even if packet bad, packet must be delivered. So it is a sign of how much things have changed.”
— Patrick Gray (19:05)
“If you have BeyondTrust, get patching. If you are a Unix hacker, this particular bash trick is absolutely worth reading about.”
— Adam Boileau (29:43)
“The deeper story there is... these models know how to create something that works, but they won’t create something that works that won’t be susceptible to attack unless you actually go a whole lot of extra yards to bake that in.”
— James Wilson (35:27)
“Everything is moving very, very quickly. And even opinions from last week... need to be reevaluated.”
— Adam Boileau (34:40)
“It’s going to be a wild ride for a few years whilst we figure out how to do distributed network identity.”
— Adam Boileau (26:32)
“We’ve all made mistakes at work, but $40 billion worth of Bitcoin? Whoopsie.”
— Patrick Gray (38:33)
“This is the story that just never will die... it's back. It's back.”
— Patrick Gray (38:53)
Sponsor interview [42:43–55:34]
“The endpoint as the holy grail of context... the greatest opportunity to intervene and stop somebody from doing something before that bad thing can occur.”
— Brandon Dixon (43:56)
"It creates a new paradigm for how detection can be performed..."
— Brandon Dixon (48:39)
This episode of Risky Business threads the needle between resigned skepticism about big vendor promises (especially from Microsoft) and cautious optimism around the rapid evolution of AI in security, showing both the strengths and the limitations of automated solutions. With global threat activity ramping up, deepfakes becoming operationalized, and defenders forced to rely on defense-in-depth and smart filtering, the security landscape remains as precarious—and as “risky”—as ever.
For cybersecurity professionals or anyone following digital risk, this episode is a sharp digest of the week’s biggest infosec themes—delivered with candor, wit, and technical acumen.