Leo Laporte
It's time for Security Now. Steve Gibson is here. The FCC has backed down a little bit, and that's good news for router manufacturers. AI has found a 21-year-old critical flaw in, well, the most secure operating system I know about. We'll talk about the Let's Encrypt outage, and then how DigiCert responded to its recent breach. Steve gives an A-plus to DigiCert. That's coming up next on Security Now.
Steve Gibson
Podcasts you love from people you trust.
Leo Laporte
This is TWiT. This is Security Now with Steve Gibson, episode 1078, recorded Tuesday, May 12, 2026: DigiCert Does It Right. It's time for Security Now, the show where we cover your security, your privacy, and how computers work, with maybe a little sci-fi and vitamin D thrown in, with this guy right here, because basically this is the show where Steve talks about the stuff he cares most about. Hello, Steve.
Steve Gibson
I'm not sure if it was self-selecting, but our listeners tend to agree. Yes. I mean, I'm getting vitamin D email and sleep supplement questions, so I know that our listeners are.
Leo Laporte
And Steve, we should add coffee to this because Steve is as we well know, a five shot venti latte drinker. There it is in the giant mug. I had some wonderful coffee in Kona. They grow Kona coffee on the side of the volcano.
Steve Gibson
That's how they named it actually. Yeah.
Leo Laporte
Yes, the name came from the city, but it is amazing coffee. So much so that I bought a large amount of it to bring back because it's just, it's so.
Steve Gibson
No, I mean good coffee is just in a class by itself.
Leo Laporte
But I don't think you would like Kona coffee, because I remember we talked about this. In fact, this came to mind as I'm drinking it. Because of where it's grown, in the volcanic soil, it has less caffeine and very little bitterness, very little bite. And I remember you like the bite.
Steve Gibson
Yes, in fact decaf lacks the bite. And so I was like eh, seems a little. Why bother?
Leo Laporte
Kona coffee is almost like tea. It's very smooth and delicious and a little bit less caffeinated. So yeah, I thought maybe he wouldn't, maybe he wouldn't like it that much come to think of it. So I didn't send you any. Okay, that's very.
Steve Gibson
I'd rather have salt.
Leo Laporte
Rather have expensive salt. I do have some salt from Salt Hank, that's coming your way as soon as I figure out how I can package those glass jars in a way that they don't break. We had some special salt made for Steve and his wife. They have a thing for my son.
Steve Gibson
Yeah, that's right. Okay, so we are at episode 1078 for May 12th, and I teased this last week. The news was just breaking and I didn't have the whole story. And actually, they didn't have the whole story. I'm talking about an interesting problem that DigiCert, the industry's now by far number one certificate authority, suffered. I titled today's podcast DigiCert Does It Right because I, and a lot of other industry experts, have singled their reporting out as: this is the way you do this. If you suffer a breach, how do you disclose? So I want to really give them some props for the job that they've done, and take a look at what they did, both to give them credit, but also to show in some detail that this is the way it's done right.
Leo Laporte
So many people do it wrong. We should. Oh, we should definitely mention when they do it right.
Steve Gibson
Yeah, exactly. And especially, we've covered other CAs that have lost their trusted status because of, you know, trying to, like: oh no, that really was... oh, well... oh, you found that? Okay, well then we'll have to talk. That's just wrong. So props to them. We're going to talk about the FCC, however, deciding that firmware updates might actually be a good thing, so let's rethink that policy. Netgear, speaking of the FCC, is the first and, as far as I know, so far only router manufacturer to get a full pass on this ridiculous.
Leo Laporte
Eero just got one yesterday.
Steve Gibson
Oh, good.
Leo Laporte
So there's two now.
Steve Gibson
Good. Also, AI has uncovered a 21-year-old critical remote code execution vulnerability, and this one is trivially implemented, in one of the most secure Unixes ever, my favorite and the one I use, FreeBSD. And I should also mention, I didn't get it in the notes for this week, but I'll be talking about it next week: Google has announced that they have uncovered the first AI-generated zero day. So we now have a confirmed example of AI doing what we have been worried about, even pre-Mythos, because presumably the bad guys didn't have access to it, although we do know that there were some leaks of Mythos access. But this has been the concern, so it's beginning to happen. There was also a brief Let's Encrypt outage; we have another example of a company doing the right thing. It turns out there are now some reports, not surprisingly, of AI model repositories overflowing with... I'm not sure what you call it. Malware? Malprompting? I don't know. It's bad, so "mal." But anyway, that's a thing now, in addition to npm and PyPI and all the other open repositories that we've talked about. It looks like CISA, although it's a decade old, it's that agreement that was signed in 2015 which allows private companies to share cybersecurity information with the government without fear of reprisal, was temporarily extended, and it looks like it's going to be made permanent. We have some very distressing news about the Edge browser, and what was found by someone and has now been confirmed not only by people online, but by some of my own newsgroup participants, who have done this and found all their usernames and passwords in the clear. So we'll talk about that, and then we're going to get into a deep look at how you do this right, if you are a company with serious responsibility for user security: taking responsibility and documenting it. And DigiCert did that.
And I'm saying that even though I've wandered away from them, as we know, because of their pricing. But still, they're it. So I think a great podcast for our listeners, and of course a fun Picture of the Week.
Leo Laporte
Which I have not looked at, but will at some point with you, after this word.
Steve Gibson
Probably. Then, yes, from.
Leo Laporte
From our sponsor. We will get to that picture of the Week in just a moment.
Steve Gibson
First, and to those who are seeing video, yes, Leo has an apparently white shirt on.
Leo Laporte
Oh, no. But I have to.
Steve Gibson
First time in ever. Oh, there was a weird blue stripe hidden behind the microphone.
Leo Laporte
It's a Tommy Bahama Hawaiian shirt.
Steve Gibson
Some foliage down below, I guess.
Leo Laporte
It does look like a white shirt, doesn't it? I got. I might have to retire this from the collection.
Steve Gibson
We're just so used to the collection of orange slices.
Leo Laporte
I am gonna go change my shirt after this break. I'll wear something crazy and kooky, I promise. I apologize. Now, our show this hour is brought to you by CyberHoot. If you have ever rolled out security awareness training and thought, this feels more like a compliance exercise than actually teaching security, well, you're not alone. It kind of is, in many cases. You know, platforms try to catch users making mistakes. They send fake phishing emails to inboxes, then they kind of wait for someone to click, and then they pounce on them and assign training after the fact. And let's face it, that can feel punitive. And people don't learn when they're being punished. Honestly, it doesn't change behaviors to do it that way. That's why CyberHoot takes a different approach. Instead of trying to trick your users into clicking, CyberHoot's HootPhish focuses on teaching them first, not in their inbox after a mistaken click, but in their browser, through a trusted, realistic phishing simulation. The goal is simple: to build instinct before that click ever happens. You're not trying to trap your users, you're trying to teach them. And by the way, it really works. We've been using CyberHoot, and I've been watching Lisa go through the training. It really is great. CyberHoot is completely automated. Training campaigns, reminders, escalation to managers, and reporting, that's all handled for you. So instead of chasing users, you get clear visibility into who's completed what and where your risks are. And here's something interesting. CyberHoot also adds a light, opt-in social layer. This is kind of cool. Users can connect with coworkers and engage in friendly competition around the training process and how they're doing.
Steve Gibson
Right?
Leo Laporte
So this isn't forced gamification. It's just enough to increase participation without turning it into a gotcha system. And users love it. It's fun. A little competition adds a little spice, right? G2 reviewers also love it. They rate CyberHoot, and I've never seen a rating this good, 4.9 out of 5 stars. The G2 crowd repeatedly praises ease of use, high participation, brief content, quick non-punitive training, full automation, and strong support. If your organization is ready to stop punishing people for being human and start actually building cyber-smart employees, head to cyberhoot.com/SecurityNow, and by the way, please use the code SECURITYNOW at checkout. You'll get 20% off your first year. So again, that's C-Y-B-E-R-H-O-O-T dot com slash SecurityNow, and that promo code SECURITYNOW gives you 20% off your first year. Just remember to always laugh, learn, and hoot up. By the way, some of the awards are the cutest little owls. I just love it. Cyberhoot.com/SecurityNow. Don't forget the offer code SECURITYNOW. Now back to our man of the hour, Mr.
Steve Gibson
So I gave our Picture of the Week a title. Okay. I titled it When a Powerful Meme Creates Fertile Ground. With apologies to Randall Munroe. Oh yes, of course, XKCD.
Leo Laporte
All right, well let me, let me pull it up. We'll look at it together for the first time.
Steve Gibson
This is what you begat.
Leo Laporte
So you remember his famous XKCD cartoon with the blocks all teetering on one little block that's written by a single developer. Oh my God. This has gotten a little more complicated.
Steve Gibson
Wow. So this is an updated version of this ridiculous set of towering blocks.
Leo Laporte
This is hysterical.
Steve Gibson
It's wonderful. I'm not sure if those are supposed to be tombstones there at the very bottom, with Linus Torvalds, IBM, TSMC, and K&R. Clearly K&R is Kernighan and Ritchie. But at the very, very top we see a tiny little black speck that says You Are Here. And then we have a zoom-in window with a little guy whose thought bubble is WTF. And there's also some guy sort of teetering on the edge who is labeled Web Dev Sabotaging Himself. We've got that all sitting on WebAssembly, and there's a V8 engine, and it's all bracketed, saying Something Happening in the Web. Then a bracket on the other side that is more encompassing, titled All Modern Digital Infrastructure, referring to all that. We've got Rust devs flying in from the left, it says, doing their thing, doing a loop-de-loop and then slamming into the Oracle block. And it looks like maybe JWT, Java web tokens, are above that.
Leo Laporte
No, JVM, that's the Java Virtual Machine.
Steve Gibson
Ah, okay. JVM.
Leo Laporte
And it's teetering. The reason I know is that it's teetering on Oracle, which owns it.
Steve Gibson
Right, perfect. Perfect.
Leo Laporte
And I love the Angry Bird coming in from the right.
Steve Gibson
And it's titled Whatever Microsoft Is Doing. The Angry Bird is apparently going to slam into this whole thing. CrowdStrike's got its little block. I don't know what left-pad is, Leo.
Leo Laporte
I don't know what that means either. Yeah.
Steve Gibson
Anyway, so also wedged in now we have a new thing. We have what would be a car jack if it were going to be a parallelogram and sort of raise the whole thing; instead, this is a screw-based wedge which is expanding, and that's of course labeled AI, because it's threatening to teeter the whole thing off its axis and cause it all to come crashing down. We have that one C99 project based on undefined behavior. And not to be left out, we've got a Cloudflare block and a set of four of the lava lamps, which of course Cloudflare made famous for their true random number generating system, based on the wax in those lava lamps. I also like that bunch of little blue squares in the lower right. I had to figure out what I was looking at, and I realized it's a shark biting an undersea data cable, which of course is going to cut off a whole chunk of the Internet if it chewed through the cabling on the ocean floor. libcurl, not to be forgotten, is there. AWS. C developers writing dynamic arrays. And we are reminded that all of this is driven by, made possible by, electricity. So the underlying foundation is a big block of electricity with some electric poles coming in to feed it.
Leo Laporte
Very, very funny. This is not only funny, but really pretty true.
Steve Gibson
Yeah, this is all the stuff we've talked about through the years on the podcast.
Leo Laporte
Yeah. Wow. Very nice.
Steve Gibson
So, a fun picture, and thank you to our listener who sent it to me. Okay, so there's news on the residential router front. Apparently someone at the FCC got a clue, or at least they listened to somebody who actually knew something about cybersecurity, because last Friday they announced a reversal of their previous no-updates-for-you policy. Tom's Hardware covered this and wrote: The Federal Communications Commission announced on Friday, May 8, through its Office of Engineering and Technology, the OET, that it was extending temporary waivers allowing certain foreign-produced drones, drone components, and consumer routers, you know, all those bad things that we think are coming from China, to continue receiving software and firmware updates in the United States. They remind us that in late 2025 and early 2026 the FCC added these categories of equipment to its so-called covered list, which effectively blocked already authorized devices from receiving post-approval software and firmware modifications. The agency subsequently issued waivers permitting critical security and functionality updates to continue through March 1st of 2027. So here we are, around the same time in 2026, so basically you get a year more of updates for consumer routers. Now, under the now-updated waiver, manufacturers of affected devices will be allowed to continue issuing software and firmware updates until at least the 1st of January 2029, so almost another two years, provided that the devices had already been authorized for use in the US before being added to the FCC's covered list. Meaning, nothing new can come in. We talked about that before. No new model numbers, which again is nuts. But okay, they write: The extension also broadens the waiver to include certain Class 2 permissive changes involving software and firmware updates intended to mitigate consumer harm. In its notice, they wrote, the FCC acknowledged that continued software support remains necessary.
This is them, you know, the light turning on for them: hey, continued software support remains necessary to protect US consumers. What do you know? The waiver specifically allows updates that maintain device functionality, patch vulnerabilities, and preserve compatibility with changing operating systems and network environments. But still no new models. But oh, you can have all the firmware updates you want. Which again, what? The agency argued that the public interest would be better served by allowing these limited updates rather than freezing software support entirely. Okay, in other words, duh. Anyway, Tom's adds: The waiver does not reverse the broader restrictions or remove the devices from the covered list. It applies only to already authorized products and the software and firmware related changes intended to maintain safe and secure operation. Manufacturers must still comply with other FCC requirements governing permissive changes and equipment certification. Okay, so as I said, given 1-1-2029, that allows for nearly an additional two years of updates to existing routers, which, yeah, is certainly good news. But of course the entire thing remains unspeakably ridiculous, because control over a router's firmware is all anyone needs to turn that previously authorized and approved, because it existed back, you know, a year ago, router into an Internet bandwidth weapon. The hardware doesn't need to change, the model number doesn't need to change. It's all firmware. So either you trust the foreign manufacturer of a router or you don't. And if you do, then there's no problem.
And if you don't, then limiting updates, like allowing any updates but limiting them to the original March 1, 2027 deadline, even that one year is of absolutely zero benefit, since you've given them, under the assumption that they have malicious intent, one full year to cook up some new sneaky malware update with which to infect any routers that may be updated during that year. In other words, none of this has ever made any kind of sense. As we saw last week, CISA, our cybersecurity agency, has been effectively neutered. Our agencies appear to now be staffed and run by people who will not push back against policies that they know are clearly wrong. So this sort of nonsense is what results. It's difficult to imagine this could have happened back when CISA was at its original strength and staffing, because there would have been people there who would have said, what? No, I mean.
One of the reasons we liked CISA so much was that they had taken such responsibility for getting into the you-must-update-your-stuff business, and pushing that out to all of the government agencies over which they had any oversight. Apparently that's not what we do anymore, even though it was the right thing to be doing. Then one piece of good news, and Leo, you added a second piece of good news, for those who like and use Netgear routers, and now the Eero products: even before the addition of those additional two years of firmware updates, Netgear, and now we know Eero, had announced that they had received the FCC's conditional approval for their routers. This meant that none of those ridiculous FCC-imposed restrictions would affect any of Netgear's and Eero's router products, not those already sold, and not any current or future models. It's like this membership on this list just doesn't exist. So they get a full pass, which also includes their right to update their firmware with abandon anytime they feel the need. So, yay. Okay, so we've heard again from the guys at Aisle Security. Remember their name, A-I-S-L-E. They're that commercial group who've been using their own AI, as their name suggests, to find flaws in software, and who were somewhat annoyed, as we discussed a couple weeks ago, by all the hoopla that Anthropic was able to generate around Mythos. The headline of last Thursday's posting of theirs was: Aisle discovers CVE-2026-42511, a 21-year-old FreeBSD remote code execution vulnerability. So AI was used. This has been a problem in FreeBSD for 21 years. As we'll see in a second, FreeBSD actually inherited it from OpenBSD, when the FreeBSD open source project grabbed a chunk of code from a different open source project, OpenBSD, and with it came a serious problem.
So this posting of Aisle's was written by the discoverer of this flaw. He writes: FreeBSD is often described as one of the most secure operating systems in the world, with its reputation arising from its high quality networking stack, deliberate engineering, and a philosophy of security through simplicity. FreeBSD's history and usage are remarkable. It powers Netflix's Open Connect infrastructure, Sony's PlayStation OS, part of Nintendo's Switch OS, Yahoo's backend services, NetApp's storage systems, and Citrix's NetScaler; it has long helped form the software base of major networking platforms, Cisco, Juniper and so on, and historically WhatsApp's backend services; and it is now the focus of a substantial foundation effort to make it work better on modern laptops. And he writes, for full disclosure, it remains this author's personal operating system of choice. And to that I will just add that it's also my own Unix OS of choice, and as I've often mentioned, it underlies the pfSense personal firewall and router system. And for me it runs our DNS and our newsgroups. So that's the Unix that I chose. You may remember, Leo, years ago a guy named Brett Glass was active in the early days of the PC industry, and Brett Glass knew his way around Unixes, and I remember having a conversation with him and saying, so what do you recommend? He said, FreeBSD, period.
Leo Laporte
There are other BSDs, there's NetBSD and OpenBSD, but FreeBSD is the one you like.
Steve Gibson
I do, and it has had some desktop and laptop orientation. I think it's $750,000 from a foundation affiliated with FreeBSD making a serious push to make it more desktop and laptop friendly, adding a lot more Wi-Fi drivers and making it a lot more hardware agnostic. So it's still alive and kicking. Anyway, I'll continue. Aisle writes: Aisle discovered a
remote command execution vulnerability in FreeBSD's dhclient that is trivially weaponizable and wormable, yikes, by any system on the same local network as the FreeBSD system. The vulnerability first entered FreeBSD in the 2005 release of FreeBSD 6.0, that's the year we started this podcast, 2005, really, thus 21 years old, and so is this podcast, when OpenBSD's dhclient was imported, and it laid dormant, that is, the vulnerability did, until discovered by Aisle. The vulnerability also affected OpenBSD until 2012, when that operating system deprecated dhclient-script completely, indirectly fixing the vulnerability. But FreeBSD didn't. The initial flaw was identified by Aisle's AI-based source code analysis pipeline and then investigated by our triage agents. Joshua Rogers, that's actually the author, so he's referring to himself, of Aisle's offensive security research team traced the relevant code paths, established the full security impact, and developed a proof of concept demonstrating a complete local-network-to-root exploit chain. FreeBSD is adding key improvements to laptop support, including greater Wi-Fi support, so the attack surface here becomes even more relevant to everyday systems. A malicious wireless access point, or in some cases another attacker just sharing the same Wi-Fi network, able to spoof DHCP, can target the exact DHCP path that almost every wireless FreeBSD system will rely upon. Imagine you're the author of this post, who runs FreeBSD on their laptop, as this guy does. You're at a coffee shop, airport, or hotel, and as soon as you connect your FreeBSD-equipped laptop to the Wi-Fi, your whole system is hijacked in secret. Imagine you have a PlayStation whose OS is locked down from any unofficial access, only to be hijacked by connecting to a network. In other words, this vulnerability not only affects servers, but any FreeBSD machine that connects to a network using DHCP, which is the default setup for almost everybody.
The vulnerability was a logic flaw that allowed attacker-controlled protocol data to be persisted into a trusted configuration-like format without proper sanitization, then later reinterpreted in a privileged execution path. That is exactly the kind of bug Aisle's autonomous security platform is built to find. And get how he signs off here. He says: Like our recent findings in OpenSSL, Firefox, libpng, and Amazon's crypto stack, this result came from disciplined engineering and end-to-end analysis, not model mythology.
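To make the bug class concrete: the pattern Joshua describes, attacker-supplied protocol data landing unsanitized in a shell-interpreted context, can be sketched in a few lines of Python. This is not FreeBSD's actual dhclient code; the hostname option, the hook-command shape, and the injected /bin/evil path are all hypothetical, purely for illustration of the flaw class:

```python
import shlex

def build_hook_command_unsafe(hostname: str) -> str:
    # Naive: interpolate the DHCP-supplied value straight into a
    # shell command line, the way a config-script hook might.
    return f'hostname "{hostname}"'

def build_hook_command_safe(hostname: str) -> str:
    # Sanitized: shell-quote the attacker-controllable value so it
    # can only ever be one literal argument, never extra commands.
    return f'hostname {shlex.quote(hostname)}'

# A malicious DHCP server answers with a "hostname" option that
# closes the quoted context and smuggles in its own command.
evil = 'laptop"; /bin/evil; echo "'

unsafe = build_hook_command_unsafe(evil)
safe = build_hook_command_safe(evil)

print(unsafe)  # /bin/evil would run as a separate shell command
print(safe)    # the whole value survives as one quoted argument
```

Run through a shell, the unsafe string executes three commands while the quoted one executes exactly one, with the hostile bytes inert inside its argument. That, in miniature, is the difference between the 21-year-old bug and the fix.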
Leo Laporte
Oh please.
Steve Gibson
Okay, so it sounds like they may still be somewhat annoyed by the mythology
Leo Laporte
of Mythos, which is not mythology as you pointed out.
Steve Gibson
Exactly. It's happening. But in any event, AI truly is finding serious flaws, many of which have been present for decades. In this case we're talking about FreeBSD's DHCP client, which can be fed a maliciously formed DHCP reply containing commands that it will execute. As Joshua, who authored this write-up, noted, this could have been extremely serious if it had not been found by the good guys. At some point we may see people claim that AI-enhanced software vulnerability discovery never turned out to be such a big deal. Remember, though, that Y2K could have been a big problem too, if it hadn't been caught beforehand and dealt with. Objective observers would, I think, do well to remember all of the many critical vulnerability discoveries like this one that did serve to clean up our archaeological code base before the bad guys had the chance to get in there and exploit it.
Leo Laporte
The question, of course, is how long it's going to be before the bad guys get access to these models. Well, you're gonna have a story about this in just a little bit, actually.
Steve Gibson
Yeah, yeah. Okay, so there was a bunch of Internet chatter last Friday about a several-hours outage of Let's Encrypt. Our listeners know that with so much of the web now utterly dependent upon certificates issued by Let's Encrypt, and with maximum certificate lifetimes continuing to drop, and especially with Let's Encrypt's optional super-short six-day certificates now being available, any outage of the system upon which so much now depends is of interest. And I'll say again that, unfortunately, I get how Let's Encrypt happened. I understand the appeal. But it's so antithetical to the deliberately distributed model of the Internet. I hope this never comes back to bite us, because it is creating a single point of failure where everything else has been designed to prevent that. In this case, this outage, such as it was, was deliberate and temporary. It was an administrative suspension of new certificate issuance following reports of a missing extension from one class of certificates. Let's Encrypt's post-incident report said, and this involves a lot of inside-baseball terminology: Let's Encrypt's Gen Y, which is the Ye and Yr cross-certified subordinate CAs, were issued in violation of CCADB policy, which requires that the serverAuth EKU extension MUST, all caps, be present in cross-signed intermediate certificates issued since 06-15-2025. Roots Ye and Yr were issued on 09-03-2025 and are therefore subject to the requirements. Okay, so the certificate extension in question, which is to say a serverAuth EKU, where EKU is the abbreviation for Extended Key Usage: it specifies limits on the application of any certificates which that CA intermediate certificate would be validating. And so the limits are things like: can be used for server authentication, or client authentication, and/or code signing, and/or email protection, and/or time stamping.
So again, it specifies what that certificate is authenticating, for what purposes. And that extension was missing. So it should have been there. It wasn't. It mostly doesn't matter that it wasn't. But as we know, any certificate authority must, again, that should be all caps, MUST, take their responsibilities absolutely seriously, and Let's Encrypt did. They immediately stopped issuing new certificates when this issue came to their attention and they confirmed it. They fixed the problem. They resumed issuing certificates. In their words, quote: We temporarily disabled certificate issuance, deployed a configuration change to prevent future issuance from the cross-signed Gen Y hierarchy, and then re-enabled issuance. So, thank you very much. We fixed it. Problem solved. Nothing to see. But again, they are taking their responsibilities seriously. Essentially, the trust that the entire industry is placing now on the 70 percent, and growing, of all web certificates which are being signed by Let's Encrypt, we need to know that they're doing that job correctly. And Leo, we're a little past half an hour in. Let's take a break, and then look at, unfortunately, why we can't have nice things. Yes, the poisoning of AI models.
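For listeners who want the rule Let's Encrypt tripped over in concrete form, here's a tiny Python sketch of the compliance check. The serverAuth OID (1.3.6.1.5.5.7.3.1, the standard id-kp-serverAuth value) and the 06-15-2025 cutoff come from the policy as quoted; the function itself is illustrative, not CCADB's or anyone's actual linter:

```python
from datetime import date

# id-kp-serverAuth, the Extended Key Usage purpose for TLS servers.
SERVER_AUTH_OID = "1.3.6.1.5.5.7.3.1"
# Cross-signed intermediates issued on or after this date MUST
# carry the serverAuth EKU, per the CCADB policy quoted above.
POLICY_CUTOFF = date(2025, 6, 15)

def intermediate_is_compliant(issued, eku_oids):
    """eku_oids is the list of OIDs in the certificate's EKU
    extension, or None if the extension is absent entirely."""
    if issued < POLICY_CUTOFF:
        return True  # the requirement only applies to newer certs
    return eku_oids is not None and SERVER_AUTH_OID in eku_oids

# The Ye/Yr case: issued 2025-09-03 with the extension missing.
print(intermediate_is_compliant(date(2025, 9, 3), None))  # False
```

Note that a missing extension (None) and an EKU list that merely lacks serverAuth both fail the check, which matches the MUST-be-present wording.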
Leo Laporte
I'm afraid I don't want to hear about this, but I'm going to have to.
Steve Gibson
You need to be careful that your agents are not pulling models from OpenClaw, and
Leo Laporte
I use OpenRouter and Hugging Face, and I know, Hugging Face. You have to know that, because Hugging Face has more than a million models.
Steve Gibson
Yes. Where'd they come from?
Leo Laporte
Yeah, people are just making them.
Steve Gibson
Do you need them? No. No.
Leo Laporte
Well, you need some. I don't know if I need the bad ones. We'll find out what that is in just a little bit. But first, a word from Guardsquare, our sponsor for this segment of Security Now. Now, are you a mobile app developer? You really want to listen up here. Mobile apps today are, and this is good news for you as a mobile app developer, an inescapable part of life. We use mobile apps for everything from financial services to healthcare to retail and entertainment. And here's the thing: users trust those mobile apps with their most sensitive personal data, especially in finance and healthcare, right? Unfortunately, a recent survey showed 72% of organizations have experienced a mobile application security incident in the past year. 92% of respondents reported rising threat levels over the last two years. I think you don't need a survey to tell you that. Meanwhile, attackers who want your users' personal data, I mean, that's the gold they're looking for, are constantly finding new ways to attack your mobile app. I'll give you one way that they do it. For instance, they will take your app, it's actually fairly trivial now with AI and Ghidra, reverse engineer it, take it apart, repackage it, by the way, this is, I suspect, what's happening with these AI models as well, and then distribute the modified app. They do it with phishing campaigns: hey, we've got an update, you've got to get the latest version. Or side loading, or third-party app stores. There are all sorts of ways to do this. That's just one of many ways they're attacking your app. But you need to prevent this. You need to take a proactive approach to mobile app security. And by doing so, you can stay one step ahead of these attacks and, more importantly, most importantly maybe, maintain the trust of your users. And that's where Guardsquare, our sponsor, comes in. Guardsquare delivers mobile app security without compromise, providing advanced protections
for both Android and iOS apps, combined with automated mobile application security testing to find vulnerabilities, and real-time threat monitoring, so that you can gain insight into the attacks that are occurring, not, you know, in general, but to your app specifically. Discover more about how Guardsquare provides industry-leading security for your mobile apps at guardsquare.com. That's guardsquare.com. We thank them so much for supporting our show and for the work they're doing to protect us as mobile app users and mobile app developers. Guardsquare.com. Steve, I've replaced the white shirt, as you can see, with a shirt.
Steve Gibson
Ah, now we recognize you.
Leo Laporte
I got this in Orlando when we were out there for the gator. Yeah, this is my from Gator Land. It's got gators on it. Yep.
Steve Gibson
Nice.
Leo Laporte
Little bit more recognizable. Not, not a white shirt for sure. All right, now I want to hear about this AI thing.
Steve Gibson
Oh boy. So last Friday, The Next Web posted the news of an analysis of the large language models being hosted and offered at Hugging Face and ClawHub. The news is not good. Yeah, here's what they said: The two most important software supply chains in artificial intelligence have been systematically compromised. Hugging Face, the repository that hosts more than a million machine. More than a million? Where? What?
Leo Laporte
It's amazing when you go there, it's amazing. I mean there are, you know, the, the kind of maybe several dozen root AI models, but then people are creating their own spins of it and so forth. It's really incredible.
Steve Gibson
The repository that hosts more than a million machine learning models used by virtually every AI company on the planet has been found to contain hundreds of malicious models capable of executing arbitrary code on the machines of anyone who downloads them. ClawHub, the public registry for OpenClaw AI agent skills, has been infiltrated by a coordinated campaign that planted 341 malicious skills designed to steal credentials, open reverse shells, and hijack AI agents for cryptocurrency mining. The attacks are different in technique but identical in logic. Both exploit the implicit trust that developers place in shared repositories. Both use the infrastructure that the AI industry built to. Excuse me. That the AI industry built to accelerate development as the vector for compromising it. Hugging Face has been aware of malicious models on its platform since at least 2024, when security firms JFrog and ReversingLabs independently identified models containing hidden backdoors. My turn to have a throat tickle.
Leo Laporte
Sorry, yeah, I'm just looking right now at Hugging Face at their model repository, and this is actually kind of stunning. They list 2,869,086 different models.
Steve Gibson
My God.
Leo Laporte
Yeah, I mean, well, let's hope they
Steve Gibson
have a good search engine, because they do, actually.
Leo Laporte
They have a very good search engine. And the thing is, it's not, you know, it's not like ChatGPT alone. I mean, there are models to do all sorts of things. I have a specific model I use from Hugging Face that's just for text embedding. That's all it does. So, you know, and these are highly customized in many cases.
Steve Gibson
So, very vertical applications.
Leo Laporte
Very vertical slices. Exactly. Exactly. Yep. I mean, it's a great repository. This is really somewhat different, though, from the OpenClaw registry, but I'll let you talk about this because I know.
Steve Gibson
So they said: Hugging Face has been aware of malicious models on its platform since at least 2024, when security firms JFrog and ReversingLabs independently identified models containing hidden backdoors. The problem has not been contained; it has scaled. Protect AI, which partnered with Hugging Face to scan the platform's model library, and given its size, that's no small feat, has examined more than 4 million models and identified approximately 352,000 unsafe or suspicious issues across 51,700 models. JFrog found more than 100 models capable of arbitrary code execution. The attack technique, known as NullifAI, kind of, you know, a play on "nullify," exploits Python's pickle serialization format, the standard method for packaging machine learning models. Attackers embedded malicious Python code at the start of the pickle byte stream and compressed the file using 7z rather than the default zip format, which breaks Hugging Face's PickleScan detection tool. Well, and that's just dumb, that Hugging Face can't check 7z compression in addition to zip. The payloads are not subtle, they write. Security researchers have documented models that establish reverse shells, okay, meaning that it connects out to a remote command and control server and says, what would you like me to do now, connecting to hard-coded IP addresses, giving attackers direct access to the machine of anyone who loads the model. Others execute credential theft, exfiltrate environment variables, or download secondary malware to the user's machine. A data scientist who downloads what appears to be a legitimate model for a research project or production pipeline is, in some cases, handing control of their machine to an attacker. Hugging Face has responded by partnering with JFrog and Wiz Security to improve scanning capabilities. Remember that Google bought Wiz?
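To make the pickle danger concrete, here's a minimal, harmless Python sketch of the mechanism being described. The `Payload` class name is just illustrative, and the `print` call stands in for what a real payload would do, such as opening a reverse shell:

```python
import pickle

# Any class can tell pickle "to rebuild me, call this function with these
# arguments" via __reduce__. An attacker substitutes something like
# os.system here; we use a harmless print so the effect is visible.
class Payload:
    def __reduce__(self):
        return (print, ("payload ran during pickle.loads()",))

malicious_bytes = pickle.dumps(Payload())

# The victim merely *loads* the "model" -- no attribute access, no explicit
# call -- and the embedded callable executes immediately.
pickle.loads(malicious_bytes)  # prints: payload ran during pickle.loads()
```

This is why a pickle-based model file is effectively a program, not data: the code runs during deserialization, before anyone has a chance to inspect what was loaded.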
JFrog's integration has eliminated 96% of false positives in malicious model detection. But the platform's open architecture, which is the source of its value to the AI community after all, is also the source of its vulnerability. Anyone can upload a model. The scanning catches known patterns. The attackers who designed NullifAI built their technique specifically to evade that scanning. ClawHub, the registry for OpenClaw's AI agent ecosystem, faces a different but related problem. OpenClaw has grown to 3.2 million users and attracted partnerships with OpenAI. But its skill registry has become a target for attackers who understand that an AI agent executing a malicious skill has access to whatever the agent has access to, which in enterprise environments can mean databases, APIs, internal networks, and cloud credentials. In other words, in order for the agent to have agency, we need to give it control and access to things. Unfortunately, the malicious skill inherits that. Koi Security audited all 2,857 skills on ClawHub, thank goodness that's a manageable number, and found, unfortunately, 341 malicious entries. Of those, 335 were traced to a single coordinated operation called ClawHavoc. Separately, Snyk's, you know, S-N-Y-K, Toxic Skills research examined the broader ecosystem and found that 36%, say better than 1 out of 3, of all AI agent skills contain security flaws, with approximately 900 skills, roughly 20% of the total, classified as malicious. So 1 in 5 deliberately malicious. Thirty skills from a single author were silently co-opting AI agents for cryptocurrency mining. Which, right, makes sense: you've got a super powerful GPU, and you're thinking, wow, that AI is really working hard for me. No, it's working hard mining cryptocurrency for somebody else. They write: The ClawHub attacks are particularly dangerous because of the nature of AI agent architectures.
The rise of Model Context Protocol and similar standards in the agentic era has created a new category of software supply chain in which AI systems autonomously select and execute tools from external registries. A compromised skill does not require a human to click a link or open a file. It requires an AI agent to select the skill as part of its workflow, at which point the malicious code executes with the agent's permissions. The Hugging Face and ClawHub compromises are the AI-specific manifestation of a supply chain attack pattern that's been accelerating across the entire software industry. In March of 2026, the LiteLLM package on PyPI was compromised, potentially exposing half a million credentials, including API keys for Meta, OpenAI, and Anthropic. Meta froze its AI data work after the breach put training secrets at risk. In April, a Bitwarden command-line package on npm, which as we know, we covered it, was hijacked for 90 minutes with a payload specifically designed to harvest credentials from AI coding tools including Claude Code, Cursor, Codex CLI, and Adoration. Days later, the PyTorch Lightning package was compromised for 42 minutes with a credential-stealing payload from the Mini Shai-Hulud campaign. The European Commission itself was breached after attackers poisoned Trivy, and we've talked about that, an open source security scanning tool, demonstrating that even the tools designed to detect supply chain attacks can themselves become vectors. The United States Department of Defense published formal guidance on AI and machine learning supply chain risks in March of 2026, acknowledging at an institutional level that the AI software ecosystem has become a national security concern. The common thread is speed. The PyTorch Lightning compromise lasted 42 minutes. The Bitwarden CLI hijack lasted 90 minutes. The LiteLLM attack window is estimated at hours. These are not persistent campaigns that defenders have weeks to detect.
They're brief, targeted insertions that exploit the automated dependency resolution systems that modern software development relies on. A developer who runs a package install at the wrong moment downloads the compromised version. The window closes, but the damage is done. The AI industry has invested hundreds of billions of dollars in model training, inference infrastructure, and application development. The investment in securing the repositories through which that software is distributed has been a fraction of the total. Hugging Face has partnered with security firms. ClawHub has implemented basic moderation. Package registries have added two-factor authentication requirements. None of these measures has prevented the attacks documented above. State actors can already produce AI-powered malware that evades conventional detection, and the supply chain attacks on AI repositories represent a natural evolution of that capability. The models and skills hosted on Hugging Face and ClawHub are consumed by systems that make automated decisions, process sensitive data, and operate with elevated permissions. A compromised model in a production AI pipeline is not equivalent to a virus on a personal computer. It is a backdoor into an automated decision-making system that the organization trusts precisely because it appears to be a legitimate component of its AI stack. The fundamental problem here is architectural. The AI industry built its development infrastructure on the same open registry model that has defined software development for the past two decades: centralized repositories where anyone can publish, automated tools that download and execute code from those repositories, and a culture of trust that treats popular packages and models as implicitly safe. The difference is that AI models are not just code.
They are serialized objects that execute during deserialization, a property that makes pickle-based models inherently more dangerous than traditional software packages, because the malicious code runs the moment the model is loaded, before any human has a chance to inspect it. The AI supply chain is now the most attractive target in the software security space. The repositories are trusted, the consumers are automated, the payloads execute on load, and the industry that built these systems is spending its security budget on model alignment and prompt injection, while the infrastructure through which the models are distributed remains, in the assessment of every major security firm that has examined it, comprehensively compromised. So Leo, I would say that the caution and trepidation you felt and shared when you were first considering turning OpenClaw loose in your world was likely warranted. This is not an entirely new phenomenon, although with popularity comes increased focus by the bad guys, and we've seen this thing just skyrocket so far this year.
Leo Laporte
The difference between the two, Hugging Face and the skills repository, is that it's trivially easy to write skills. They're just text. To make your own malware model, that takes a little bit more skill. But anybody can write a skill; it's just plain text. And so I guess the skill involved is how you insert the, you know, Shai-Hulud-style bitcoin stuff. But it's not complicated. And so that's why I think you're going to see it in these registries for skills. I never use a skill from the public. I look at skills. What I often do, and I would recommend people do
Steve Gibson
this is, I use them as a training base. Yeah.
Leo Laporte
I point my assistant, I say: here's a GitHub repository for a skill; assess this, tell me what you think of it and how we could apply it. And then I let it write a skill, which I will then check. And that actually works quite well. I've built basically my own OpenClaw from scratch with just the pieces that I want. And it's a lot, I think.
Steve Gibson
I love that you said "how we could apply it," because you weren't talking about you and Lisa, you were talking about you and the AI. Me and Claudia, my good friend. It is so difficult not to think of this thing as an entity. I mean, it is just astonishing. Anyway, I wanted to share this with our listeners because I'm certain that we have many listeners who are enjoying playing around with and experimenting with, and perhaps even deploying systems or solutions using, these openly available AI models. Please, please, please be careful, because one of the problems here is that the way this has been built and deployed makes it so easy to use this stuff. That alone should be a cause for raising a red flag in the minds of any security-aware person. So, as you are, Leo, you know, looking at this and not just saying go; you're saying, let's take a look at it, what should we do?
Leo Laporte
And well, you've taught me sensei, over the many years that we have done this.
Steve Gibson
I'm glad it's, like, sunk in. That's good. But again, it's so easy in a moment of enthusiasm to say, well. And you were tempted, right, when OpenClaw happened. It's like, oh, should I, shouldn't I? But somebody else who hasn't been sitting here for the last 21 years with me is going to go, hey, this is great, go, right?
Leo Laporte
Yeah. If you have any nervousness about it, trust your instincts because you're right. Yes, basically, yes.
Steve Gibson
So there's some good news on the horizon. The word is that the original decade-long CISA 2015, that stands for the Cybersecurity Information Sharing Act, which as we have talked about on a number of occasions expired last year because it had a decade-long life from 2015 to 2025, and which was then temporarily extended until this coming September, is now in the process of receiving its much-needed long-term reauthorization. And remember that this is what allows private sector enterprises to share their cyber intelligence with the government without fear of any legal blowback or reprisals. So this gives them cover. And we've heard from CEOs and CIOs who've been saying, you know, we really need to share some stuff that we have, like, it's important, but we can't, because we can't risk having ourselves taken to court. So anyway, hopefully another decade's worth of coverage is coming soon. A number of our listeners pointed me at the news that Microsoft's Edge browser is doing very little, much less than it could, to protect its users' passwords. A posting from the SANS Institute, which I've enhanced a bit, confirms it: yep, it's for real. This started with a post on X which highlighted research by, and we have an @-sign handle that's clearly hackerized, you know, with ones and O's and zeros and threes. I know.
Leo Laporte
Is he a 12 year old?
Steve Gibson
But that person did find this issue. The SANS Institute posting said Edge stores all of your browser passwords in clear text.
Leo Laporte
Oh great. Okay.
Steve Gibson
Yep.
Leo Laporte
Oh great.
Steve Gibson
Even if you have not used them in this session, you know, just in case, he writes, you might want them. He said: I figured it couldn't be that easy, right? But like so many things, yes, yes it was. To reproduce this, open Edge. Don't browse anywhere, just open it, he says. Flip out to Task Manager, search for Edge, then expand that task. Highlight the browser subtask, right-click, and choose Create Memory Dump. Navigate to where the .DMP file is stored. If you have not used strings before, you're in for a treat. Strings is of course just part of most Linux distros, but you can easily get a copy for Windows as part of Microsoft Sysinternals. Now let's look for passwords. You could use strings and look for known credentials: just search for a known password and you will certainly find it. Or you can take advantage of the format of the saved data, which is the URL of the site followed by its protocol, meaning like https probably, then a space, then the user ID, a space, and the password, all of that for the site, he says. So searching for the TLD immediately followed by the protocol, meaning for example google.com immediately followed by https, he says, or in most cases just .com followed directly by https with no spaces, will find them, and they'll all be in one nicely formatted group, no less. The command for that will be: strings -n 8 msedge.dmp | find ".comhttps" and then hit enter. Bang. He says it really is that easy. And the ironic thing: to view these same credentials in the browser, there's a whole security theater process where Edge wants your biometrics as proof before disclosing even the user ID and site names, you know, for security. All the while the whole shot is there in clear text, free for the looking. Also, as noted in the X post, Microsoft classifies this as intended behavior. I'm not sure what manager or lawyer, he writes, decided that.
Hopefully it wasn't anyone in their security team. Any logged in Windows Edge user can dump all of their stored Edge credentials with no additional rights, which means any malware that the user executes also has access to all of those credentials for the asking. But he says not to worry, right? It's intended behavior.
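For anyone curious what that strings-style hunt actually looks like, here's a hedged Python sketch. The synthetic dump bytes and the "site-plus-protocol, space, user ID, space, password" layout follow the SANS posting's description, not a format verified here, and the filename and values are made up:

```python
import re

def extract_strings(blob: bytes, min_len: int = 8):
    """Mimic the Unix `strings` tool: pull out runs of printable ASCII
    at least min_len characters long from a binary blob."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, blob)]

# Synthetic stand-in for a browser memory dump. Per the posting, a saved
# credential appears as "<site><protocol> <user> <password>", so the URL's
# TLD runs directly into "https" -- a handy search anchor.
dump = (b"\x00\x01garbage\xff"
        b"example.comhttps alice hunter2"
        b"\x00\x02more-noise\xfe")

hits = [s for s in extract_strings(dump) if "comhttps" in s]
print(hits)  # ['example.comhttps alice hunter2']
```

The filter on "comhttps" plays the role of the `find ".comhttps"` pipe stage in the posting's one-liner; the point is simply that no decryption is involved, only pattern matching on printable text.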
Leo Laporte
Remember, this is what Chrome did for a while. It kept it in plain text.
Steve Gibson
Well, it's Chromium. Edge is Chromium-based.
Leo Laporte
I don't think they do now, though. That's what's surprising.
Steve Gibson
But, he said, it's intended behavior? If what's intended is also to get me to use Firefox or Chrome, then yes, it's working. Gosh. So when that came to the attention of some of the guys in our Security Now newsgroup, someone thought, really? And did it. And he's like, oh crap. Yes, terrible. There are all of my domains, usernames, and passwords. Basically, you can log in as that person anywhere.
Leo Laporte
Wow.
Steve Gibson
Crazy. Okay, we're at an hour. We're going to do some feedback. Leo, let's take another break and we will hear from two of our listeners.
Leo Laporte
Absolutely. If I can figure out what button I need. Oh, there. Okay, I need to press it.
Leo Laporte
It is time to talk about Doppel, as in, as you pointed out a couple years ago, Doppel as in Ganger. Yes, as Steve said, Doppel, our sponsor for this segment of Security Now. Maybe you know that message you just got? Maybe that is an urgent message from your CEO. Or could it be a deep fake trying to target your business? These days, I think, more likely the latter. AI can impersonate trusted individuals, I mean, down to the voice, and they can do it in video now. Doppel's platform illustrates how frequently users fall for these phishing attempts. They call it vishing, right, with a V as in voice. In call simulation deployments, targeted users, who didn't know they were targets, right, this was a test, spent six minutes conversing with a deep fake, and afterwards, when they were told, 100% of them thought, oh no, that's human. It's not. It was an AI. But they didn't believe it. Doppel is the AI-native social engineering defense platform. Doppel strengthens human risk management by training employees to recognize deception, while their digital risk protection detects and disrupts attacks across every channel. As attackers turn to AI to power increasingly sophisticated strikes, Doppel uses AI to fight back with automated takedowns and multi-channel coverage. Yeah, because it's not just email anymore, is it? It's text, it's Slack, it's voice. Doppel builds AI defenses that put intelligence into every fight, and intelligence into your users too. Doppel works relentlessly to protect people, brands, and trust. Doppel, D-O-P-P-E-L, has best-in-class integrations and partnerships to seamlessly integrate into the existing security stack you're using. So that's nice. And Doppel's industry awards and testimonials speak for themselves. They're recognized as a winter 2026 G2 leader in users most likely to recommend, momentum leader, and best support. Kind of the big three.
Join hundreds of companies already using Doppel to protect their brand and people from social engineering attacks. It's out there. It's happening right now. You need Doppel. Doppel: outpacing what's next in social engineering. Learn more at doppel.com. That's D-O-P-P-E-L dot com. It's a shame we have to do this now, but the attacks are getting so sophisticated. It's really mind-bending.
Steve Gibson
Yeah. The ultimate consequence of the cat and mouse back and forth with, with escalation on both sides.
Leo Laporte
Yeah, that, that's actually right. Yep. On we go with the show, Mr. Gibson.
Steve Gibson
So Todd Whitaker, our listener, writes: Steve, I thought this might be of interest for Security Now. The Rival Security group has a thoughtful follow-up on the Claude Mythos FreeBSD exploit story, arguing that Mythos may not have been quite as creative as the initial coverage suggested. And he has a link to their whole posting. He writes: Their claim, as I understand it, is not that the result is unimportant. It is that the vulnerability, the prior fix pattern, and perhaps even the exploit-relevant structure may already have existed in the model's training data. The FreeBSD issue appears closely related to CVE-2007-3999, occurring in MIT Kerberos: the same general RPCSEC_GSS validation logic, the same stack buffer overflow pattern, and a strikingly similar bounds-check fix. So Mythos may have discovered something genuinely dangerous, but perhaps by recombining known historical material rather than reasoning from first principles in the way many of us initially imagined. He writes: That still seems worrying, just in a different way. If advanced models can rediscover old vulnerability patterns embedded in the fossil record of open source code, then attackers may not need models to be brilliant. They only need them to be tireless, well tooled, and good at recognizing dangerous old ideas in new places. And I'm going to interrupt, because Todd has a bit more to say about something different, but I just want to say I completely agree with that. As our understanding of computer science has evolved, one of the things that's happened, and it's kind of gone into the parlance of computer science now, is that we've come to notice patterns in the solutions to problems. You know, they're like a short and small abstraction away from the concrete solution, where we see that many different such concrete solutions can be grouped together by their sharing of a common underlying pattern.
So what Rival Security observed was Mythos Preview finding what we might describe as a flawed design pattern, a common type of mistake that coders have been making through the years, which winds up being a natural sort of mistake to make due to the underlying architecture of the computer behavior that we're programming. We all know that today's large language models excel at pattern discovery, probably more than anything; that's what they are. So we would expect that if someone somewhere made a similar mistake, and its correction had been captured in the model's training corpus, then it would indeed be able to make the connection. So I'd say that this is an interesting and useful observation about the underlying way, in this instance, Mythos discovered, maybe rediscovered, the newer, similarly patterned flaw. But, like Todd, I don't see anything taking away from the fact that, as Yoda might say, discover that flaw, it did.
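To make that flawed design pattern concrete, here's a toy Python sketch, emphatically not the actual Kerberos or FreeBSD code, of the recurring mistake: trusting a length field taken from the attacker's own message when filling a fixed-size buffer. In C, the unchecked copy smashes the stack; in this Python model, the buffer just silently grows past its intended size, but the missing one-line check is the same:

```python
import struct

BUF_SIZE = 16  # the fixed-size destination buffer

def parse_flawed(msg: bytes) -> bytes:
    # Read a 4-byte big-endian declared length, then copy that many bytes.
    # The classic mistake: the attacker controls the length field, and
    # nothing compares it against the destination buffer's capacity.
    (length,) = struct.unpack_from(">I", msg, 0)
    buf = bytearray(BUF_SIZE)
    payload = msg[4:4 + length]
    buf[:len(payload)] = payload   # in C this writes past the buffer
    return bytes(buf)

def parse_fixed(msg: bytes) -> bytes:
    (length,) = struct.unpack_from(">I", msg, 0)
    if length > BUF_SIZE:          # the one-line bounds check
        raise ValueError("declared length exceeds buffer")
    buf = bytearray(BUF_SIZE)
    buf[:length] = msg[4:4 + length]
    return bytes(buf)

# Claims 64 bytes of payload against a 16-byte buffer.
evil = struct.pack(">I", 64) + b"A" * 64
print(len(parse_flawed(evil)))  # 64 -- the "buffer" grew past its bounds
```

Because the pattern lives in the validation logic rather than in any particular code base, it's exactly the kind of thing a model trained on one historical fix could plausibly re-recognize elsewhere.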
Leo Laporte
yeah. I mean, it's not like it knew about the flaw, it just recognized the pattern.
Steve Gibson
I mean, yeah, it said: this seems familiar.
Leo Laporte
It's like if you recognized a buffer overflow. I mean, exactly.
Steve Gibson
Yeah, exactly. So his note continues with some interesting observations from his own background as a computer science educator. He writes: I would also be interested in your broader take on what this means for computer science education and the profession. My current working view is that the most productive human-AI collaboration in software engineering depends on advanced judgment: architecture, design patterns, there's patterns again, threat modeling, failure modes, invariants, trade-off analysis, and knowing when the AI's answer is superficially plausible but structurally wrong. The problem is not that students must learn the old ways before they're permitted to use the new tools, he says. That's just reframing the old "learn assembly before C" argument. The real issue is that AI makes coding cheaper while making judgment more valuable, he said. To supervise AI-generated software, a person needs mental models: how state behaves, where abstractions leak, how protocols fail, why concurrency is treacherous, how memory and parsing bugs become security bugs, and why a working MVP may still be architecturally unsound. Those mental models do not emerge from prompting alone. If we let AI collapse the difficult apprenticeship too early, we may produce developers who can ask for software but cannot reliably tell whether the software they received is safe, coherent, or professionally defensible. He says: I'm writing from my personal email, but my day job is in computer science education, so this one lands close to home. I spend a fair amount of time thinking about what we should still teach humans when machines can increasingly produce the code. So cool feedback. Todd, thank you. I don't think I've seen any more coherent and clear description of the AI-versus-human coding question. And I love his one line: The real issue is that AI makes coding cheaper while making judgment more valuable.
And I think that captures where we're headed in computer science education and vocation. In the same way that any higher-level language lifts its coder away from the grubby details of the specific underlying computer hardware, the use of coding-trained large language models clearly lifts its users from the grubby details of the way computers are applied to problem solving. As many, you know, never-before-programmed-anything users are discovering, they're now able to simply ask for what they want the computer to do, and the LLM will almost magically produce a potion that does that. But there are clear limits to what can be asked for. We saw a perfect example of such limits last week, when those bad guys who had created that credit card clearing web portal apparently just forgot to ask for authentication to be added. Whoopsie. What we're seeing rapidly evolve during the rush to use AI for code generation is that for AI to be applied to the creation of any very large and complex solution, a solution architect is still required. No large problem can be dropped whole into AI's lap. At least not today, not yet, and I don't know when. Instead, for now, a solution architect who's trained and experienced in the application of the various higher-level solution abstractions that have been developed over the years of true computer science needs to carefully decompose the larger problem into much smaller, individual, safely codable modules. These solution architects are the true core of the science of computing. You know, coding is just the implementation, and these are the sorts of things that Donald Knuth and other scholars of the art have spent their lives exploring and documenting. So yeah, I think, Leo, and this is what you've talked about, the way you're now approaching the application of AI reflects your understanding of the way computers are applied.
You're breaking the problem down into pieces that you intuit AI is able to do the grunt work on, and you're giving it the interfaces that it needs for the various pieces of individual grunt work. Yeah.
Leo Laporte
And I'm finding more and more, well, I use the coding as a basis for things like cron jobs, things that are going to run over and over again. And then a lot of what I'm using AI for now is text-based stuff. I had it plan our itinerary for Hawaii, for instance. And if you give it the basis, the information it needs, I have it use my Obsidian journals and things, it does really quite a good job. I sent it to my travel agent. I don't know how thrilled she was with the generated recommendations. I wanted her to say whether they were good or not, but I realized she might feel like it was kind of taking her job a little bit.
Steve Gibson
I think it's probably going to.
Leo Laporte
Yeah, yeah. I mean there's something that she does that no AI could do, which is the relationship she has with the various vendors.
Steve Gibson
Yes, yes.
Leo Laporte
And that's very, you know, that's human and only.
Steve Gibson
And things that she has heard from her other clients, where it's like, you know, I heard about this, you want to make sure you spend more time at, you know, this court.
Leo Laporte
Because I did, I reassured her. I said, you know, this is a nice starting point, but it can't duplicate what you do as a human being. And I think that's really the lesson all of us should learn, to calm down about AI: we still need humans. Humans add something that no AI can do, or, I think, will ever be able to do.
Steve Gibson
Yeah. What I have found, I remember very early on I shared one of my prompts where, you know, I went on at some length, and I remember you were surprised that I was talking to it as much.
Leo Laporte
Yeah.
Steve Gibson
Yeah. But the more language you give it, because it is a language engine, the more descriptive language you give it, the more it has to work with.
Leo Laporte
Yes. Yeah. I find I'm writing more and more detailed specs and plans than ever, because it does help it be more accurate if you're very clear about what you want.
Steve Gibson
Yeah. Okay, next. Listener Randy Crumb says: Hi Steve. In episode 1077 last week you mentioned you think companies that have closed source software should move quickly to utilize Anthropic's Mythos, Claude Security, or similar tools as they emerge. Their closed source code is only closed source to the outside world. I think you glossed over the risk that using these online AI tools potentially exposes your closed source code to the world. They have privacy and security tools in place, sure, but their motivation is financial. They have a setting to disallow using your AI conversations to train their AI models, but you have to trust that they're actually following that setting, for the same reason you trust Apple with your data more than Facebook or Google. The use of online AI tools to review closed source code is a risk. A security breach, internal or external, could be devastating to a software company that has its code exposed, he writes. I've been experimenting with LM Studio to run local, offline, open source LLM models for use with proprietary data, he says. (Note: I work with client data, not code.) The hypothesis is that a local offline LLM can be safely used with confidential internal proprietary data or code. These LLM models are usually not as current as the online tools, but they catch up quickly. Also, they're not as flashy, newsworthy, or marketing-hyped as strongly as the major online tools. What are your thoughts on the risk of exposing proprietary code or data by using the major online tools? Listener since episode one. Thanks for everything you and Leo do. So Randy's right. I did gloss over those risks, so I'm glad he brought it all up. And it's not at all that I meant to downplay them, given everything we know about cloud breaches, network data interception and decryption, and so on.
Even if the LLM provider did nothing wrong and made no mistakes, shipping highly valuable source code outside of a company's perimeter creates some risk. That said, how many firms are already doing just that by using GitHub? You know, I think that's insane myself. Yet it has become common practice to use GitHub for highly proprietary source code management. I'm not doing that, so perhaps my view is skewed. But it does mean that the company's crown jewels are already exposed outside. Randy correctly notes that sending the code up into the cloud for an LLM to rummage around in poses another level of danger. No question about it. So I completely agree with that in principle. So the use of local models, which I have absolutely no doubt we will someday see much more of in the future, makes a great deal of sense once they become as capable as what's available in the cloud. And at this point, Leo, I heard you mentioning, I didn't realize that there are now laptops being sold without RAM because memory has become so expensive. It's like, get your own RAM, here's what you can plug it into.
Leo Laporte
I think maybe some people think, some companies think, well, you might have some leftover RAM lying around or whatever. But they just can't get the RAM, so they want to still sell something.
Steve Gibson
Or maybe they think, well, he'll take it from the previous laptop.
Leo Laporte
Exactly.
Steve Gibson
Put it in the new laptop.
Leo Laporte
Right, yeah, exactly.
Steve Gibson
Wow. And so, anyway, given the insane appetite that data centers have for GPUs and things that run AI, it seems to me it's going to be a long time before we're able to buy things ourselves that also run AI, because we're competing with the data centers that are able to, you know, purchase all of next year's production capacity. I mean.
Leo Laporte
Right, Right.
Steve Gibson
It's crazy. It's crazy. Okay, so I want to plow now into what DigiCert is doing. We've got two breaks left. Let's take one now, even though we just did one. And then I will break in the middle of this DigiCert conversation for our final one.
Leo Laporte
Good. Good. Perfect. This episode brought to you by OutSystems. Speaking about AI, they're the number one AI development platform, but they solve a lot of these little issues that we've been talking about. OutSystems helps businesses bridge the enterprise gap to their agentic future. And I know, you know, I think a lot of businesses want to do this, but they're very nervous, as they should be, about the quality of the code they're going to get, about whether it'll be useful, about whether it's good for just little things or whether you can do big apps. OutSystems is here. They've been doing this for 20 years. They know how to do it. They know how to do it for business. They help your business bridge that gap and get to your agentic future, where the constraints of the past give way to unlimited capacity and scale. OutSystems enables your business to build AI agents that actually do work, such as take actions, make decisions and integrate with data. It's not a chatbot, it's not just answering questions, it's doing work. And OutSystems provides the only AI development platform that is unified, agile and enterprise proven. Let me explain. First of all, it's unified because you build, run and govern apps and agents on that single OutSystems platform. It's agile because now you can innovate at the speed of AI, very important, without compromising quality or control. And it is enterprise proven. They've been trusted by enterprises for mission critical AI applications and durable innovation. OutSystems is the secret weapon behind the world's most successful companies. Not just for those little one shot apps. They are for massive complex systems that run banks, insurance companies and government services. OutSystems even helps companies with aging IT environments bridge the gap to the AI future without a rip and replace nightmare. OutSystems provides the safest, fastest way for an enterprise to go from "we need an AI strategy" to "we have a functioning AI application." 
Stop wondering how AI will change your business and start building the agents that will lead it. Visit outsystems.com/twit to see how the world's most innovative enterprises use OutSystems to build, deploy and manage AI apps and agents quickly and cost effectively without compromising reliability and security. And that's fundamental. That's OutSystems, O U T S Y S T E M S dot com slash twit, to book a demo. We thank OutSystems so much for supporting Steve and Security Now. And you support us when you go to that address: outsystems.com/twit. All right, let's talk.
Steve Gibson
Okay. The first I learned of some trouble was from someone posting to GRC's Security Now newsgroup with firsthand experience. Oh, yep. Peabody, which is his handle; his actual name is George. He wrote: This morning Windows Defender told me it had discovered a severe rootkit on my Windows 10 laptop called Win32/CartAgent.A!DHA. Now, okay, so consider that: Windows Defender tells you you've got a rootkit. It's like, what? So you don't take that lightly, right? Which it has quarantined, he wrote. Searching online tells me this is happening on both Windows 10 and 11 computers worldwide, and one hash involved is that of a legitimate DigiCert certificate. This is all above my pay grade, but I'm going to leave things alone for a while and see what happens. Turns out he was right. He said: Apparently lots of people are reinstalling Windows because of this, but I think that's super premature at this point. Right again. He said: My guess is this is a gift from Microsoft, which they will admit to shortly, and if you reformatted your drive, they'll apologize for the inconvenience.
Leo Laporte
Dripping with sarcasm there.
Steve Gibson
And unfortunately, their apology left a lot to be desired. Yeah, I was unimpressed. So later that same day, this past Sunday, Bleeping Computer's Lawrence Abrams was all over this and was providing answers. Lawrence headlined his reporting: Microsoft Defender wrongly flags DigiCert certs as Trojan:Win32/SLserent.A!DHA. So here's what he wrote. He said: Microsoft Defender is detecting legitimate DigiCert root certificates as, and then that Trojan name, resulting in widespread false positive alerts and, in some cases, removing certificates from Windows. Removing their root certificates, by the way. According to cybersecurity expert Florian Roth, the issue first appeared after Microsoft added the detections to a Defender signature update on April 30th. Today, administrators worldwide began reporting that DigiCert root certificate entries were flagged as malware and, on affected systems, removed from the Windows Trust Store. Okay, so hold on. Just to be clear what a disaster this was. As we know, root certificates anchor the chain of trust for everything that chains down to them. With them removed from a system, nothing that chains down to them will be trusted, despite having been trusted just moments before. That's the way the system works, and no one has come up with a better idea for validating signed code. Those two certificates are DigiCert's code signing roots, and, for example, all of GRC's signed apps are anchored by one of the two that were being deleted. So the mistaken removal of those two code signing roots from the Windows trusted root store automatically and instantly renders every app that was ever signed by a DigiCert certificate, you know, who as I said before is now the industry's number one certificate authority, renders every one of those apps invalid and untrusted by Windows. Huge, huge mess. 
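To make Steve's chain-of-trust point concrete, here's a minimal Python sketch. It's a toy model rather than real X.509 (all certificate names and fields are invented for illustration), but it shows why deleting a root from the trust store instantly untrusts every app whose signature chains to it:

```python
# Toy model of code-signing trust: each cert names its issuer, and a
# signature is trusted only if the chain ends at a root in the trust store.

def chain_to_root(cert, certs_by_name):
    """Walk issuer links from a leaf cert up to its self-signed root."""
    chain = [cert]
    while cert["issuer"] != cert["name"]:      # root certs are self-issued
        cert = certs_by_name[cert["issuer"]]
        chain.append(cert)
    return chain

def is_trusted(leaf, certs_by_name, trust_store):
    """An app's signature is trusted iff its chain anchors in the store."""
    root = chain_to_root(leaf, certs_by_name)[-1]
    return root["name"] in trust_store

certs = {
    "Toy Trusted Root G4": {"name": "Toy Trusted Root G4",
                            "issuer": "Toy Trusted Root G4"},
    "Toy EV Code Signing CA": {"name": "Toy EV Code Signing CA",
                               "issuer": "Toy Trusted Root G4"},
    "App leaf cert": {"name": "App leaf cert",
                      "issuer": "Toy EV Code Signing CA"},
}

store = {"Toy Trusted Root G4"}
trusted_before = is_trusted(certs["App leaf cert"], certs, store)

store.discard("Toy Trusted Root G4")   # Defender deletes the root...
trusted_after = is_trusted(certs["App leaf cert"], certs, store)
```

Note that nothing about the leaf or intermediate certificate changed; only the store did, which is exactly why restoring the deleted roots immediately fixed things.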
Bleeping Computer continues, writing: These false positives have led to concern, gee, you think, among Windows users, with some thinking their devices were infected and reinstalling the operating system to be safe. Microsoft has reportedly fixed the detections in Security Intelligence update version 1.449.430.0, and the most recent update is now 1.449.431.0. Actually, I think that should be dot one. Anyway, reports on Reddit indicate that the fix also restores previously removed certificates on affected systems. Well, that's nice. So yeah, thank goodness for that. And it's not as if Microsoft had any choice, right, about putting them back. It would have been a true disaster if there weren't some immediate means for reverting the specious removal of DigiCert's perfectly valid root certificates. As we'll see in a few moments, even though DigiCert did suffer a breach which caused it to mis-issue a handful of code signing certificates, at no point was the removal of any of their root certificates ever warranted. I mean, that's just nuts. I hope Microsoft will put some safeguards in place to prevent such a thing in the future. Bleeping Computer continues: The new Microsoft Defender updates will automatically install, and Windows users can manually force an update by going to Windows Security > Virus & Threat Protection > Protection Updates and clicking on Check for Updates. After publishing this article, wrote Lawrence, Microsoft confirmed that the false positives were linked to detections for compromised certificates from a recent DigiCert breach. Well, linked, but completely ridiculous to have deleted the roots. Microsoft told Bleeping Computer, and here it comes, quote, this is Microsoft speaking: Following reports of compromised certificates, Microsoft Defender immediately added detections for malware in our Defender antivirus software to help keep customers protected. 
Earlier today, we determined false positive alerts were mistakenly triggered and updated the alert logic. Microsoft Defender suppressed and cleaned up the alerts for customer environments. Customers should update to Security Intelligence version, then we get that same version number, or later, but do not need to take additional action. In other words, don't reinstall Windows for these alerts. We've notified affected organizations and recommended administrators look for more details in the Service Health Dashboard, the SHD, within the M365 admin center. Unquote. Huh. Okay, well, that's an entirely unsatisfying answer from Microsoft. But I suppose, given what Microsoft has become, it's the best we're going to get and the best we can expect. Nothing they wrote is untrue, but neither should it satisfy anyone who would have appreciated hearing them say something like, quote: In response to reports of compromised certificates, Microsoft Defender was a bit overzealous and mistakenly removed some related certificates that should have remained. Microsoft Defender was immediately updated to cure that behavior and has replaced any certificates that were mistakenly deleted. You know, is that so difficult to say? It shouldn't be. Just wait till you see how thoroughly DigiCert took full responsibility for their part in this drama. Lawrence's reporting continues, writing: The false positives occurred shortly after a disclosed DigiCert security incident that enabled threat actors to obtain valid code signing certificates used to sign malware. The DigiCert incident report explained, quote: A malware incident targeted a customer support team member. Upon detection, the threat vector was contained. Our subsequent investigation found that the threat actor was able to procure initialization codes, which I'll explain in a sec, for a limited number of code signing certificates, a few of which were used to sign malware. 
The identified certificates were revoked within 24 hours of discovery, and the revocation date set to their date of issuance. As a precautionary measure, all pending orders within the window of interest were cancelled. Additional details will be provided in our full incident report. Unquote. So that's a small sample of what good disclosure looks like. Lawrence continues: According to DigiCert's incident report, attackers targeted the company's support staff, meaning DigiCert's support staff, in early April by creating support messages containing a malicious ZIP disguised as a screenshot. After multiple blocked attempts, one support analyst's device was eventually compromised, followed by a second system that went undetected for a time due to an endpoint protection sensor gap. Using access to the breached support environment, the hacker used a feature in DigiCert's internal support portal that allowed support staff to view customer accounts from the customer's perspective. While limited in scope, this access exposed initialization codes for previously approved but undelivered EV, you know, extended validation, code signing certificates, DigiCert explained. Possession of an initialization code combined with an approved order is sufficient to obtain the resulting certificate. Since the threat actor was able to obtain these two pieces of information for a finite set of approved orders, they were able to obtain EV code signing certificates across a set of customer accounts and CAs. So, great explanation there. Lawrence says DigiCert revoked 60, six-zero, code signing certificates, including 27 linked to a Zong Stealer malware campaign. DigiCert explained: Eleven were identified in certificate problem reports provided to DigiCert by community members linking the certificates to malware, and 16 were identified during our own investigation. 
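An aside on the "revocation date set to their date of issuance" detail, because it matters more than it might sound: timestamped code signatures normally survive a later revocation, so only backdating the revocation all the way to issuance kills everything the stolen certificate ever signed. Here's a minimal sketch of that logic, using my own simplified model rather than the actual RFC 5280 / Authenticode rules:

```python
from datetime import date

def signature_trusted(signed_on, issued_on, revoked_on=None):
    """Trust a timestamped signature only if it was made inside the
    certificate's 'good' window: on or after issuance, before revocation."""
    if signed_on < issued_on:
        return False                   # signed before the cert even existed
    return revoked_on is None or signed_on < revoked_on

issued = date(2026, 4, 2)              # fraudulent cert obtained
signed = date(2026, 4, 10)             # malware timestamped mid-window
found  = date(2026, 4, 15)             # abuse discovered

# Revoking "as of discovery" would leave the earlier malware signature valid:
naive = signature_trusted(signed, issued, revoked_on=found)
# Backdating the revocation to the issuance date invalidates it:
strict = signature_trusted(signed, issued, revoked_on=issued)
```

So `naive` comes back trusted while `strict` does not, which is exactly why the CA rules DigiCert followed call for setting the revocation date to issuance when a certificate was never legitimately controlled.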
This aligns, writes Lawrence, with earlier reports from security researchers who had observed newly issued DigiCert EV certificates used in malware campaigns and reported them to DigiCert. Which, of course, is the nightmare scenario, right? I mean, all of the reasons I had to jump through all those hoops in order to get myself an EV certificate, actually not even an EV, just a standard validation certificate, is what all of this other mechanism is designed to prevent from happening. Researchers including Squiblydoo, MalwareHunterTeam and Gonksha reported that certificates issued, I
Leo Laporte
should make you read hacker handles every week. No, Squiblydoo, it says. Okay, I'm
Steve Gibson
gonna go Squibbitydoo. That's right. Issued to well known companies such as Lenovo, Kingston, Shuttle. Now these are the companies to whom these stolen certificates were issued, right? Lenovo, Kingston, Shuttle Inc., Pallet Microsystems, were all used to sign malware. So, question: what do Lenovo, Kingston, Shuttle Inc. and Pallet Microsystems have in common? posted Squiblydoo on X. EV certificates from these companies were issued and used by a Chinese crime group, Golden Eye Dog, and that's an APT known as APT-Q-27. The malware in this campaign is named Zong Stealer, though analysis indicates it may be more like a remote access trojan than an info stealer. The researcher says the malware was distributed through the following attacks: phishing emails deliver a fake image or screenshot; a first stage executable displays a decoy image; a second stage payload is retrieved from cloud storage such as AWS; and there's the use of, wait for it, signed binaries and loaders, including components tied to legitimate vendors. So, trusted because signed by DigiCert. After DigiCert disclosed the incident, the researcher said, the incident report explains how the certificates used in these malware campaigns were obtained. Because, like, clearly illegitimately. It should be noted that the certificates flagged by Microsoft Defender are root certificates in the Windows Trust Store and do not match the revoked DigiCert code signing certificates used to sign malware. Okay, so that's the great reporting posted Sunday before last by Bleeping Computer's founder Lawrence Abrams. Security industry experts have been citing DigiCert's upfront incident report as a model of how this should be done. Starting 21 days ago, DigiCert began issuing a series of incident reports, with each succeeding report updating the previous one, and with the final report being posted seven days ago. 
Exactly one week ago, DigiCert named this final event Endpoint 2, which is, that's the system where this bad guy was not immediately discovered. And their final report begins with this statement: This is an updated version of our full incident report, which completes the investigation of Endpoint 2. Their own overview description differs somewhat from what third parties reported and provides some additional detail. They summarize the whole thing, saying: On 2026-04-02, so April 2nd, a threat actor contacted DigiCert's support team via, so a threat actor, right, the bad guy, contacted DigiCert's support team via a customer chat channel and delivered a ZIP file disguised as a customer screenshot. So they were saying, you know, I have a problem, I don't understand how your portal works, here's a screenshot of the problem. The file contained a .scr, which we know is a screensaver executable, containing a malicious payload. CrowdStrike, and who is "we"? We know them, the endpoint security company that does a great job, except that they once brought down all of Windows, but that was, you know, whoops. CrowdStrike and other security measures, they wrote, successfully blocked four delivery attempts. Caught this guy, said nope, sorry, bad. A fifth attempt compromised Endpoint 1, a machine used by a support analyst. This delivery attempt was detected and contained by our Trust Operations team on April 3rd. So, five attempts: four were blocked, one got through, and the next day it was discovered. They wrote: Following an immediate internal investigation based on the telemetry data at hand, it was assessed that the incident had been contained. Okay, so that's their summary. Then we receive an interesting narrative, some of which Lawrence posted, but the deeper details are interesting. So this is DigiCert, you know, writing it all up: DigiCert received the initial third party report related to this incident on April 5th. 
Additional third party reports are identified in the timeline. So okay, just to recap before we go any further: the penetration occurred on April 2nd. DigiCert's Trust Operations team determined it had been contained the next day, on April 3rd. And the first third party report, that is, coming from some outside source saying, hey, we've got some malicious code here that's signed by, like, a fresh certificate of yours. So the first third party report of malicious code found in the wild, signed with a then-valid DigiCert EV code signing certificate, occurred shortly after, on April 5th. So that's going to, like, whoa, bring DigiCert to full attention. They report: DigiCert regularly receives certificate problem reports from community members and security researchers for code signing certificates, and proven key compromise cases are revoked pursuant to the code signing baseline requirements. Initial problem reports ultimately linked to this incident fit within the normal pattern of such revocations. Okay, so they revoked the certificate. Then 10 days go by. They write: On April 14th, further investigation identified that Endpoint 2, a different machine, a machine used by another analyst, was also compromised through the same delivery vector on April 4th. So that one had a 10 day window. CrowdStrike was not installed on that endpoint, meaning the compromise was not detected during the earlier April 3rd investigation. The machine, meaning Endpoint 2, this newest machine, was established, meaning, you know, set up, more than three years ago. Because our end user machine logs are retained for three years, we cannot determine why CrowdStrike was not installed on this particular endpoint. Okay, so, you know, at this point it really doesn't matter why. What matters is that because CrowdStrike was not installed on that second infected Endpoint 2 machine, its infection went undetected for 10 days. 
But in the interest of a full forensic, after the fact, how-did-this-happen investigation, they would have liked to know exactly why that machine had apparently never been under the protection of CrowdStrike. Their records only go back three years, and that machine predates that logging cutoff. So today they have no way of knowing what happened back, you know, when that machine was initially brought online, why it didn't get CrowdStrike. And now, of course, given that they found one machine that was missing its protection for an unknown reason, the question becomes: what other sensitive machines might also be missing their protection? You can imagine that they're going to go find out. Their reporting continues: Our Trust Operations investigation found that the threat actor used the compromised analyst endpoint to access DigiCert's internal support portal. The threat actor used a limited function within the customer support portal which allows authenticated DigiCert support analysts, you know, the people that we talk to as DigiCert customers, I've done that on a number of occasions, to access customer accounts from the customer's perspective, to facilitate their support tasks. Makes sense. This access is restricted and does not permit actions such as managing accounts, users, API keys, or submitting or managing orders. However, the threat actor was able to use this function to access, probably meaning view, initialization codes for EV code signing certificate orders that were approved but pending delivery, across a finite set of customer accounts. They write: Possession of an initialization code combined with an approved order is sufficient to obtain the resulting certificate. Since the threat actor was able to obtain these two pieces of information for a finite set of approved orders, they were able to obtain EV code signing certificates across a set of customer accounts and CAs. 
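The "bearer credential" nature of the initialization code is the crux here. A toy Python sketch, with invented names and structure that look nothing like DigiCert's real API, of how possession-equals-issuance works, and why single-use codes at least leave a usable trail of which ones were redeemed:

```python
import secrets

class ToyCA:
    """Toy certificate authority: an initialization code is a single-use
    bearer credential tied to an already-approved order."""

    def __init__(self):
        self.pending = {}                      # init code -> approved order

    def approve_order(self, customer):
        code = secrets.token_hex(8)
        self.pending[code] = {"customer": customer}
        return code                            # delivered to the customer...

    def redeem(self, code, public_key):
        order = self.pending.pop(code, None)   # pop() enforces single use
        if order is None:
            raise ValueError("unknown or already-used initialization code")
        return {"subject": order["customer"],  # note: no further identity
                "public_key": public_key,      # check -- possession suffices
                "issuer": "Toy EV Code Signing CA"}

ca = ToyCA()
code = ca.approve_order("Example Corp")        # approved but undelivered
cert = ca.redeem(code, "customer-public-key")  # ...or stolen and redeemed
```

Whoever presents the code gets a certificate in the victim customer's name, which is exactly what the attacker did, and the single-use property is what let DigiCert see which codes had already been consumed.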
Okay, now, just to put this in context, if you're wondering about DigiCert's phrase "across a set of customer accounts and CAs", the notion of differing CAs could seem strange, like, why would it be more than one CA, since we're only talking about DigiCert? But remember that when I was out shopping around for a new code signing certificate provider earlier this year, and finally settled upon IdenTrust, I discovered that a surprising number of the many apparent alternative certificate authorities all shared utterly identical prices, terms and conditions with each other, and with DigiCert. It quickly became clear that DigiCert had been busily gobbling up much of the competition. So all of these alternative CAs had just become different storefronts for DigiCert. And what they've just written confirms that these various fronts were all sharing DigiCert's common back end. I'm not criticizing DigiCert, it's, you know, smart business. But it does mean that we now have much reduced competition, and that's not usually best for consumers. Okay, so now we get some statistics and numbers. They write: During our investigation, between April 14th and 17th, as DigiCert identified certificates potentially affected by the threat actor's actions, we revoked them. DigiCert revoked 60, six-zero, certificates issued from the following CAs, and there are four of them: DigiCert Trusted G4 Code Signing, and they've got a bunch of different specs on that, and another of the same; then one called GoGetSSL G4 Code Signing, so that's probably one of the other compromised sub-CAs; and also something called VeroKey High Assurance Secure Code EV. So those were the issuing CAs that had been used to sign those certs. They wrote: 27 of the 60 revoked certificates were explicitly linked to the threat actor. Eleven were identified in certificate problem reports provided to DigiCert by community members linking the certificates to malware. 
And 16 were identified during our own investigation, which included review of the threat actor's activity in the support system, as well as tracing delivery to IP addresses known to have been used by the threat actor. You know, so they got information about the known problem certificates, then they looked at the metadata surrounding the issuance of those certificates, and then were able to use that to broaden their search and find any other certificates that the same threat actor had also managed to issue to itself. And so they were able to say: The IP addresses used by the bad actor to install certificates included, and they provided in their report unredacted IPs, you know, 82, 231868 and so on. There's a bunch of them there. So those are the IPs that the bad guy used in order to compromise DigiCert. They said: In addition to the 27 fraudulently issued and revoked certificates identified above, 33 of the total 60 certificates were revoked during our investigation as a precautionary measure. For these certificates, we could not explicitly confirm customer control. In addition, pending orders were cancelled, closing access to the threat actor. All identified certificates were revoked within 24 hours of discovery, with the revocation date set to their date of issuance. So, you know, note that we keep seeing this language, "within 24 hours of discovery." This is DigiCert explicitly asserting that it has carefully followed the well-established, you know, CA/Browser Forum guidelines for proper CA behavior. This is where, as we've previously seen and reported, the other disgraced certificate authorities fell well short. 
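That broadening step, from known-bad certificates, to the actor's delivery IPs, and back out to every other order delivered to those IPs, can be sketched roughly like this (the record fields are invented for illustration, not DigiCert's actual schema):

```python
def expand_by_delivery_ip(orders, reported_serials):
    """Return every order serial delivered to any IP that was also used
    to retrieve a certificate already reported as signing malware."""
    actor_ips = {o["delivery_ip"] for o in orders
                 if o["serial"] in reported_serials}
    return sorted(o["serial"] for o in orders
                  if o["delivery_ip"] in actor_ips)

# Example issuance metadata (IPs from the documentation range 203.0.113/24):
orders = [
    {"serial": "01", "delivery_ip": "203.0.113.7"},   # reported by community
    {"serial": "02", "delivery_ip": "203.0.113.7"},   # same actor, unreported
    {"serial": "03", "delivery_ip": "198.51.100.4"},  # unrelated customer
]

suspect = expand_by_delivery_ip(orders, {"01"})       # catches "02" as well
```

Starting from one reported serial, the pivot through shared delivery IPs surfaces the actor's other, not-yet-reported certificates while leaving unrelated customers alone, which is how DigiCert got from 11 community reports to 27 actor-linked revocations.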
You know, those others were, you know, under-the-rug sweepers, first hoping that no one would notice and catch them in their mistakes, and then, once they'd been exposed, working overtime to minimize and hide their failures. And you know, the truth is, no one expects anyone in this arena to be perfect. Perfection is not a requirement. Proper behavior and acknowledgment of a mistake, that's the requirement. So that's what DigiCert is busy doing here. They wrap up their initial overview by writing that the exploited certificates identified by the community member were found to have been used to sign the Zong Stealer malware family, and so forth. Basically the same stuff that Lawrence talked about. So the really interesting stuff comes next, it being how they take themselves to task over how this happened, and, in detail, the contributing factors that facilitated the attacker's success. So what I'm about to share is the reason I gave today's podcast the title "DigiCert Does It Right." This is written so objectively that it feels more like the work of outside auditors than DigiCert's own staff. It's just so difficult to fully disconnect one's own ego from truly self-indicting statements. But to DigiCert's credit, there was no sign of that, you know, the typical rolling disclosure that we've seen so many times elsewhere. You know, Microsoft, for one, could certainly learn a thing or three from DigiCert. So we're going to talk next about their headline root cause analysis, Leo, after we take our final break.
Leo Laporte
Okie dokie, man, we live in a dangerous world.
Steve Gibson
We live in a world where a huge amount of industry is being applied by the bad guys. I mean, remember how at the beginning of this podcast we had cute little viruses that used to infect people? And we'd go, oh, look at that. What does it do? Nothing, it just propagates. Why? Well, because it can, but it makes
Leo Laporte
it defaces your web page. That's all.
Steve Gibson
Everything changed when cryptocurrency allowed the bad guys to get paid. Yep, it turned it into a business model.
Leo Laporte
Well, fortunately, there's ThreatLocker, our sponsor for this segment of Security Now. A little zero trust goes a long way. ThreatLocker's Zero Trust platform delivers the industry's most comprehensive suite of zero trust solutions. They just added protection for networks and the cloud by extending Zero Trust enforcement to cloud services and company networks. I mean, this is huge. ThreatLocker ensures that devices are validated through a secure broker before connecting to platforms like, you know, the ones you're using: Salesforce or Microsoft 365, Asana, Google Workspace, GitHub. This means even if a user is successfully phished, and this is the $64,000 question, right? Can you keep your users from getting phished? But even if a user is successfully phished, with this Zero Trust platform, attackers cannot access resources. It stops them. They'd have to have physical possession of the user's trusted device and authenticate through that. I mean, that's a lot better, isn't it? ThreatLocker works across all industries. They have fantastic 24/7 US-based support. I've talked to them. I know how good they are and how much they care. It works in every environment: Windows, Mac, Linux. Absolutely. It enables comprehensive visibility and control. And I tell you, when you look at the control panel and you see how ThreatLocker's doing it, in fact, one of the things that's worth doing, if nothing else, is get a demo. Because you'll see right there on the control panel who's trying to access your system and what they're using. You know, you may not know that you have 16 different remote access tools running on your company network, but you put ThreatLocker on there, you'll know immediately. So it's just great for diagnosis alone, but of course, it does a whole lot more. Just ask Rob Thackeray. He's the end user technical architect at Heathrow Airport. You know, Heathrow's had problems in the past. 
I think there is a real burden on Rob to make sure that Heathrow is absolutely locked down. That's why they chose ThreatLocker. He said, and this is a direct quote: ThreatLocker was the most intuitive solution we tested, and the responsiveness of the organization, the willingness to engage with us, to set up a demo, to work with us on weekly audit reviews, was very good. It is great to have an ongoing relationship with a company that's so responsive to our requests. Thank you, Rob. ThreatLocker is trusted by global enterprises that just can't afford to be down. They cannot afford to be ransomed, they cannot afford to be phished. Companies like JetBlue use ThreatLocker, the Indianapolis Colts, the Port of Vancouver. ThreatLocker consistently receives high honors and industry recognition: a G2 High Performer and Best Support for Enterprise, Summer 2025. PeerSpot ranked ThreatLocker number one in application control. GetApp gave ThreatLocker their Best Functionality and Features award in 2025. And I could go on. I won't. But I mean, they've won all the awards. With ThreatLocker, you confidently ensure that users have access to a consistent, safe network connection. Offices, remote users, internal servers and critical services can maintain smooth operations without the need to open inbound ports. You don't even have to deploy traditional VPN solutions. Your end users will get the secure, reliable internal system access they need, wherever they are, without complex infrastructure changes. It really works. Get unprecedented protection quickly, easily and cost effectively with ThreatLocker. Do this for yourself: visit threatlocker.com/twit, get a free 30 day trial, and learn more about how ThreatLocker can help mitigate unknown threats and ensure compliance. That's threatlocker.com/twit. We thank them so much for their support of Security Now. And now back to the DigiCert story with Steve.
Steve Gibson
So, under Technical Background, they said: Understanding this incident requires understanding how EV code signing certificates are issued on hardware tokens. The customer requests a code signing certificate from DigiCert. Following validation, DigiCert securely provides an initialization code, we've heard that term throughout this, to the customer. The customer installs, or already has installed, DigiCert's Hardware Certificate Installer software locally, meaning at their end. The customer inputs the initialization code into the installer, which generates key pairs on the hardware token and submits the public key to the CA. The CA generates the certificate against the approved order. The installer retrieves the resulting certificate and installs it on the token. They said the process is described in a public knowledge base, and they provide the link. Possession of the initialization code combined with an approved order is functionally sufficient to generate and retrieve the corresponding certificate. The initialization code operates as a bearer credential for the approved order and is single use. This feature made it apparent which initialization codes had been used. Okay, so I've done exactly this in prior years with DigiCert, and when you think about it, the process of creating, signing and issuing a certificate that will be contained on a hardware token is a little trickier than you might at first imagine. The need is for the private key half of the public/private key pair to never, ever, for any reason, ever, just to be clear, ever, exist
Leo Laporte
Never.
Steve Gibson
Yes. Thank you, Leo. Outside the hardware. It's in the key and it never leaves. That means that it must be generated by the dongle, inside the dongle, and that the dongle will never export its ultra-protected hardware key. Now, web server public/private key pairs have no such requirement. A web server just uses the underlying operating system's cryptographic system to synthesize a key pair. It holds onto the private key while the public key is placed into a CSR, a certificate signing request, which the CA signs and returns. But forcing the private key to never leave the hardware dongle very much complicates matters. The point here is that DigiCert issues these initialization codes against a customer's account and sends them to the customer, who then uses them with DigiCert's own Hardware Certificate Installer app running on the client's machine with the hardware dongle plugged into it. The code, which is ingested by their app, validates their right to have a certificate by communicating on the back end with DigiCert's APIs and servers. Then it triggers the key pair generation on the hardware dongle. The hardware dongle does allow the public side, the public key, to be sent out, which the app then uploads to DigiCert. DigiCert then signs that public key and sends it back to the app, which installs it back into the hardware. So a lot is going on that the user never sees. From the user's perspective, it just kind of is magic. You enter your code and then you say go, baby, and a minute or two later it says, okay, you've got your key, you're all set to go. But the point is that the stateful nature of the operation creates some points of exploitability. There exists a window from the time that initialization code is issued to the time it is actually applied where bad guys who are able to get it could install it in their own hardware rather than the customer installing it in theirs. And these are big companies, right? Lenovo, for example, where they've got teams doing things, and they have initialization codes that have been issued but haven't been used yet. And so the bad guys took advantage of that window. So DigiCert states four of what they called contributing factors — things about the way their system was that helped this to happen. Contributing factor one, they said: inconsistent or incomplete endpoint detection coverage. Well, we know that, right? They said security tooling, CrowdStrike, was not uniformly configured by DigiCert across the user population exposed to the attack. The CrowdStrike prevention setting on endpoint 1 was below the intended organizational standard at the time of the initial compromise, allowing the malicious payload to execute before blocking engaged. The CrowdStrike sensor on endpoint 2 was not installed. As a result, no detection fired on the compromised machine. Logs for end-entity machines are retained for three years. Since this machine was set up more than three years ago, our security team could not determine why this particular machine did not have an installed CrowdStrike sensor. They also said the sensor not being installed was identified on April 14 during the expanded investigation triggered by the third-party report. You know, they thought they had it contained on the 4th; 10 days later it's like, oh crap, something big and bad has happened. They said the original investigation on April 4 did not include a check of EDR — you know, endpoint detection and response — enrollment status for all exposed users.
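The issuance flow described a moment ago — an initialization code acting as a single-use bearer credential, redeemed against an approved order by a token whose private key never leaves the hardware — can be sketched as a toy model. Everything below is illustrative: the names are hypothetical and an HMAC stands in for real hardware-backed signing, so only the protocol shape matches what DigiCert describes.

```python
"""Toy sketch of the EV token issuance flow. Not DigiCert's API:
real tokens generate RSA/ECDSA keys in hardware; here the 'crypto'
is simulated so only the protocol shape is on display."""
import hashlib
import hmac
import secrets

class ToyCA:
    def __init__(self):
        self._signing_key = secrets.token_bytes(32)  # stands in for the CA's private key
        self._pending = {}                           # init_code -> order id (single use)

    def approve_order(self, order_id):
        # The init code is a bearer credential: possession alone redeems the order.
        code = secrets.token_urlsafe(16)
        self._pending[code] = order_id
        return code

    def redeem(self, init_code, public_key):
        # Single use: pop() both validates and consumes the code.
        order_id = self._pending.pop(init_code, None)
        if order_id is None:
            raise PermissionError("unknown or already-used initialization code")
        sig = hmac.new(self._signing_key, public_key, hashlib.sha256).hexdigest()
        return {"order": order_id, "public_key": public_key, "signature": sig}

class ToyToken:
    """The 'private key' is generated inside and never exported."""
    def __init__(self):
        self.__private_key = secrets.token_bytes(32)  # never leaves this object
        self.public_key = hashlib.sha256(self.__private_key).digest()

ca = ToyCA()
code = ca.approve_order("lenovo-ev-0001")   # DigiCert sends this code to the customer
token = ToyToken()                          # key pair generated inside the dongle
cert = ca.redeem(code, token.public_key)    # the installer app's job
```

The single-use `pop()` is also what makes it possible to tell, after the fact, which codes had already been redeemed — and why an attacker holding a not-yet-used code can redeem it against their own hardware.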
If the sensor had been installed on endpoint 2, the connection on endpoint 2 would likely have been detected and contained in the same time frame as the other targeted machine, that was endpoint 1. This created the window during which the threat actor was able to access the portal function — and we're about to talk about that in the next contributing factor — and harvest initialization codes, which actually is the third contributing factor. Okay, so the second contributing factor: insufficient privilege minimization in the support portal function. Again, taking full responsibility for how the bad guy was able to get up to as much as they were. So they wrote: DigiCert's internal support portal includes a function that allows authenticated support analysts to proxy into specific customer accounts to facilitate customer support. You know, like viewing it the way the customer does. "I don't really understand what's going on. Can you show me what I'm supposed to be seeing?" And so the support guy says, okay, let me get onto your account, and goes, ah, I see what you mean. So they wrote: in this mode, certain functions are masked from the analyst. However, access to initialization codes for pending code signing certificate orders was not among the masked data elements — leaving, they're saying, those codes accessible to an analyst operating in a proxied session. They said the portal function had not been formally classified within DigiCert's privileged access management, PAM, framework. The definition of privileged access was primarily scoped to direct access to the CA and did not encompass this indirect account management function that had a path to certificate issuance. As a result, the portal function was not subject to the PAM controls applicable to privileged users under the CA/Browser Forum network security requirements, including formal threat modeling against misuse scenarios, least-privilege design review, and access recertification. In other words, they missed this, and they recognize that this did not have to be. They said: the portal function is a long-standing feature. On April 14th and 15th, following the discovery of the incident, we deployed a code change to mask initialization codes from proxied users on both our US and EU platforms, using either the UI or the API. The absence of this initialization code masking was identified during the investigation triggered by the third-party report on April 14. And finally, interaction with other factors: this second contributing factor defines the scope of the damage enabled by contributing factor number one, which was the lack of endpoint coverage. Without the EDR gap, the dwell time would have been minimal and the number of initialization codes accessible would have been limited. This factor also interacts with contributing factor three, as the absence of masking is a direct consequence of the codes not being recognized as requiring credential-tier protection. Which brings us to contributing factor number three: initialization codes not protected as bearer credentials — meaning the initialization codes were not considered to have sufficient need for protection.
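The two fixes at issue here — masking initialization codes from proxied support sessions, and classifying them as credential material rather than intermediate workflow data — can be sketched together as a simple field-level redaction layer. The field names, mask string, and schema below are hypothetical, not DigiCert's actual portal:

```python
"""Sketch of field-level masking for proxied support sessions --
the kind of change DigiCert deployed on April 14-15. Illustrative only."""

# Fields classified as credential material get masked in any proxied view,
# regardless of which UI or API path renders them.
CREDENTIAL_TIER = {"initialization_code", "api_key"}

def render_order(order: dict, *, proxied: bool) -> dict:
    """Return a copy of the order; redact credential-tier fields if proxied."""
    if not proxied:
        return dict(order)
    return {k: ("****" if k in CREDENTIAL_TIER else v)
            for k, v in order.items()}

order = {"order_id": "ev-1234", "status": "pending",
         "initialization_code": "Xq9..."}
analyst_view = render_order(order, proxied=True)    # code masked
customer_view = render_order(order, proxied=False)  # code visible to its owner
```

The point of the classification step is that once a field is tagged credential-tier, the masking follows automatically everywhere the data is rendered, rather than being a per-screen decision.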
They wrote: the EV code signing certificate pickup workflow was designed with a threat model that assumed initialization codes would only be accessible to the validated subscriber, delivered via a secure channel and entered by the subscriber into their local Hardware Certificate Installer. The threat model did not account for the scenario in which initialization codes stored within DigiCert's internal support portal could be viewed by a compromised DigiCert analyst account operating through the portal function. Again, whoopsie. Therefore, initialization codes were classified as intermediate workflow data rather than bearer credentials, which would have elevated their need for protection to a higher level. This initialization code workflow, they wrote, was designed at the time the EV code signing token issuance process was developed. The issue was identified during the investigation triggered by the third-party report on April 14 and was not identified through any previous design review prior to this incident. So this just kept getting missed, because again, you can look at things but sometimes you just don't see them. And their point is that this mistaken classification meant the codes were never properly categorized, so they did not automatically fall within the proper security context and constraints. So everything we're seeing here is the work — and it should sound like it — of true forensic security professionals patiently working to understand, step by step, not only what happened, but why a supposedly carefully designed security system, one even designed from explicit theoretical premises about what they wanted, still allowed this to happen. The result is guidance about what can be changed to prevent successful exploitation at each stage. And this brings us to the fourth and final contributing factor, which explains how the bad guys managed to infect DigiCert's two analysts in the first place.
They wrote: overly permissive file transfer capability in customer-facing support channels. They said DigiCert's customer support chat channel and Salesforce case attachment workflow permitted delivery of inbound file attachments from external parties, including the general public, to CA support staff, with insufficient restrictions on file type, automated sandboxing and content inspection. This created a direct delivery path for malicious executable content to personnel having privileged access. The support chat channel had not been adequately evaluated as a potential attack surface for malware delivery against certificate authority staff. The support chat channel and Salesforce case attachment integration were operational prior to this incident. The delivery vector was confirmed on April 4th and 5th, at which point additional malicious zips were identified across other Salesforce cases and removed. Which is significant, right? There were other infections that hadn't had a chance to, like, take hold. Corrective controls — file type restrictions, sandboxing evaluation — are in progress, as described in their action items. The number of channels by which customers can reach support staff has grown. The file controls on the chat channel were believed to be sufficient prior to this incident. This factor is the initial attack vector that enabled all subsequent factors to come into play. And this brings us to the lessons-learned conclusion, you know, with what went well, what did not go well, and where they got lucky. So they explained, under what did go well: rapid initial containment on endpoint machines where EDR was working as intended. For these machines, DigiCert's trust operations team completed containment, process termination, registry cleanup and artifact deletion within hours of detection on April 3. Also under what went well: proactive identification of the full delivery chain.
They were able, forensically, to catch that. The investigation identified the Salesforce case attachment auto-conversion mechanism as the delivery path and located additional malicious zip files across other cases before they could be opened, preventing further compromises from the same campaign. So when they talked about this earlier, they did not throw Salesforce under the bus, but they talked about how additional points of entry have been added. Apparently it was the incorporation of the Salesforce services that allowed this stuff to get in that way, and it sort of snuck by, they said. Also under what went well: same-day remediation of contributing vulnerabilities. During incident response, critical fixes were implemented without deferral. CrowdStrike prevention settings were corrected on April 4th. Okta FastPass was disabled and multi-factor authentication tightened on April 14th. Initialization code masking was deployed across US and EU environments on the 14th and 15th. And: confident attribution of the second compromise through forensic analysis. They did figure out exactly what happened, after the fact, with that endpoint 2 machine. They said linking the compromised machines' activity logs to the same threat actor required analysis across identity events, endpoint telemetry and support workflow artifacts. So if that all went well, what did not? File type controls turned out to be insufficient on customer-facing support channels. Inconsistent and incomplete EDR coverage enabled a blind spot that was unknown prior to the incident, and that directly enabled the attacker's dwell time, you know, giving them 10 days. Initialization codes were not protected as bearer credentials, as they should have been. The portal function exposed initialization codes that are functionally equivalent to the certificates they enable, because they were classified as intermediate workflow data rather than credential material requiring masking and credential-tier protections.
Also, what did not go well: device-bound authentication, Okta FastPass, was used as an MFA — multi-factor authentication — bypass. FastPass allowed a threat actor operating on a compromised device to inherit the device's authenticated session and satisfy multi-factor authentication requirements without a genuine second factor, enabling access to the portal initialization codes after the initial endpoint compromise. And finally: following endpoint containment — that endpoint 1 containment on April 3rd — the investigation concluded, you know, obviously incorrectly, that compromise attempts had been neutralized, without validating all endpoints exposed through the same delivery vector. So somebody a little too quickly said, okay, we're done here. In retrospect, of course, I imagine that they now wish that after that first attack, which was thwarted by CrowdStrike's endpoint defense after a brief window, they had checked for other instances of that malicious zip file across the rest of their network. We now know that they did that later and did find a handful of other instances of it. So it's sort of unclear why that did not happen at the time, and I imagine somebody's asking somebody. So they finally share two points in conclusion where they felt they got lucky. They said: a community member involved in security research reported the evolving pattern of misused certificates and engaged in dialogue with our support team. Without that report, the undetected compromise of endpoint 2 and the associated mis-issuance might have remained undiscovered for a longer period. Nice of them to admit that. They also got lucky where, they said, our investigation indicates that the threat actor's activity was focused on gaining access to code signing certificates; a differently motivated threat actor might have attempted to use the compromised account for broader actions. Several of our action items are designed to address this risk.
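One of the corrective controls listed among DigiCert's action items — restricting file types and inspecting content on customer-facing upload channels — might look roughly like the following. The allow and block lists are purely illustrative; DigiCert hasn't published its actual policy:

```python
"""Sketch of inbound-attachment screening for a support channel:
extension allowlist plus magic-byte inspection, so a renamed
executable or archive doesn't slip through. Lists are illustrative."""

ALLOWED_EXTENSIONS = {".png", ".jpg", ".txt", ".log", ".pdf"}

# Leading bytes of common executable/archive formats to reject outright.
BLOCKED_MAGIC = {
    b"MZ": "Windows executable",
    b"PK\x03\x04": "ZIP archive",
    b"\x7fELF": "ELF executable",
}

def screen_attachment(filename: str, data: bytes):
    """Return (accepted, reason) for an inbound support attachment."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext not in ALLOWED_EXTENSIONS:
        return False, f"extension {ext or '(none)'} not allowed"
    for magic, kind in BLOCKED_MAGIC.items():
        if data.startswith(magic):
            return False, f"content looks like a {kind}, not a {ext}"
    return True, "ok"

# A ZIP renamed to .png is caught by its content, not its filename:
ok, why = screen_attachment("screenshot.png", b"PK\x03\x04...payload...")
```

In practice this would sit in front of sandbox detonation rather than replace it; the point is that the filename alone proves nothing about what a support analyst is about to open.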
And then they conclude with a final three points. First, this incident demonstrates that internal support tooling with indirect paths to certificate issuance must be subject to the same security scrutiny as the certificate-issuing infrastructure itself. Tools that were designed for legitimate operational purposes can become high-value attack targets. Second, the incident also illustrates the importance of defining privileged access broadly enough to encompass any system or function with a path to certificate issuance, not just those with direct access to HSMs — hardware security modules — or signing infrastructure. And finally, the dwell time underscores the importance of comprehensive post-incident investigation scope — meaning don't quit your investigation prematurely — and continuous EDR coverage monitoring — like, make sure all your endpoints are actually being monitored. A single missed endpoint can negate the value of rapid containment elsewhere. So, you know, they go on to actually enumerate 21 individual action items, many of which they already articulated or implied, so I'm not going to take us, thankfully, through each of those. Suffice to say that there's little doubt that DigiCert was extremely unhappy about and embarrassed by this event. Right? This is everything any certificate authority is about, and a huge amount of security-focused design and third-party contractor support from the likes of CrowdStrike and several others — all of this was intended to prevent anything like this from ever happening. But it did anyway. But at no point did we see what we've previously observed from so many other certificate authorities who have been struck by similar abuse, or even from Microsoft, who, you know, has nothing to lose — no one's going to leave Windows. Almost invariably, those other in-denial certificate authorities first hoped, you know, that nobody would notice.
Then, when someone did, they would downplay the severity of the incident, hoping that that would be it. Then, when additional evidence of further exploit came to light and surfaced, oh, they'd apologize with some lame excuse about having intended to mention that too. Uh huh. Yeah, right. What we have from DigiCert, the industry's largest commercial certificate authority — second only to Let's Encrypt, which, you know, is free — is full public disclosure and responsibility-taking. The result has been a deep analysis followed by true action to prevent, at multiple levels and stages — not just closing the front door, but all the hallways in between, you know — anything like this from transpiring again. And as usual, I think that's more to recommend them than not. If their code signing certificates were not so unreasonably expensive, I would not have invested so much time earlier this year preparing to jump ship and find another supplier. I'm glad I'll be moving to ident, but I'm only doing so because I object on principle to, you know, unconscionable costs for certificates, not the quality of their work. But as for Microsoft, given what we now know, it is unbelievably difficult to understand, to explain away, to excuse how Microsoft could have possibly fumbled their end of this so thoroughly. There would presumably have been some dialogue between Microsoft and DigiCert where DigiCert provided Microsoft with the thumbprints or serial numbers of the 60 certificates they had revoked and blacklisted, so that Microsoft could do the same with Windows Defender. You know, as we know, revocation is an imperfect answer with certificate management, but it's all we've got. So Microsoft would have definitely wanted to add those 60 certificates to Windows Defender's existing code signing deny list, so that nothing signed by them would have been allowed to run on Windows.
But that just entails checking thumbprints against Defender's deny list. How Microsoft could possibly have fumbled this into the removal of DigiCert's root certificates — for all certificates ever issued in the world, ever — is just impossible to understand. It feels very much as though someone in control of that important process doesn't know what they're doing, which is horrifying. I hope AI didn't do it. Anyway, fortunately, that too was fixable, and it was quickly fixed. So we go on, Leo, to the next adventure.
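The thumbprint check Steve describes — the response Microsoft needed, as opposed to pulling roots — is nothing more than set membership on certificate hashes. A minimal sketch, with made-up byte strings standing in for real DER-encoded certificates:

```python
"""Sketch of a deny-list check on code-signing certificates.
Windows displays the SHA-1 of a certificate's DER encoding as its
'thumbprint'. The certificate bytes and deny list here are made up."""
import hashlib

def thumbprint(der_bytes: bytes) -> str:
    """SHA-1 over the DER encoding, upper-case hex, as Windows shows it."""
    return hashlib.sha1(der_bytes).hexdigest().upper()

# The ~60 revoked certificates would be keyed by thumbprint like this.
DENY_LIST = {
    thumbprint(b"revoked-cert-der-1"),
    thumbprint(b"revoked-cert-der-2"),
}

def allowed_to_run(signing_cert_der: bytes) -> bool:
    # Blocking by thumbprint matches exactly the revoked certificates;
    # every other certificate chaining to the same root keeps working,
    # unlike removing the root itself.
    return thumbprint(signing_cert_der) not in DENY_LIST
```

The contrast is the whole point: the deny list is surgical, while distrusting a root invalidates every certificate ever issued beneath it.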
Leo Laporte
I love it that Microsoft — kind of the way they phrased this — sounds like they're blaming Defender. They say, in response to reports of compromised certificates, Microsoft Defender was a bit overzealous. Defender was overzealous. Well, it certainly acted poorly, but I think somebody told it to do so. So this is interesting. So they were initially compromised through customer support chat — files uploaded through chat.
Steve Gibson
Maybe chat. Or it sounds like maybe they've added some Salesforce software as a service thing.
Leo Laporte
Right.
Steve Gibson
Because there was a mention of Salesforce, and that may be the chat. Exactly. It might be Salesforce chat. And so it did get in that way. Somebody said to the support guy, hey, I don't understand what I'm seeing here. Let me send you a screenshot.
Leo Laporte
Yeah, or here's a zip file of something. And then the CSR, who had escalated privileges — which is a problem — unzipped it, and it attacked. They responded very quickly. But this shows the issue with certificates, unlike, say, a ransomware attack, although that can happen pretty quickly too: if I have 10 minutes with DigiCert and I can get their root certificates, I'm golden. I don't need more time than that.
Steve Gibson
Yeah.
Leo Laporte
Wow.
Steve Gibson
And I mean, one of the things that I did think was that we clearly have a system — we have so many security firms that are looking for malware. When they talked about, you know, an industry partner or a third party, it was some security firm, who they didn't identify, that called up and said, hey, we're seeing some malware that was signed an hour ago by Lenovo, and it's your cert. So what do you think about that? So, I mean, we know there are a lot of closed loops here. It's good that they're closed and that they're looping.
Leo Laporte
There's really a huge kind of web of threat analysis.
Steve Gibson
It's a big ecosystem.
Leo Laporte
It's a really big ecosystem. And they know that they have to work hand in hand; they have to work together. So in some ways, you know, the response to all of these attacks has been a much improved, I think, early warning system, which, you know —
Steve Gibson
here I am trying to publish my little software and being blackballed by Windows Defender. You'd think that with this system working as well as it is, they could wait until actually seeing malware and then, you know, call DigiCert to report me. But no, Defender says, I don't know about this.
Leo Laporte
They removed the entire brain. Instead of just the, instead of just the tumor, they just said they'll just take the whole brain out. You don't need that, do you?
Steve Gibson
Yeah, we decided we're not going to trust anything DigiCert has ever signed. I mean, it's unbelievable.
Leo Laporte
Yeah, that's a very bad response. It's almost, I, you know, it feels like a panic response. Like they were so freaked out that they overreacted. And it wasn't Defender doing this. It was some human or maybe some AI. I don't know.
Steve Gibson
I don't know. It sounds like somebody really took it on themselves to go in and remove a root. You had to know what you were removing. No — at no point did the response to DigiCert require removing anything. It meant checking thumbprints: does this thumbprint match? That's all. So how it got extended — I mean, again, I do actually wonder. I had the thought before I was recording this with you, Leo: could this have been AI that hallucinated a root cert removal? Maybe that's what we're now going to be dealing with.
Leo Laporte
If so, it won't be the last case, because —
Steve Gibson
Get your seat belt.
Leo Laporte
Yeah.
Steve Gibson
Buckle up, baby.
Leo Laporte
Yeah. Well, what a good story, though. And I'm glad DigiCert did the right thing.
Steve Gibson
Oh, again, I just — you know, a hat tip to those guys. The fact that they ruthlessly not only, you know, investigated, but then self-reported. I mean, no one could ever take issue with the way they're behaving. I'm sure that was their goal too, right? They don't want anyone to have any doubts about them. And who would, after this? I would trust them more now.
Leo Laporte
Right. Good stuff. Thank you, Steve. Steve Gibson does this every Tuesday, as you probably know. I'm sure you make this a regular stop on your podcast list. You can catch the show live if you want to get it right away. We stream it live as we're doing it, Tuesdays right after MacBreak Weekly. That's 1:30 Pacific, 4:30 Eastern, 20:30 UTC. The live streams are of course in the Club TWiT Discord, but also YouTube, Twitch, X, Facebook, LinkedIn and Kick. So watch wherever you want. If you aren't around on a Tuesday morning — or afternoon or evening, depending on where you are —
Steve Gibson
We always.
Leo Laporte
It's a podcast, so you can always get copies of the show. Steve's got actually completely unique versions of it, depending on what version you want. It's the same show, but he's got the 16-kilobit audio version, admittedly not the highest fidelity but very small, and 64-kilobit, which is full fidelity. He's got the show notes, which are amazing, and I think a lot of people like to read along as they're listening. It's also a great reference to have; he's very complete in the show notes — 22 pages today. You'll get those either via email or directly from his website, GRC.com. He also has transcripts; they take a couple of days because they're written by a human, so they go up a couple of days after the show. That's a good way to search or, again, read along as you listen. All of that at GRC.com. If you do want to get the emails, go to GRC.com/mail. The initial point of this page was to whitelist your email so you can send Steve questions, comments, suggestions, picks for the picture of the week, that kind of thing. So there's an email form and he'll validate your email. But below it, since you're providing the email, there are also two checkboxes: one for the weekly newsletter that contains all of the show notes, the other for a very infrequent email when he's got a new product — like, of course, the world's best mass storage maintenance and recovery utility, SpinRite, currently 6.1, and his DNS Benchmark Pro, which just came out a few months ago. That is $9.99 and very much worth it. You know, frequently we say, oh, the Internet's slow today. And it's not really the Internet that's slow, it's your DNS server that's slow.
Steve Gibson
Something's going on with Quad9, by the way. If anybody's using Quad9, I've noticed its cached response is very fast, but when the Quad9 server doesn't have something in its cache, it needs to go out, and it is really — it's like taking half a second for it to get a response back.
Leo Laporte
So, and that's something DNS Benchmark Pro would let you know.
Steve Gibson
That's how I found out about it.
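Per-resolver timing of the sort a DNS benchmark performs can be sketched with a hand-built query over UDP, using only the standard library. This is a rough illustration, not GRC's tool: 9.9.9.9 is Quad9's public address, and the queried name is arbitrary.

```python
"""Rough sketch of timing a single DNS resolver: build a minimal
DNS query by hand (RFC 1035 wire format) and time the UDP round trip."""
import secrets
import socket
import struct
import time

def build_query(name: str) -> bytes:
    """Minimal DNS query: 12-byte header, then the question section."""
    txid = secrets.token_bytes(2)  # random transaction ID
    # Flags 0x0100 = recursion desired; one question, no other records.
    header = txid + struct.pack(">HHHHH", 0x0100, 1, 0, 0, 0)
    # QNAME is length-prefixed labels terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def time_lookup(server: str, name: str, timeout: float = 2.0) -> float:
    """Milliseconds for one query/response round trip to `server`."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        start = time.perf_counter()
        s.sendto(build_query(name), (server, 53))
        s.recv(512)
        return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    # A rarely queried name shows the resolver's upstream latency;
    # repeat the same name to see the cached response time Steve
    # contrasts it with.
    print(f"{time_lookup('9.9.9.9', 'grc.com'):.1f} ms")
```

Running the same name twice in a row is the quickest way to see the cached-versus-uncached gap Steve mentions: the second lookup answers from the resolver's cache.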
Leo Laporte
Yeah. It would also give you good alternatives, right — ones that are faster. So: GRC.com, the Gibson Research Corporation. You can go to our website, twit.tv/sn, for the 128-kilobit audio, which is, to be honest, not any better than the 64-kilobit audio, but we need to do it because Apple downsamples, and they need a high-quality one to downsample. It's a long story. We also have video, which Steve does not have, and that is at twit.tv/sn. There's also a YouTube channel for the video — a great way to share clips. I think a lot of times people want little clips of pieces of this to send to the boss or your IT team or friends, whatever. That's the easiest way to do it; everybody can watch YouTube. But the easiest way to make sure you don't miss an episode is to subscribe in your favorite podcast client. That way you'll get it automatically as soon as we've polished it all up — Benito, we're working on that this afternoon. Steve, thank you so much. Great show. And we'll see you next week on
Steve Gibson
security now, episode 1079 coming up.
Leo Laporte
Hi there, Leo Laporte here. I just wanted to let you know about some of the other shows we do on this network you probably already know about: This Week in Tech. Every Sunday I bring together some of the top journalists in the tech field to talk about the tech stories. It's a wonderful chance for you to keep up on what's going on with tech, plus be entertained by some very bright and funny minds. I hope you'll tune in every Sunday for This Week in Tech. Just go to your favorite podcast client and subscribe: This Week in Tech from the TWiT network. Thank you, Security Now.
Steve Gibson
Ryan Reynolds here from Mint Mobile with a message for everyone paying big wireless way too much. Please, for the love of everything good in this world, stop. With Mint
Leo Laporte
You can get premium wireless for just $15 a month.
Steve Gibson
Of course, if you enjoy overpaying. No judgments.
Leo Laporte
But that's weird.
Steve Gibson
Okay, one judgment anyway.
Leo Laporte
Give it a try.
Steve Gibson
at mintmobile.com/switch. Upfront payment of $45 for 3-month plan, equivalent to $15 per month, required. Intro rate first 3 months only, then full-price plan options available. Taxes and fees extra. See full terms at mintmobile.com. If you work in
Leo Laporte
university maintenance, Grainger considers you an MVP, because your playbook ensures your arena is always ready for tip-off. And Grainger is your trusted partner, offering the products you need all in one place, from HVAC and plumbing supplies to lighting and more, all delivered with plenty of time left on the clock so your team always gets the win. Call 1-800-GRAINGER, visit grainger.com, or just stop by. Grainger — for the ones who get it done.
In this episode, Steve Gibson and Leo Laporte cover significant security topics from the past week, with a deep dive into how DigiCert, the world’s leading certificate authority, exemplified best-practice breach disclosure and remediation after a certificate compromise. The hosts also discuss the FCC's revised firmware policy for routers, an AI-discovered 21-year-old vulnerability in FreeBSD, alarming trends in AI model repository security, poor password management in Microsoft Edge, and the broader implications of AI on secure software development.
| Time | Segment |
|--------|--------------------------------------------------------------------------|
| 00:00 | Intro & Episode Preview |
| 12:12 | FCC Router Firmware Policy Reversal |
| 22:00 | AI Uncovers 21-Year-Old FreeBSD Vulnerability |
| 32:33 | Let’s Encrypt Outage & Response |
| 40:53 | AI Model Repository Malware Threats |
| 61:09 | Microsoft Edge Passwords Exposed in Memory |
| 68:59 | Listener Feedback: AI Pattern-Matching in Vulnerability Research |
| 88:25 | Start of DigiCert Incident Segment |
| 124:08 | DigiCert’s Root Cause Analysis & Response Details |
| 141:24 | Lessons Learned & Final Analysis |
| 147:26 | Microsoft’s Flawed Defender Response |
The episode is marked by Steve's technical clarity, methodical breakdowns, and a touch of wit, with Leo providing relatable analogies and probing questions. The language is direct, precise, and often critical—particularly when discussing illogical policy, poor security hygiene, or corporate missteps—but always grounded in practical reality and informed optimism.
Security Now 1078 is a masterclass in by-the-book incident response, cautionary tales about the consequences of cutting corners in security process, and the evolving reality of AI on both the attack and defense side of the security equation. DigiCert’s openness becomes the gold standard, while Microsoft’s hasty, opaque reaction is roundly criticized. The episode highlights the growing threat of poisoned AI supply chains and the non-negotiable need for responsible disclosure, careful privilege management, and layered, defense-in-depth design.