Leo Laporte
Cybersecurity now. Steve Gibson is here, and he is armed with the knowledge that Google is now downloading 4.7 gigabytes when you download Chrome. What is it? A local AI model. Steve talks about its implications, next on Security Now. This episode is brought to you by OutSystems, a leading AI development platform for the enterprise. Organizations all over the world are creating custom apps and AI agents on the OutSystems platform, and with good reason. Build, run and govern apps and agents on one unified platform. Innovate at the speed of AI without compromising quality or control. Trusted by thousands of enterprises worldwide for mission-critical apps, teams of any size and technical depth can use OutSystems to build, deploy and manage AI apps and agents quickly and effectively without compromising reliability and security. With OutSystems, you can accelerate ideas from concept to completion. It's the leading AI development platform that's unified, agile and enterprise-proven, allowing you to build your agentic future with AI solutions deeply integrated into your architecture. OutSystems: build your agentic future. Learn more at outsystems.com/twit. That's outsystems.com/twit.

Podcasts you love, from people you trust.
Leo Laporte
This is TWiT. This is Security Now with Steve Gibson, episode 1077, recorded Tuesday, May 5, 2026: A Browser AI API. It's time for Security Now. Yes, it's Tuesday. Yes, I am not in my studio. I am in beautiful Hawaii, on the Big Island. But that doesn't mean I'm not going to get Steve Gibson on the horn and talk about security, because I know you need your fix. Hello, Mr. G.

Steve Gibson
You know, Leo, you look a little more tan. We saw you.
Leo Laporte
I am. See, my hand is light, but my face is a little dark.
Steve Gibson
Do Mai Tais increase skin pigmentation?
Leo Laporte
Maybe that's what it is. Maybe that's what it is.
Steve Gibson
Something.
Leo Laporte
Yeah, we were on vacation, but I still want to do the shows, and so I've set up. If you could only see this kooky setup. I am outside on the lanai.
Steve Gibson
We hear the exotic birds tweeting in the background.
Leo Laporte
There are some exotic birds there.
Steve Gibson
Birds.
Leo Laporte
There's house sparrows, and there's a bird that looks like a little chicken, a francolin, I think. I can't remember the name of it. Chicken bird. And it's very noisy, so you'll know if it decides to sound off. But the sparrows are very aggressive. They might think I have something to give them, so they might be coming up here and sitting on my shoulder.
Steve Gibson
Well, it all adds to the ambiance. That's it. We're happy to have you and Lisa on the Big Island.
Leo Laporte
It's so beautiful. Have you been to Hawaii, Steve?
Steve Gibson
Oh yeah, I had the second half of my honeymoon in Hawaii.
Leo Laporte
I love Hawaii.
Steve Gibson
I do.
Leo Laporte
So what's coming up on Security Now this week?
Steve Gibson
So, episode 1077. It's funny, every so often I think about 1077, which sort of puts the infamous 999 into context.
Leo Laporte
Yeah.
Steve Gibson
Now it's been a while. Actually, it's been more than a year; 78 weeks. So there were two main topics contending to win the coveted title of the podcast this week. Google's arguably premature move to build AI into Chrome ended up winning, because Mozilla has said, not so fast. But we got a lot of good things to talk about. Turns out that some hackers used AI to code up a portal for stolen credit card verification. They forgot to ask it to add authentication, so, whoops. Also, the UK's security group, the NCSC, has issued their own Mythos warning, which caused me to wonder: where is CISA? Why haven't we heard anything from CISA? We're going to touch on that. We've got another of many recent Linux local privilege escalations. This one is bad, and it's affected Linux for years. And yes, AI found it. Also some interesting commentary about the ground shifting under AI and vulnerability research, how it's looking like it may spell the end of bug bounties, and why that is probably the right thing to have happen. Also, Anthropic has released what they call Claude Security, a sort of mini Mythos. ChatGPT has made some changes which demonstrate it's getting very serious about login security. I want to make a comment about something I discovered since we last talked about the end of life of SyncTrayzor version 1, which is what I use to sort of bundle Syncthing into a nice little applet for Windows; there's a replacement for it. And then we're going to talk about how Google has sort of surprised everyone by just saying, we think it's time that we add AI support to JavaScript. So lots of fun things to talk about, and of course a great Picture of the Week. Oh, and there are a couple things that happened just now. Just so that our listeners know that I'm aware of it: DigiCert suffered a major breach which allowed 30 EV code signing certificates to get minted behind their back.
Leo Laporte
Oh, that's not good.
Steve Gibson
And used. However, their disclosure is being called a reference, state of the art: this is the way you do it if you're going to say what happened, if you're going to share your postmortem with the industry. They just updated it 10 minutes ago, so it's still a little bit in flux. We'll take a look at what they had to say: things that went right, things that went wrong, and what they learned. It ended up being a social hack. A malicious screensaver, of all things, got onto two of their tech support team members' PCs, and it wasn't detected due to a CrowdStrike endpoint security misconfiguration. So anyway, I'm all up to speed on it, but I just didn't have time. Well, actually, we're still learning a lot and it's still in flux, so we'll have good coverage of that next week. I just wanted to let everybody know that I was aware of it. So we'll take our first break, we'll look at the Picture of the Week, and then we'll get into all this.
Leo Laporte
It really just shows you how anybody is vulnerable to this. And as you said when we were at the ThreatLocker Zero Trust World event, the threat's coming from inside the house: a network engineer who put a screensaver on his system, and suddenly you're compromised. That's terrible. All right, well, I'm not going to do the commercials from here in Hawaii. I'm on vacation. So I'm going to let Leo in Petaluma take this one, and then we'll be back with the Picture of the Week.
Steve Gibson
White-skinned Leo. Yes.
Leo Laporte
This episode of Security Now is brought to you by Zscaler, the world's largest cloud security platform. You know, the potential rewards of AI are too great to ignore, but so are the risks: loss of sensitive data and attacks against enterprise-managed AI. Generative AI increases opportunities for threat actors, helping them to rapidly create phishing lures, write malicious code, and automate data extraction. There were 1.3 million instances of Social Security numbers leaked to AI applications. It's time for a modern approach with Zscaler's Zero Trust plus AI. It removes your attack surface, secures your data everywhere, safeguards your use of public and private AI, and protects against ransomware and AI-powered phishing attacks. Don't believe me? Check out what Siva, the director of security and infrastructure at Zuora, says about using Zscaler. Watch: AI provides tremendous opportunities, but it also brings tremendous security concerns when it comes to data privacy and data security. The benefit of Zscaler, with ZIA rolled out for us right now, is giving us the insights of how our employees are using various gen AI tools. So, the ability to monitor the activity, make sure that what we consider confidential and sensitive information according to the company's data classification does not get fed into the public LLM models, et cetera. Thank you, Siva. With Zero Trust plus AI, you can thrive in the AI era. You can stay ahead of the competition; you can remain resilient even as threats and risks evolve. Learn more at zscaler.com/security. That's zscaler.com/security. Now back to Steve and Security Now.

Thank you, Leo. Just to explain behind the scenes: we weren't sure this was going to work at all, and so we thought, well, I'd better pre-record the commercials in case, I don't know, Micah had to jump in or something. And I brought a Starlink Mini and all sorts of backup stuff, and it turned out, wow, who knew? In Hawaii they've got cable modems, high-speed Internet. I didn't need to bring anything.
Steve Gibson
So it isn't a complete proof of concept of your ability to roam anywhere on the globe and broadcast.
Leo Laporte
Not yet. Not yet. I probably should, though, you know, set it up. We have this nice lawn behind us; there's plenty of space, it's perfect for the Starlink. So I probably should just set it up before I go home and make sure that I could do that.

Steve Gibson
But you know, it could double as a bird bath, couldn't it?

Leo Laporte
Yes, it could. Or a serving tray. All right. And now this is going to be another interesting experiment. I have the Picture of the Week. Shall we share it?
Steve Gibson
So this was great. Shared from a listener, of course, as are they all. I gave this one the caption: attempting to preempt the inevitable question, why has the lobster become so expensive? Okay, so we see a sign...
Leo Laporte
All right, I had seen the lobster part, but I hadn't seen the rest of it. That's hysterical.
Steve Gibson
The sign is taped to the window of a buffet or a restaurant or something, explaining, again, preempting the inevitable question. So the sign says: "All lobster prices have increased due to higher lobster prices."
Leo Laporte
And I can see in the background there's, okay, a little old lady, an elderly person. And, you know, she went up to the guy at the restaurant and said, why are the lobster prices so high? And he just pointed to the sign: lady...
Steve Gibson
Read the sign. That's right: all lobster prices have increased due to high lobster prices.
Leo Laporte
Very nice, very nice.
Steve Gibson
What are you going to do? Okay, so we begin this week with a story that intersects several security fronts. Last Wednesday's Cyber News headline was "Scammers vibe code server to verify stolen credit cards, leak details of 345,000 cards." I had to read that one twice to make sure I understood what they were saying. So here's what they discovered. They wrote: threat actors, like so many programmers around the world, are no strangers to AI assisting in their operations. However, like so many vibe coders, scammers also run into security issues. On April 16, the Cyber News research team discovered an exposed server owned by a threat actor. The exposed information is controlled by a carding market called Jerry's Store. As in Tom and Jerry; there were some little cartoons of a mouse jumping around on things posted on the dark web. They said the tool provides credit card validity percentages for each seller. In other words, threat actors use this tool to check if a stolen payment card is still operational. According to our team, Jerry's Store operators extensively used Cursor, an AI-assisted development environment; in fact, it's one of the very earliest AI-based coding assistants, from several years back. They used Cursor, they said, to set up the leaking server, not knowing that it was leaking, and to create administrator-facing dashboards. Cursor, they wrote, is a legitimate service developed by the US software company Anysphere. Researchers believe that relying on an AI assistant to set up the server was the reason it was exposed. Based on the chat logs our team was able to access, the threat actor received flawed instructions from their AI, imagine that, for building the dashboards. The team explained, quote: we were able to confirm the leak originated from the user asking to create a statistics dashboard, and Cursor created an unauthenticated open web directory to serve the web page, ignoring the need, because, of course, you didn't ask for it...
Ignoring the need, that is, to set up authentication, or to ensure that only the intended dashboard would be accessible. In other words, it's just like a regular user: if you don't ask for authentication, you're not going to get authentication. Anyway, they finished, saying: moreover, the chat history reveals there was sufficient information for the Cursor large language model to identify that it was helping set up a credit card verification service, indicating a lack of sufficient guardrails to prevent abuse. And as you've often heard me say, I don't think you can really control a large language model. Researchers said, quote: it's a lesson for developers using Cursor for legitimate uses, showing how it can lead to accidental data leaks. Right. It's just going to write what you ask it to; it's not going to be your security nanny. Cyber News said that they'd reached out to Cursor for comment and would update their article with any additional information they received. I believe the fact that the Cursor AI produced a statistics dashboard served from an unsecured, open web directory allowing unauthenticated remote access is a great example of the danger of using AI without being a domain expert. That is, without knowing what to ask for, because it'll give you what you ask for, but you need to know what that is. I have no doubt that the Cursor AI would have easily provided the authentication that was needed if it had been asked to. But apparently the bad guys never thought to ask. So somebody who wasn't really up to speed on web-based application security could easily fail to anticipate all the various ways others might access and penetrate their system. Expecting AI to produce secure solutions by default is probably a fool's errand.
Either it never occurred to them that authentication should be required where it was absent, or they didn't know it was going to be absent, or they assumed that the AI would know what it should do and would do it unbidden. The Cyber News article also provided some background reporting on the underground industry in stolen credit cards, which I thought was interesting. They wrote: operations such as Jerry's Store are integral to the cybercrime infrastructure. Once scammers obtain stolen credit card information, they need to verify which cards can still be exploited. Jerry's Store provides that service. Our team noticed that, to complete the task, Jerry's Store operators use legitimate, well-known merchants. The Cyber News team explained, quote: threat actors used multiple legitimate merchant websites, such as Amazon US, Amazon Japan, Grubhub, Sam's Club, Temu, Lyft, Elf Cosmetics and Country Max, utilizing hundreds or in some cases thousands of accounts that have already been established on these platforms to perform credit card validity checks. Attackers created those accounts to register stolen cards and then perform low-risk actions, as they called them. These could include adding cards as a payment method or making a very small purchase. If the platform accepts the card, threat actors mark the card as valid and sell it to other threat actors on the dark web. Using large merchants like Amazon or Grubhub, of course, is a way to mask their activity, since large merchants process billions of payments. Tiny transactions on a well-known website don't ring any alarm bells, they wrote. According to our team, the exposed server contained a treasure trove of credit card details. Details meaning, you know, everything you need to process someone's card. Researchers identified nearly 200,000 credit card details that the service had verified as invalid, and over 145,000 that it had verified as valid. The exposed information includes all the details that you need: credit card numbers, expiration dates, the security code, the cardholder's name and their address. Typically, they wrote, valid credit card details are sold for between $7 and $18 each on the dark web, meaning that the value of the valid stolen card data, that's the 145,000 cards that have been verified, is somewhere between $1 million and $2.6 million. However, their team added that the actual value of the exposed infrastructure may be a lot higher, since Jerry's Store sells much more than just credit card data; that's just one of the types of fraud that they're making available in the store. They said: while it's unclear where Jerry's Store is located, internal tooling and leaked large language model chat logs suggest that the marketplace's administrator is fluent in Chinese. The server itself appears to be hosted in Germany by a suspected bulletproof hosting provider. The marketplace, which launched in late 2023, is a well-known credit card vetting tool within the cybercrime underground, aimed primarily at cards stolen from victims in the US and the EU.
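The specific omission in the story above, an authentication check in front of the dashboard, is a small amount of code. Here's a minimal sketch, using only Python's standard library, of the check that was never asked for; the credentials and realm name are invented for illustration, and a real deployment would use hashed secrets and TLS rather than plain Basic Auth:

```python
import base64
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical credentials for this sketch only.
EXPECTED = "Basic " + base64.b64encode(b"admin:hunter2").decode()

class DashboardHandler(BaseHTTPRequestHandler):
    """Serves a trivial 'statistics dashboard', but only to callers
    presenting valid HTTP Basic Auth credentials."""

    def do_GET(self):
        # The step the vibe-coded server lacked: refuse anonymous access.
        if self.headers.get("Authorization") != EXPECTED:
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="stats"')
            self.end_headers()
            return
        body = b"stats dashboard"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def fetch_status(url, auth=None):
    """Return the HTTP status code for a GET, with an optional auth header."""
    req = urllib.request.Request(url)
    if auth:
        req.add_header("Authorization", auth)
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

server = HTTPServer(("127.0.0.1", 0), DashboardHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

anonymous = fetch_status(url)            # no credentials: rejected, 401
authorized = fetch_status(url, EXPECTED) # valid credentials: 200
server.shutdown()
```

The point isn't that Basic Auth is the right answer; it's that the check only exists if someone asks for it and then verifies it's actually enforced.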
Leo Laporte
Fluent in Chinese, but not in AI, apparently. You know, this comes up a lot. We're going to see more of these. Whoops. Whoa. So hold on, camera, I'm over here. Thank you.
Steve Gibson
Nice ceiling fan.
Leo Laporte
People blame AI for stuff that they do that's dumb. There was a big story last week and everybody blamed AI because the guy's production database got clobbered. But of course, if you give AI the keys to your production database, it's on you, my friend. And if you're dumb enough to say, hey, just make me a website, and you don't ask it to put any authentication layers in there, AI is going to do what you say. I think this comes from sort of a magical belief about AI, that it's somehow intelligent, or, you know, that it's going to take care of you. And it's not.
Steve Gibson
And probably it's a hope as much as a belief.
Leo Laporte
Yeah, it's a hope.
Steve Gibson
Like I hope that AI knows how to do this and since it seems to know a lot, I'm just going to assume that it does.
Leo Laporte
Well, it does, if you tell it to. It will. I mean, you're absolutely right. AI is great at OAuth. If you say, write the login page, use OAuth, make it secure, it will absolutely do that. But you have to tell it to; it's not going to assume that. It might, but again, it might not. The other thing I wanted to mention is, I have had credit cards stolen due to my own stupidity. And first of all, credit card companies know to look for those low-risk, low-value charges, you know.
Steve Gibson
Right.
Leo Laporte
In fact, they used to say if somebody buys sneakers and then tries to fill up a tank of gas, they invalidate that credit card immediately, because that's the first thing somebody who steals a credit card is going to do. But times change; that's not true anymore. When my credit card was stolen, I mentioned this before, they added it to an Apple Wallet. They had a previously set-up Apple account and added the card to an Apple Wallet, which they then used the credit card through, obscuring the source, the actual credit card. I thought, very clever. And I should have known, because when I gave it the six-digit code, it said, okay, we're trying to add this to that Apple Wallet. And I said, what are you talking about? I'm not trying to do an Apple Wallet. That should have been the hint to me that they were doing something funny. You know, it's a cat and mouse, Tom and Jerry kind of a game.
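The kind of heuristic just described, a tiny "card test" authorization followed quickly by unrelated spending, can be sketched in a few lines. This is an invented toy illustration, not how any real issuer's fraud model works; the threshold and time window are made-up numbers:

```python
from datetime import datetime, timedelta

def looks_like_card_testing(transactions, probe_limit=2.00,
                            window=timedelta(hours=24)):
    """Flag the classic probe pattern: a tiny charge (checking that a
    stolen card is live), then spending at a different merchant shortly
    afterward. `transactions` is a list of dicts with 'when',
    'merchant', and 'amount' keys."""
    probes = [t for t in transactions if t["amount"] <= probe_limit]
    for probe in probes:
        follow_ups = [
            t for t in transactions
            if probe["when"] < t["when"] <= probe["when"] + window
            and t["merchant"] != probe["merchant"]
        ]
        if follow_ups:  # tiny charge, then prompt spending elsewhere
            return True
    return False

# A 99-cent probe followed hours later by a big purchase elsewhere.
history = [
    {"when": datetime(2026, 5, 1, 9, 0),
     "merchant": "Grubhub", "amount": 0.99},
    {"when": datetime(2026, 5, 1, 14, 30),
     "merchant": "Electronics Hut", "amount": 899.00},
]
print(looks_like_card_testing(history))  # True
```

Real issuer models weigh far more signals (merchant category, geography, device fingerprints), which is exactly why criminals route purchases through wallets and large merchants to blunt them.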
Steve Gibson
Yeah, it does feel like that. As I have said on the podcast before, in the early days, probably almost before this podcast, I used to fly up to Northern California to visit my family for the holidays. And this was, you know, pre-Expedia and so forth, so I actually had a travel agent from the old, old days whom I just kept around. And when we would have our conversation, she would invariably say, well, so, Steve, do you have the same credit card, or have you lost that one too? Because I was out on the Internet poking around, and it's like, oh yeah...
Leo Laporte
I don't feel so bad now.
Steve Gibson
I did lose that one. So, okay. Last Friday, Ollie Whitehouse, the Chief Technology Officer, the CTO, for the UK's NCSC, their National Cyber Security Centre, issued a clear warning at the level of the government. Ollie's warning posting was titled "Preparing for a vulnerability patch wave," and it carried the tagline: organizations must act now to prepare for a wave of patches that will address decades of technical debt. And I love that term. In this instance, I think the term "technical debt" is exactly the right way to express the concept that the piper may be about to get paid. I have a friend from the Midwest whose favorite term for this would be: they're about to get their just comeuppance. Yes, comeuppance indeed. So here's what the UK's NCSC CTO wanted everyone within the United Kingdom, within his sphere of influence, to appreciate. He wrote: whether they are technology producers and vendors or consumers and operators, all organizations have technical debt, a backlog of technical issues that's both expensive and time consuming, as a result of prioritizing short-term gains over building resilient products. Artificial intelligence, when used by sufficiently skilled and knowledgeable individuals, is showing the ability to exploit this technical debt at scale and at pace across the technology ecosystem. As a result, the NCSC expect there will be a forced correction, which is the way he phrased it, we're going to have a forced correction, to address this technical debt across all types of software, including open source, commercial, proprietary and software as a service. This is why we're encouraging all organizations to prepare now for when a patch wave arrives: a rush of software updates that will need to be applied across the technology stack to address the disclosure of new vulnerabilities.
All organizations must take steps to identify and minimize their Internet-facing and other externally exposed attack surfaces as soon as possible. As we've argued for some time, you should prioritize technologies on your perimeter and then work inwards, covering cloud instances and on-premises environments. By doing this, organizations can reduce the risk posed by latent vulnerabilities when they become known and exploited by attackers. Where organizations cannot apply updates across their entire environment, they should prioritize applying updates to their external attack surfaces. Where capacity extends beyond the external attack surface, organizations should prioritize critical security systems. It's also important for organizations to realize that patching alone will not always suffice. Some technical debt may be present in end-of-life or legacy technology that's out of support and so cannot receive updates. In such instances, organizations will need to replace technologies or bring them back within support, especially where they present an external attack surface. Building on the principles contained within our vulnerability management guidance, organizations should make plans to deploy software security updates quickly, more frequently and at scale, including across their supply chains. We are expecting an influx of updates to address vulnerabilities across all severities, and we expect a number to be critical. The NCSC then recommend, and they have three recommendations: first, where automatic secure hot patching is available, that is, patching that does not involve service disruption, this should be enabled as a priority. Okay, well, that's not hard to imagine as the first. Second, where automatic updates are available, including for embedded devices, these should be enabled to reduce the workload on support teams. So yeah, turn on automatic updates and go for it.
And third, where neither of the above is available, organizations will need to ensure that processes and risk appetites support frequent and scaled updating, noting the operational trade-offs around disruption and safety-critical systems. A risk-prioritized approach, such as the Stakeholder-Specific Vulnerability Categorization system, can be used to prioritize installing updates. And then they continue: however, should a critical vulnerability be under active exploitation, especially when affecting an Internet-facing system, then it is essential to accelerate the update process. Organizations can refer to the NCSC's new guidance on responding to active exploitation of vulnerabilities for more information. To summarize: you should put in place a policy to update by default, where you always apply software updates as soon as possible, and ideally automatically. This should be at the core of your update management process, but we recognize it may not apply in some instances, such as for safety-critical systems or operational technology. Patching alone won't address the systemic problems that, he writes, my previous blogs have addressed. I've appealed to technology producers and vendors to ensure systemic technical security debt is minimized by including, where appropriate, memory safety and containment technologies. Similarly, for consumers and operators, a focus on cybersecurity fundamentals, to raise resilience and to reduce the impact of breaches, should be a priority. This includes adopting and fully implementing Cyber Essentials, or the Cyber Assessment Framework for organizations operating essential services such as energy, health care, transport, digital infrastructure and government. Finally: prepare for the patch wave now. In conclusion, the NCSC advise all organizations, irrespective of size, to plan and prepare for the vulnerability patch wave. A good start is by reading the NCSC's updated vulnerability management guidance for larger organizations.
We also recommend working to gain assurance from your supply chains, both commercial and open source, so that they're prepared to navigate any required response. One thing that occurred to me as I was going through this, and it certainly applies to everybody in the UK and beyond, is this notion of gaining assurance from your supply chains. I would say, make sure that the providers of the equipment you have on the edge, which is under support, which can obtain updates, have a greased path into your email. Make sure that when they do notify you of updates that are available, it doesn't get routed to some "we'll get around to it next month during our monthly review process" pile. Given what we expect to have happening here over the next couple of months, make sure that the communications inbound from the vendors you are depending upon to have the most recent code running can get to you. And largely, what I just shared from the NCSC is a restating of what we already know, right? At the same time, for many of the CIOs and CSOs and IT heads in organizations throughout the UK, where this has reach, a clear statement and posting such as this can provide the coverage and the backup they may need to succeed in getting their organizations, and the other C-suite executives, to take this seriously, to understand what is probably going to be happening shortly. And as I was seeing this note from the UK's NCSC, I realized that I hadn't seen anything from our own CISA in the US, and that struck me as odd, since the CISA we've all come to know would normally have been shouting about this from the mountaintops. So I went digging to see whether maybe I had missed a statement which, it seemed clear, CISA should have made.
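The risk-prioritized SSVC approach mentioned above is really a set of decision trees. As a loose illustration only (the real SSVC methodology weighs more factors, and the input names here are simplified), a deployer-side triage function might look like:

```python
def update_priority(exploitation, internet_facing, mission_critical):
    """Simplified, SSVC-inspired patch triage.

    exploitation:     "none", "poc" (proof of concept), or "active"
    internet_facing:  True if the affected system is externally exposed
    mission_critical: True if the asset underpins essential services

    Returns one of "immediate", "out-of-cycle", "scheduled", "defer".
    Real SSVC decision trees have more inputs; this only shows the
    shape of a risk-prioritized update policy."""
    if exploitation == "active" and internet_facing:
        return "immediate"      # accelerate the update process
    if exploitation == "active" or (internet_facing and mission_critical):
        return "out-of-cycle"   # patch ahead of the normal cadence
    if exploitation == "poc" or internet_facing or mission_critical:
        return "scheduled"      # next regular maintenance window
    return "defer"              # batch with routine updates

# An actively exploited bug on a perimeter device jumps the queue.
print(update_priority("active", internet_facing=True,
                      mission_critical=False))  # immediate
```

The useful property of encoding the policy this way is that the triage decision becomes repeatable and auditable, which is exactly what the NCSC's "update by default" guidance is pushing organizations toward.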
In the wake of the Mythos revelations, I found a report published two weeks ago, on April 21, by Axios, and it exactly addresses the question: where is CISA? The reporting was posted as a scoop, titled "Scoop: CISA lacks access to Anthropic's Mythos," and Axios explained, writing: the Cybersecurity and Infrastructure Security Agency, you know, CISA, does not have access to Anthropic's powerful Mythos preview model, even though some other government agencies are using it, two sources tell Axios. This matters because the country's top cyber defense agency, tasked with helping to secure everything from banks to power plants, is on the outside looking in, at a time when the industries it works with are deeply concerned about AI-powered cyber attacks overwhelming their defenses. Anthropic decided against a public release of Mythos, and this is Axios bringing less-informed readers up to speed, due to its unprecedented ability to quickly discover and exploit security vulnerabilities. Instead, Anthropic provided it to more than 40 companies and organizations who are now testing it and working to shore up their systems. CISA is not on that list. Earlier this month, an Anthropic official told Axios the company had briefed CISA and the Commerce Department on Mythos's capabilities. The Commerce Department's Center for AI Standards and Innovation has reportedly been testing Mythos, so they have it. The NSA is also among the organizations using Mythos, despite the Department of Defense, which oversees the agency, having declared Anthropic, quote, a supply chain risk, unquote. It's unclear if the ongoing turmoil within the agency during the second Trump administration played any role in the agency not moving more swiftly to secure access. Spokespeople for CISA and Anthropic both declined any comment for this reporting by Axios. They wrote: the Trump administration has spent the last year, as we know, reducing capacity at CISA.
Instead, they have opted to give more policy influence to the White House's National Cyber Director and to push some programs out to the state and local level; trying, you know, to distribute this instead of having it as centralized as it had been under CISA. CISA's acting director, a guy named Nick Andersen, told lawmakers last week that the agency's resources are, quote, more limited than I would like, unquote. He said Trump has proposed cutting as much as $707 million more from the agency's budget in the upcoming fiscal year. CISA has already lost more than a third of its workforce and millions of dollars in funding. National Cyber Director Sean Cairncross is among the Trump officials negotiating broader civilian agency access to Mythos. The Treasury Department has also been negotiating access. Sources tell Axios that other organizations with access to Mythos have predominantly been using it to find exploitable security vulnerabilities within their own networks and software. Security teams at critical infrastructure organizations have often looked to CISA to share threat intelligence across their sectors and determine how to prioritize their security strategies. And as we know, those critical infrastructure organizations have very much depended upon CISA, but also on that blanket of hold-harmless protection, so that they're free to disclose things they discover, which is still a little bit in limbo. I hadn't heard about this acting CISA director Nick Andersen, so I checked him out, and he appears to be eminently competent and qualified. He's a decorated U.S. Marine Corps veteran who served as CIO, Chief Information Officer, for Navy Intelligence, and head of the Office of Intelligence, Surveillance and Reconnaissance Systems and Technologies at the U.S. Coast Guard.
He served on active duty managing intelligence mission systems in Iraq, Europe, and Africa, and is a veteran of Operation Iraqi Freedom. He served as Principal Deputy Assistant Secretary at the Department of Energy's Office of Cybersecurity, Energy Security, and Emergency Response, where he led national efforts to secure U.S. energy infrastructure. He also served as federal cybersecurity lead and senior cybersecurity advisor to the Federal CIO at the White House Office of Management and Budget. So, you know, this guy is certainly competent to be on top of CISA. I've got no complaints. With Nick's background, it appears what he needs is more resources and support, and that CISA's lack of access to Mythos is largely due to the, as we're now calling it, War Department's unfortunate feud with Anthropic. Anthropic made clear in 2025, at the time that it signed its contract with the Pentagon, that it did not want its AI technology to be used for mass surveillance of people within the United States or for fully autonomous weapons systems. As we know, the Department of War subsequently demanded that Anthropic drop those restrictions, and Anthropic refused to do so. They published a public statement explaining their position. And regarding fully autonomous weapons, they wrote: Frontier AI systems are simply not reliable enough to power fully autonomous weapons. And without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that highly trained professional troops exhibit every day. Anthropic offered to work with the Department of War on R&D to improve the reliability of these systems, but was turned down. So after that, in apparent retaliation and without any evidence, the Pentagon suddenly declared Anthropic to be a supply chain risk. And this is all very unfortunate, since CISA should absolutely have access to Anthropic's Mythos Preview.
Hopefully, the White House's National Cyber Director, this Sean Cairncross, who appears to understand the need, will be able to make something happen. You know, it's clearly ridiculous to have one of the U.S.'s leading AI firms frozen out of the government because, as Secretary Pete Hegseth declared, it is "woke AI," whatever that means in this context. For the time being, it appears that CISA is silent for purely political reasons.
Steve Gibson
Politics really should not intrude into this at all, and unfortunately it very much has. I mean, CISA is in the doghouse because of what happened in 2020.
Leo Laporte
Right?
Steve Gibson
Chris Krebs.
Leo Laporte
Right.
Steve Gibson
And now the White House is saying they want approval of all future AI models, period. They're about to draft a proposal that AI models can't be released without government approval. This is exactly the wrong direction to take with this stuff.
Leo Laporte
Well, and I also did see that Anthropic wanted to do a second round. They wanted to expand their program by adding an additional 70 organizations that would have access to Mythos Preview, and the White House said no, which is like blocking their ability to incrementally roll this out. An incremental disclosure here is exactly what you want. You give the core 40 a month with it, and then you widen the circle again and let the next tier have access to it.
Steve Gibson
Yeah, it's a little infuriating, because political motivation and what's the right thing to do from a security point of view don't necessarily coincide. And that's what you're seeing here. And it makes us all less safe, frankly.
Leo Laporte
Yeah. Okay, break time and then we're going to look at this newest Linux local privilege escalation and look at how AI is reshaping the bug bounty business.
Steve Gibson
Excellent. Well, I can just sit back and relax because Petaluma Leo is going to take control. This episode of Security Now brought to you by Meter, the company building better networks. If you're a network engineer, you know the headaches: legacy providers, inflexible pricing, IT resource constraints stretching you thin, complex deployments across fragmented tools. Look, you're mission critical to the business, but you're working with infrastructure that wasn't built for today's demands. That's why businesses are switching to Meter. Meter delivers full stack networking infrastructure, wired, wireless, and cellular, that's built for performance and scalability. Meter designs the hardware, they write the firmware, they build the software, they manage the deployments, they provide support. Meter offers everything from ISP procurement to security, routing, switching, wireless, and firewall. They do cellular, power, DNS security, VPN, SD-WAN, and multisite workflows, all in a single solution. Meter's single integrated networking stack scales from major hospitals, branch offices, warehouses, and large campuses to data centers, even Reddit. The Assistant Director of Technology for Webb School of Knoxville said this, quote: We had more than 20 games on our campus between our two facilities. Each game was streamed via wired and wireless connections, and the event went off without a hitch. We could never have done this before Meter redesigned our network. With Meter, you get a single partner for all your connectivity needs, from first site survey to ongoing support, without the complexity of managing multiple providers or tools. One number to call. Meter's integrated networking stack is designed to take the burden off your IT team and give you deep control and visibility, reimagining what it means for businesses to get and stay online. Meter: built for the bandwidth demands of today and tomorrow. Thanks to Meter.
So much for supporting Steve and Security Now. We invite you to go to meter.com/securitynow and book a demo. You'll be glad you did. That's M-E-T-E-R, meter.com/securitynow. Book a demo. Okay, Meter. All right. Now back to Steve. All right, Steve, on we go with security.
Steve Gibson
So the news late last week was of the discovery of another serious local privilege escalation discovered in the Linux kernel. And it had been there for a long time. And yes, before you ask, it was found by an AI vulnerability discovery system operated by a security firm named Theory. They wrote, quote: An unprivileged local user can write four controlled bytes into the page cache of any readable file on a Linux system and use that to gain root. A simple 732-byte, nine-line Python proof of concept has been posted to GitHub, which immediately elevates any normal user to root. And of course that's not something you want to leave unpatched. So this is important, and the Linux distros that are known for sure, Debian, Ubuntu, and SUSE, have immediately issued patches for the problem, and the overseers of many other distros have as well. Red Hat initially said it was going to defer the fix, but then later changed its guidance to indicate that it will be going along with the other distros and will be patching promptly. The CVE has been rated as high severity at 7.8 out of 10. And still, that's bad. 7.8 is about as bad as it gets for a local privilege escalation, since the attacker first needs to get into a non-root account where they're able to execute this script in order to obtain elevation. But on the other hand, anybody who has local access to a machine is also able to use this. So it's a complete breach of Linux account security. At the end of one of the reports of this, I ran across the statement: AI-assisted vulnerability research recently prompted the Internet Bug Bounty, that's IBB, the Internet Bug Bounty program, to suspend awards until it can understand how to manage the growing volume of reports. I thought that was interesting, and it was news to me, so I went hunting. Here's what I found about that.
Near the end of March, the Internet Bug Bounty program, which is run by HackerOne, paused its acceptance of new vulnerability submissions due to what HackerOne described as an increasing imbalance between vulnerability discoveries and the ability of open source maintainers to remediate them. And of course, yes, AI is the underlying driver of all this. Okay, but let's back up a little bit. Recall that the Internet Bug Bounty is a crowdfunded vulnerability reward program that was started 14 years ago, back in 2012, and it's operated through the HackerOne platform. Its purpose and intent is to reward, and thus incentivize, independent security researchers to find and responsibly disclose vulnerabilities in widely used open source software. The funding for the program comes from a consortium of major tech companies including Facebook, GitHub, Shopify, TikTok, and others, who all contribute to a shared bounty pool. The underlying idea is that since everyone depends on open source infrastructure, everyone should share in the cost of helping to secure it. And the vulnerability discovery payout structure is pretty simple: 80% of each awarded bounty goes to the researcher who reported the vulnerability, with the remaining 20% being contributed to the open source project itself, where the trouble was found, to support, you know, its repair and remediation. So that helps to fund the remediation work and makes the program go. It's been widely seen as a success, having paid out more than one and a half million dollars since the program began. But almost predictably, AI has messed everything up. HackerOne stated, quote: The discovery landscape is changing. AI-assisted research is expanding vulnerability discovery across the ecosystem, increasing both coverage and speed. The balance between findings and remediation capacity in open source has substantially shifted. So the problem is being called triage fatigue.
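That 80/20 payout structure is simple enough to express in a few lines. Here's a minimal Python sketch, purely for illustration; the function name and the integer-cents representation are assumptions of this sketch, not anything from HackerOne's actual systems:

```python
def split_bounty(total_cents: int) -> tuple[int, int]:
    """Split an IBB award as described: 80% to the reporting researcher,
    20% to the affected open source project to help fund remediation.
    Working in integer cents avoids floating-point rounding."""
    researcher = total_cents * 80 // 100
    project = total_cents - researcher  # remainder, so the two always sum to the total
    return researcher, project

# Example: a $5,000 award pays the researcher $4,000
# and sends $1,000 to the project.
print(split_bounty(500_000))  # → (400000, 100000)
```

Computing the project's share as the remainder, rather than as a second percentage, guarantees nothing is lost to rounding on odd amounts.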
And the trouble is not just the increased volume of reports; that would be bad. What's interesting is it's also not the signal-to-noise ratio. The actual problem is the nature of the noise. Weirdly, the quality of the noise, while still noise, has increased. We all know Daniel Stenberg, the creator of Curl. He expressed it this way. He said: More convincing crap is worse than obvious crap. You can't dismiss it quickly, you have to investigate it, and you waste real time getting to the point where you can prove it's nonsense. At scale, this stops feeling like a helpful external contribution model and starts to resemble something closer to a denial of service attack on the people who are responsible for security. Which is like, yikes, a consequence of AI. So turning the clock way back, 31 years ago, in 1995, Netscape launched the first widely recognized paid bug bounty program, offering to pay researchers for their responsible reporting of significant bugs which they discovered in Netscape Navigator 2.0. So they were really ahead of the game at that point. Of course, they also had a web browser that was ahead of the game. And that model, the notion of paying researchers for responsibly reporting bugs they find, has been functioning vibrantly ever since. So the notion that AI may be driving a fundamental change to this long-standing vulnerability discovery and reporting model is important enough, as I said at the top of the show, to be a contender for today's main topic, except that the idea of Google going off half-cocked and adding an explicit AI interface for JavaScript in Chrome also needed ample discussion space today. And we're going to cover Mozilla's pushback against that at the end of the podcast. But meanwhile, the company Aikido, which is deep into automated vulnerability discovery as a business, recently interviewed not only Curl's Daniel Stenberg, who I just quoted, but also Casey Ellis.
Casey is the founder of Bugcrowd, and as such is one of the people who helped establish and formalize bounties for bugs starting back in 2012. Aikido titled their report "Bug Bounty Isn't Dead, But the Old Model Is Breaking." I'm going to share what they wrote and also what my intuition immediately suggests about the nature of the change. So they wrote: Bug bounty has been a very hot topic lately. We're seeing high profile programs go offline or fundamentally change. The Internet Bug Bounty, one of the most important programs for open source projects, is pausing submissions, Curl is removing payouts, and Node.js is removing its bounty entirely. That's not noise, that's signal. We wanted to understand where bug bounty is actually heading. So we sat down with two of the most credible voices on opposite sides of this conversation: Daniel Stenberg, creator of Curl, who's living the maintainer reality and recently halted bug bounty payments, and Casey Ellis, the founder of Bugcrowd, one of the people who helped establish the model in the first place. What we found was that the bug bounty model is at a crossroads, and we're in the midst of a big shift. Before we get into where the model is headed, let's take a step back and understand why it's been one of the most effective ideas in security over the last decade. It all stems from the idea of letting the Internet try to break your stuff before attackers do. And it worked because it gave companies scale they could never hire. As Casey put it, quote: If you're trying to outsmart a global pool of attackers with someone working 9 to 5, the math for that is wrong, unquote. They said, that's the magic of bug bounty. Instead of relying on a handful of internal people, you tap into a global pool of different skill sets, different perspectives, and different motivations, all attacking your system in ways your internal team never thought of.
And that's without the significant overhead required to hire specialist experts internally and then work to keep them busy. All this explains why bug bounties became fundamental to modern security programs. What's changing now is not the demand for security, it's the economics of how bug bounties operate. AI has altered the balance, and not in a good way. Finding bugs is now cheaper than ever. Writing reports is even easier, and submitting them has become effectively frictionless. Meanwhile, the cost of validating those reports and then actually fixing the issues has not changed at all. Those final two required steps, validating and then fixing bugs, remain as labor intensive as ever. We are seeing this play out in practice. There are three types of report submitters. First, there are companies that use a new approach for legitimate reports. These are reports that use layered AI approaches that combine the strengths of multiple AI models, guardrails, orchestration, and context, such as Aikido's own AI pen testing capabilities. And Aikido is, of course, plugging their own solution, as we would expect them to on their own website. But we know that Anthropic also set up their Mythos Preview system to do the same. Both are discovering and, importantly, verifying suspected vulnerabilities to produce much higher quality reports, which in the case of Mythos include proof-of-concept exploits. Aikido continues enumerating these three classes of bug sources. They said: Then there are individuals who escalate their research and report writing using AI as a tool. And finally, there are individuals who are able to upskill by virtue of these AI models. They generate reports that seem technically plausible but are still completely wrong. Daniel described it perfectly, and this is where we quoted him earlier, saying more convincing crap is worse than obvious crap. They said: You can't dismiss it quickly, you must investigate it, right, because it looks real.
And then you waste real time getting to the proof that it's nonsense. At scale, this stops feeling like a helpful external contribution model and starts to resemble something closer to a denial of service attack on the people responsible for security. And the impact, they write, has been truly devastating. The Internet Bug Bounty program paused all new submissions because AI has dramatically increased discovery volume beyond what their maintainers can handle. Node.js lost its bounty when funding disappeared. The reports still come in, but the payouts are gone. And Curl removed financial rewards after being flooded with AI-generated reports. Casey emphasized that this isn't a new problem, it's an old one, just massively accelerated. He said: We're doing stupid things faster with more energy. Bug bounty, they write, has always had an issue with being a level playing field. One person submits a report and another person has to validate it. That sounds equal on paper, but in practice it has always been difficult for one person to keep up with validation, even before AI existed. Now it's practically impossible. We're now in a world where anyone can generate dozens of reports, make them appear credible, and submit them instantly. On the receiving end, however, the constraints have not changed. It's still humans reviewing, triaging, and making decisions. Open source has been the first to feel this impact. Open source is where the pressure has shown up first, largely because it was already operating close to its limits. Most projects are maintained by small teams, often volunteers with limited time and resources, yet they underpin massive portions of the web. Of course, we all think of that XKCD cartoon, right, with the little tiny block that's holding up this whole creaky infrastructure. They said: Add financial incentives, global participation, and now AI-generated submissions, and the system is quickly overwhelmed.
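Stenberg's "denial of service on maintainers" framing can be made concrete with a toy cost model. Every number below is an illustrative assumption, not a figure from the interview; the point is only that when most incoming reports look credible, triage hours grow far faster than report volume does:

```python
def weekly_triage_hours(reports_per_week: int,
                        frac_convincing: float,
                        hours_per_convincing: float = 2.0,
                        hours_per_obvious: float = 0.25) -> float:
    """Estimate maintainer hours spent just deciding which reports are real.
    'Convincing' reports (genuine findings or AI-polished nonsense) require
    deep investigation; obviously bogus ones can be dismissed quickly.
    All per-report costs are illustrative assumptions."""
    convincing = reports_per_week * frac_convincing
    obvious = reports_per_week - convincing
    return convincing * hours_per_convincing + obvious * hours_per_obvious

# Before AI: modest volume, mostly obviously dismissible junk.
before = weekly_triage_hours(10, frac_convincing=0.2)
# After AI: 5x the volume, and most of it now *looks* credible.
after = weekly_triage_hours(50, frac_convincing=0.8)
print(before, after)  # → 6.0 82.5 — a ~14x jump in triage load from a 5x jump in volume
```

Which is exactly the asymmetry the interview describes: submission became frictionless, but validation is still paid for in human hours.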
The Internet Bug Bounty program said it directly, quote: AI-assisted discovery has shifted the balance between findings and remediation capability. Translation: we're finding more bugs than we're able to handle. So now the bounty is gone, and yet the expectation of reporting remains. But the question is, is the way bug bounty programs have been used to effectively scale security teams and improve security posture still viable without financial incentives? Bugcrowd's founder Casey Ellis doesn't necessarily believe so. Every organization should have a vulnerability disclosure program, because if you're on the Internet, people will find issues. But not every organization is in a position to run a public reward-driven bounty program. In Casey's words, Curl likely should not have had one to begin with. Casey said: I don't think every organization should run a bounty program. The Curl program should not have been a bounty program in the first place, unquote. And yet Daniel's experience shows something more nuanced. Daniel views the bounty program as a success because it incentivized real scrutiny of the code. He said: I've always thought about it as a success because it's a great way to actually encourage people to scrutinize the code. So what happens when you remove financial incentives? You'd assume that when you remove financial incentives, you'd get rid of AI slop, but that you'd also reduce the likelihood of genuine vulnerabilities being disclosed. However, when Curl removed the financial incentives, something interesting happened. The low quality AI-generated noise largely disappeared. Daniel said, quote: We have stopped getting AI slop security reports. Instead we get an ever increasing amount of really good security reports submitted in a never before seen frequency, which put us under serious load, unquote. Okay, so I'm going to interrupt here to mention that I have a theory about why that is.
Back when discovering vulnerabilities required long hours of painstaking, grueling work to step through and reverse engineer code, it was no fun. The only motivation, and it needed to be significant, was the promise of a big pot-of-gold payout at the end of that tunnel. AI-driven vulnerability discovery has changed that. Today, AI makes bugs both fun and easy to find. It allows less skilled users to participate, thus broadening the bug hunter base. And there are plenty of people who would sincerely like to give back and contribute. Until now, they haven't been able to, but now they have the means. They don't need a monetary incentive. They truly want to help. I think it makes sense. Aikido continues with their report, writing: Instead of drowning in low quality reports, maintainers are now dealing with a high volume of genuinely useful findings, many of which are powered by AI-assisted research. The barrier to entry has dropped, not just for bad reports, but for good ones too. But this creates a new kind of pressure. Even high quality reports take time to understand, to validate, and to repair. And many of these good findings still fall into gray areas: bugs that may not meet security thresholds but still require some attention. The result is a sustained, and in some ways increased, load on already constrained teams. So in a strange way, the system has not been relieved, it's been refined. And this is where it gets interesting. Because while this is painful in the short term, it might actually be a step in the right direction. By removing financial incentives, we strip away a large portion of the noise. What's left is a signal that is, on average, of higher quality, more intentional, and more aligned with actual security outcomes. AI is lowering the barrier for researchers to do meaningful work. It's enabling more people to find real issues faster than ever before.
That combination, less noise, more signal, but still overwhelming volume, suggests we're in a transition phase. The historical model is breaking under the pressure, but what's emerging underneath it might be better. This would look like a system where disclosure is expected, not incentivized; rewards are more targeted, not broad; and the focus shifts from more reports to better outcomes. We're not there yet. Right now, we're in the messy middle, where the old model no longer works and the new one hasn't fully formed yet. But if this plays out correctly, we don't end up with less bug bounty. We end up with a more sustainable version of it. What we're likely moving toward is a model where vulnerability disclosure becomes a baseline expectation across the industry rather than something optional or incentivized. Public bounty programs don't go away, but they become more controlled, more targeted, and more aligned with organizational maturity. AI will inevitably play a larger role in filtering and triaging the growing incoming volume of reports. It won't solve the problem entirely, but it will become part of how we manage it. We'll also see a shift in what gets rewarded. As automated systems become better at finding low level issues, the value of those findings will drop. Instead, incentives will move toward higher impact work, the kind that requires creativity, context, and a deeper understanding of the systems. That means researchers will increasingly focus on areas like chaining vulnerabilities, exploiting business logic, and breaking complex or emerging technologies where automation may continue to struggle. Okay, so think about this from the bounty provider standpoint. Taking Curl as an example, Daniel terminates bug bounty payouts and observes an immediate drop in the total number of reports. But it's the bogus reports predominantly that disappear, not the useful reports that describe true problems. Given that, why would he ever resume bounty payouts?
The Internet Bug Bounty is likely to observe the same thing. As I noted, what appears to be happening is that bugs are now so much easier to discover, even fun to find and report, that it's no longer necessary to dangle a carrot. Actual human altruism, which, believe it or not, in 2026 still exists, is now sufficient to drive what once required the promise of payment. It'll take a while for this to percolate throughout the industry, but my prediction now is that the 31 years of bug bounty programs we've had, ever since Netscape first offered payment for reports of bugs in Navigator 2.0, are probably going to wind down over time. And the reason our programs are currently overwhelmed by good bug reports is that, unfortunately, our software is very buggy. It's going to take a while. I mean, this is that new phase where AI is truly finding problems that were not known to exist. Those will wash out of the system over the next six months or so, and then the volume of really good reports will necessarily drop, because there won't be nearly as many bugs to be found in real time. And as AI then continues to check code before it goes out the door, we're not going to have new bugs introduced into the ecosystem. I think it's really interesting that potentially we are talking about a major shift in the way bugs are discovered. It won't nearly be as much for money moving forward as it has been in the past. Leo.
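Steve's prediction that the surge will wash out can be sketched as a simple depletion model: if AI-assisted research finds a roughly fixed fraction of the remaining latent bugs each month, the monthly volume of valid reports decays geometrically. The numbers here are purely illustrative assumptions, not anything from the episode:

```python
def monthly_valid_reports(latent_bugs: int, find_pct: int, months: int) -> list[int]:
    """Each month, AI-assisted researchers find (and maintainers fix)
    find_pct percent of the bugs still latent in the code base.
    Integer arithmetic keeps the toy model exact."""
    found_per_month = []
    remaining = latent_bugs
    for _ in range(months):
        found = remaining * find_pct // 100
        found_per_month.append(found)
        remaining -= found  # the pool shrinks; nothing new is introduced
    return found_per_month

# 1,000 latent bugs, 30% found per month: the initial flood of
# genuine reports falls off sharply within roughly six months.
print(monthly_valid_reports(1000, 30, 6))  # → [300, 210, 147, 102, 72, 50]
```

The model deliberately assumes no new bugs enter the pool, which mirrors the episode's second assumption: AI checking code before release keeps new bugs from being introduced.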
Steve Gibson
Okay, you want to take a little break?
Leo Laporte
I do.
Steve Gibson
All right.
Leo Laporte
And then we're going to look at a new product from Anthropic which we might call Mini Mythos or Mythos Light or Mythos junior or something. Yes. And it's available to all Claude Enterprise users now.
Steve Gibson
Okay. Oh, cool. You're watching Security Now with Mr. Steve Gibson. We do this show every Tuesday right after MacBreak Weekly. That's about 1:30 Pacific, 4:30 Eastern, 20:30 UTC. And you can watch it live if you really want the freshest version of it. Our club members get to watch in the Club Twit Discord. But there's also, of course, TikTok, X.com, Facebook, LinkedIn, Twitch, YouTube, and Kick. So pick your platform, watch us live, or get it after the fact on Steve's site, GRC.com, or our site, twit.tv/sn. We'll have more Security Now right after this. This episode of Security Now is brought to you by Bitwarden, the trusted leader in password, passkey, and secrets management. With over 10 million users across 180 countries and more than 50,000 businesses, Bitwarden is consistently ranked number one in user satisfaction by G2 and Software Reviews. With Bitwarden Access Intelligence, organizations can identify weak, reused, or exposed credentials and take action immediately, while vault health alerts and password coaching surface risks to individual users in real time and guide them to fix issues on the spot, turning one of the most common causes of breaches into something visible, prioritized, and fixable. And now Bitwarden is introducing the new Agent Access SDK, a powerful way for developers and teams to securely integrate controlled credential access into applications, automation workflows, and AI agents. It enables programmatic, just-in-time access to vault-stored credentials without exposing sensitive data, supporting secure use within modern development environments. Now, this release does not, very important, does not incorporate any AI functionality into the Bitwarden solution, and maybe even more importantly, does not grant AI systems persistent or unrestricted access to your vault data. That's not the point.
It's a separate open source development toolkit designed to enforce secure, human-approved, and scoped credential access for teams that leverage AI agents in their workflow. It's available now in an alpha phase, early days yet, for testing, but they want everybody to use it. Not just every Bitwarden customer, but everybody using any password manager anywhere. The Agent Access SDK introduces a secure framework for how agents request, receive, and use credentials, helping define a model for safe credential interaction in agent-driven systems. And I love Bitwarden, as they're giving it away. Any password company that wants to use it can use it. It's open. Bitwarden now enables passkey login, I love this, for Windows 11, securely unlocking devices at the OS level. Of course they had to work with Microsoft on this to provide native passkey support. This will extend SSO to automatically log users into more apps, making credential management across devices more seamless than ever. And it works with Windows Hello. Imagine never having to enter your password again. For those who want a lightweight option, Bitwarden Lite offers a self-hosted password manager designed for home labs, personal projects, or quick deployments with minimal overhead. And don't worry about Bitwarden's open source code. Besides the fact that it's on GitHub, it's GPL licensed. You can look at it yourself. It's also regularly audited by third party experts. It meets all the standards: SOC 2 Type 2, GDPR, HIPAA, CCPA, ISO 27001:2022. Of course it is absolutely secure. Get started today with Bitwarden's free trial of a Teams or Enterprise plan, or get started for free across all devices as an individual user at bitwarden.com/twit. That's bitwarden.com/twit. We thank them so much for supporting Security Now. Okay, no more singing. Back to Steve. Yeah, I thought that was really interesting that the first bug bounty was 31 years ago. That's remarkable. That is really amazing. Yeah, yeah.
Leo Laporte
It's a program that has worked. But to me it really makes sense: finding bugs and contributing, you know, giving back. We know that there is a lot of altruism out there in the world.
Steve Gibson
Absolutely.
Leo Laporte
You know, people who would like to contribute. And so, you know, spending some time working with an AI-enhanced vulnerability finding system, I think that makes just total sense.
Steve Gibson
Well, that's one thing I don't think Netscape could have anticipated 31 years ago, that AI would suddenly be finding all these flaws.
Leo Laporte
For the intervening 30 years it's been fabulously successful; it's worked really well. You know, millions and millions of dollars have been paid out for, you know, authentic bugs and vulnerabilities that have been found. So the system has been working. Now we have AI able to pick up that burden and carry it forward.
Steve Gibson
There's another category of people who are out of work, bug bounty finders.
Leo Laporte
Well, that's true. It's probably not a career path anymore. Although if you are expert in running AI discovery, then you've got a new way to make some money.
Steve Gibson
Well, actually that's a good point. That Linux copy fail flaw, they found it not with the AI solely, but because a very smart security researcher pointed the AI in a specific direction and said, hey, I wonder if this is a problem. And then the AI was able to go a little step further. So it was really a partnership.
Leo Laporte
Exactly, yeah. Okay. Well, it's apropos of the changes being wrought by AI vulnerability discovery that we have Anthropic's announcement late last week of Claude Security, which is now entering public beta for their enterprise customers. We could think of it as Mythos Jr., and that's sort of how they're casting it. Here's what Anthropic posted about this. They said: Claude Security, which is what they're calling it, Claude Security is now available in public beta to Claude Enterprise customers. AI cybersecurity capabilities are advancing fast. Today's models are already highly effective at finding flaws in software code. The next generation will be more capable still, and will be particularly effective at autonomously exploiting these flaws. Now is the time for organizations to act to improve their security, preparing for a world in which working software exploits are much easier to discover. Recently we made Claude Mythos Preview, which can match or surpass even elite human experts at both finding and exploiting software vulnerabilities, available to a number of partners as part of Project Glasswing. But our cybersecurity efforts go beyond Glasswing. With Claude Security, a much wider set of organizations can put our most powerful generally available model, Claude Opus 4.7, to work across their code bases. Opus 4.7 is among the strongest models available for finding and patching software vulnerabilities, and for discovering complex, context-dependent issues that might otherwise be missed. Claude Security, previously known as Claude Code Security, has already been tested by hundreds of organizations of all sizes in limited research preview, helping teams scan their code bases for vulnerabilities and generate targeted patches. Their feedback has shaped today's release, which makes Claude Security available to all Enterprise customers.
It comes with scheduled and targeted scans, easier integration into audit systems, and improved tracking of triaged findings. No API integration or custom agent build is required. If your organization uses Claude, you can start scanning today. Opus 4.7's capabilities are also being brought to cyber defenders through Claude's integration into software tools that many enterprises already use. Our technology partners, including CrowdStrike, Microsoft Security, Palo Alto Networks, SentinelOne, Trend AI, and Wiz, are embedding Opus 4.7 into their tools. In addition, services partners like Accenture, BCG, Deloitte, Infosys, and PwC are now helping organizations deploy Claude-integrated security solutions. We're entering a pivotal time for cybersecurity. AI is compressing the timeline between vulnerability discovery and exploitation. We believe the right response is to make sure defenders have access to frontier capabilities in the ways most accessible to them, through Claude directly and through our partners. Claude Security can be accessed directly from the Claude AI sidebar or at Claude AI Security. To begin, select one of your repositories, or scope to a specific directory or branch, then start a scan. While scanning, Claude reasons about code much like a security researcher. Rather than finding vulnerabilities by searching for known patterns, Claude seeks to understand how components interact across files and modules, traces data flows, and reads the source code. Once complete, Claude provides a detailed explanation of each of its findings, including its confidence that the vulnerability is real, how severe it is, its likely impact, and how it can be reproduced. It also generates instructions for a targeted patch, which users can open in Claude Code on the web to work through the fix in context. That just sounds fantastic. Over the past two months, we've refined Claude Security in line with what we learned from its use in production across hundreds of enterprises. 
Specifically, we've seen that detection quality is paramount. Teams have told us that high-confidence findings are what really accelerate security work. Claude Security's multi-stage validation pipeline independently examines each finding before it reaches an analyst, which drives down false positives, and Claude attaches a confidence rating to every result. This means that the signal that reaches the team is worth acting on. Time from scan to fix is the metric that matters. Early users pointed to this consistently, with several teams going from scan to applied patch in a single sitting, instead of days of back and forth between security and engineering teams. Teams want ongoing coverage, not one-off audits. We've added the option to schedule scans so teams can set a regular cadence around reviewing and acting on findings. With this release, we've also added the ability to target a scan at a particular directory within a repository, dismiss findings with documented reasons so that future reviewers can trust prior triage decisions, export findings as CSV or markdown for existing tracking and audit systems, and send scan results to Slack, JIRA, or other tools via webhooks. Okay. Given the wind-up we've seen from Mythos over the past month, and the way they describe this, I cannot imagine why any organization whose software might contain externally exploitable vulnerabilities or bugs would not be jumping on this with all possible speed. As I noted a few weeks back, an organization's own internal software is only closed source to the outside world. To the organization, their own source code is wide open. And there is now an emerging tool that stands a good chance of discovering bugs that have until now escaped notice. I would love to be a fly on the wall in the software development dungeons of the world's enterprises, you know, watching their reactions to what they begin seeing from this Claude Security. 
Basically, anybody is now able to purchase a mini version of Mythos. And I would argue that even if Mythos is better at finding bugs, there's still benefit from running this Mythos Jr., you know, Claude Security, over your code base to see whether it's able to find something. Certainly if it can. Will Mythos itself be available in the future? Well, we presume it will be at some point. But you have this now. So I think this is, maybe in retrospect, a predictable evolution on Anthropic's part, but certainly welcome.
Steve Gibson
I do this anyway. I mean, I don't have Mythos or anything like it. I just have the regular, you know, Claude Opus 4.7 and ChatGPT 5.5. And in fact I often have ChatGPT check Claude's work and Claude check ChatGPT's work.
Leo Laporte
Yep.
Steve Gibson
I frequently say, let's do a security audit on these repositories. I mean, that by itself is useful. I've found all sorts of stuff. I've also had it do security audits on my systems, and it's found errors and made corrections there too. You know, just the regular models are useful. I can't wait to see how Mythos does. Yeah, yeah.
Leo Laporte
And I think from what they've said, what this adds is the ability, for example, to schedule scans so that your engineering software development team, they're just working along and then periodically the code base is given a scan and a check to see if anything significant has been found.
Steve Gibson
That's a great idea. I think that's. Yeah, that's brilliant.
Leo Laporte
Okay. So OpenAI announced that they've decided, and I was very impressed by this, I'll just say that ahead of time, to make account login security a selling point. Their posting was titled Introducing Advanced Account Security. And they explain: Today we're introducing Advanced Account Security, a new opt-in setting for ChatGPT accounts. And you've got it now, Leo. Designed for people at increased risk of digital attacks, as well as for those who want the strongest account protections available. It brings together a set of heightened security measures that help safeguard against account takeover while making those protections easier to activate in one place. Once enrolled, Advanced Account Security protects users in Codex as well. They wrote: People are turning to AI for deeply personal questions and increasingly high-stakes work. Over time, a ChatGPT account can hold sensitive personal and professional context and sit at the center of connected tools and workflows. For some people, like journalists, elected officials, political dissidents, researchers, and those who are especially security conscious, the stakes are even higher. This effort is part of our broader cybersecurity action plan to broaden access to the technologies that can help protect communities, critical systems, and our national security. We want users to have the controls to make the security and privacy choices that are right for them. At the same time, we want to ensure users understand, and here's a critical part, understand that the increased protection of Advanced Account Security comes with an increased responsibility for account recovery. And so now they get specific: Advanced Account Security brings together a series of controls that strengthen sign-in protections, tighten account recovery, reduce exposure from compromised sessions, and give users more visibility into account activity. It's available to opt into in the security section of users' ChatGPT accounts on the web. 
Protection applies to both ChatGPT and Codex accounts that are accessed through that login. So we have stronger sign-in methods: Advanced Account Security requires passkeys or physical security keys while disabling password-based login, helping make phishing-resistant sign-in the default for people who need it most. So password-based login: gone. You must use a passkey or physical security key. Next, more secure account recovery: If a user's email account or phone number is compromised, an attacker may try to use one of them to gain access to their ChatGPT account via email- or SMS-based recovery. We know that, right? They say, to reduce this risk, Advanced Account Security disables email and SMS recovery and requires stronger recovery methods: backup passkeys, security keys, and recovery keys. Because account recovery is restricted to these more secure methods, OpenAI support will not be able to assist with account recovery for users enrolled in Advanced Account Security. Again, with truly heightened security comes much more responsibility. You know, they're saying, we can't help you, because you don't want bad guys posing as you to get help from us either. So now we're talking. Hopefully this sort of much more responsible security becomes more commonplace. Now, the only gotcha, of course, is that it makes users entirely responsible for the security they claim to want and to cherish. By explicitly removing email and SMS account recovery loops, the most common phishing and other attacks will be thwarted. And in the case of ChatGPT login, this makes sense. OpenAI explains two additional security enhancements, writing: Shorter sessions. Sign-in sessions are shortened to reduce the window of exposure if a device or active session is compromised. 
Users also receive alerts when there's a login to their account, and they can review and manage the active sessions across the various devices they're signed into. And finally, automatic training exclusion: People working with especially sensitive information may opt to not have those conversations used for model training. With Advanced Account Security enabled, that preference is automatic; conversations from those accounts will not be used to train our models. They finish, saying: Using physical security keys such as YubiKeys is one of the strongest defenses against phishing. To make that level of protection easier to access, we have partnered with Yubico, a leader in hardware-based authentication and account protection, to offer our users preferred pricing on a customized bundle of best-in-class security keys: the YubiKey C Nano, which is designed to stay in your laptop. You know, you stick it into a USB port and just its head basically tucks out a little bit, so you're able to touch the little gold convex head of it in order to authenticate. And they said, that's for low-friction daily authentication; and the YubiKey C NFC for backup and use across laptops and mobile devices. We're launching this partnership as part of Advanced Account Security, but the bundle will be available to all eligible users in their security settings on the web, so more people can adopt stronger phishing-resistant account protection. Users will also be able to use other FIDO-compliant security keys or software-based passkeys. So I logged into ChatGPT, which I am no longer using as my daily driver. You know, I switched to Claude after appreciating how confused an AI's context window would become if I were to share it with my wife Lori. So now we each have our own. Once I was there in ChatGPT, sure enough, the security panel of the settings dialog now has many new features, and I think this is great. 
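Under the hood, the passkey and security-key sign-in being described rests on the standard WebAuthn browser API. Here's a minimal hedged sketch of that flow; the option values are illustrative assumptions, not OpenAI's actual code, and a real site would receive the challenge from its server and verify the signed result there.

```javascript
// A minimal sketch of phishing-resistant sign-in via the standard WebAuthn
// API (navigator.credentials). The option values are illustrative, not any
// vendor's real implementation. Outside a browser (e.g. in Node.js) this
// simply returns null.
async function signInWithPasskey(challengeBytes) {
  if (typeof navigator === 'undefined' || !navigator.credentials) {
    return null; // WebAuthn is only available in browsers
  }
  return navigator.credentials.get({
    publicKey: {
      challenge: challengeBytes,    // single-use, server-issued nonce
      userVerification: 'required', // PIN or biometric on the authenticator
      // allowCredentials omitted so the authenticator can offer any passkey
    },
  });
}
```

Because the credential is scoped to the site's origin and the server-issued challenge is signed by the authenticator, a phishing page on another domain can't replay it, which is exactly what makes this flow phishing-resistant.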
I expect to see this sort of enhanced security become a standard feature across the industry, to more rigorously protect the potentially very highly sensitive dialogues that many people are having with their AI chatbots. You know, once you appreciate, which Claude recently made explicitly clear, that the entire history of your conversation is by default retained for use in creating a conversational context, the importance of more tightly controlling access to it becomes, I think, very clear. Okay. Before we get into our main topic, I did want to update everybody on something that I just discovered about Syncthing and SyncTrayzor. Of course we've spoken often about Syncthing. You know, both Leo and I, and many of our listeners I know, are huge fans. SyncTrayzor, that's S-Y-N-C-T-R-A-Y-Z-O-R, is a terrific little Windows GUI wrapper that turns Syncthing into more of a Windows app, in the words of its creator. He said: SyncTrayzor is a little tray utility for Syncthing on Windows. It hosts and wraps Syncthing, making it behave more like a native Windows application and less like a command-line utility with a web browser interface. Features include: it has a built-in web browser, so you don't need to fire up an external browser; optionally starts on login, so you don't need to set up Syncthing as a service; has a Dropbox-style file download and progress window; the tray icon indicates when synchronization is occurring; alerts you when you have file conflicts, one of your folders is out of sync, folders have finished syncing, and devices connect and disconnect; has a tool to help you resolve file conflicts; can pause devices on metered networks to stop Syncthing transferring data, for example on a mobile connection or a WiFi hotspot; and contains translations for many languages. Anyway, I've been using both of these for years, SyncTrayzor, which contains Syncthing, and I hope to continue doing so. 
As we mentioned, Syncthing can also be installed on a Synology NAS, and I've been using it there for many years, ever since my first Drobo died and I got a Synology to replace it. And at that point, as I said, I switched to Synology. Syncthing works perfectly there as well. I'm mentioning all of this since Syncthing on Windows 10 has been noting that version 2.0.16 has been available for some time. Since I heard from several of our listeners that the major version 2 of Syncthing is fully backward compatible with version 1.3, which was where version 1 left off, and which is where I'm still stuck on my Windows 7 machine because it won't run anything after version 1.3, I decided it was time, you know, to quiet down that new-version-available notice. But when I updated Syncthing, it complained about an unknown command-line switch, meaning that it wasn't familiar with the way it was being launched by SyncTrayzor. The trouble, of course, was that the version of SyncTrayzor I had was also out of date, so I updated it. That's when I learned that SyncTrayzor's creator had abandoned his baby last August, when he archived his GitHub project. At the time he wrote: I stopped using Syncthing some years ago, and I'm afraid I don't have the time to maintain it. Sorry. Germancoding has kindly forked it as SyncTrayzor version 2 and is continuing development, and this fork is recommended by Syncthing. Please switch to SyncTrayzor version 2 after determining that you trust the fork. So I first verified that the Syncthing project does indeed still recommend the use of this forked SyncTrayzor 2. And indeed they do; it is recommended among their contributed software. So I wanted to let everyone who's been happily using Syncthing know that indeed major-version cross-compatibility works. 
I've got Syncthing 2 now in one location on Windows 10, and Syncthing 1.3 still on Windows 7, until I shut down this workstation and consolidate my locations, which I'll be doing over the next couple of months. I ran the new installer for SyncTrayzor 2. It saw that there was an older version installed, offered to upgrade, did that, everything went smoothly, and everything is now working perfectly. So I just wanted to make a note for Windows users using Syncthing: if you don't know about SyncTrayzor, it's a neat little wrapper; and if you do, you're able to update, and everything works great.
Steve Gibson
You don't have to use it, though. I mean, you can use Syncthing directly, of course.
Leo Laporte
Absolutely. You're able to use Syncthing and set it up as a Windows service, and it just works by itself.
Steve Gibson
Let's take a time out before we get to the subject, the browser AI API. You know, there was a story, I don't know if this is related, that came out earlier today, that Chrome is automatically downloading a pretty hefty AI model, and you can't stop it.
Leo Laporte
Not so Nano.
Steve Gibson
Not so Nano. I don't know if that's related. Maybe it is.
Leo Laporte
It was initially 22 gigs. I think they got it down to 4.7. I know, I know, I know.
Steve Gibson
Okay, you kind of have to have Chrome, unfortunately. For instance, I'm using Chrome right now because Restream, which is what we use for this show, works best with Chrome. I'm a Firefox user, but I have to have a copy of Chrome, and I have to have a 4-gigabyte copy of Nano along with it. Oh, well. We'll take a little break; we'll talk about this and more in just a bit. You're watching Security Now with Steve Gibson. But first, this word. This episode of Security Now is brought to you by Hoxhunt. As a security leader, you've been there: the eye rolls during training, the one-size-fits-all phishing simulations that your employees spot from a mile away, and the report button that gets ignored more often than not. Your programs are running, but they aren't changing employee behavior. Meanwhile, AI is making real attacks more convincing by the day, and leadership is starting to ask the question you don't have a clear answer to: is this actually working? Well, Hoxhunt empowers your employees to spot and stop advanced phishing attacks, and drives measurable behavior change through personalized, gamified micro-training powered by AI and behavioral science. And by the way, as an admin, you'll love it. Hoxhunt does all the heavy lifting. Simulations run automatically, not just in email, but in Slack and Teams too. They're personalized to each employee, just like the bad guys do it, based on role, location, and behavior. And every simulation uses AI to mirror real-world attacks, meaning your employees are being tested on what's actually getting through, not some outdated template they recognize immediately. Gamified training keeps the engagement high without feeling punitive. And because every interaction generates a coaching moment, you're not just tracking completion, you're actually building behavioral indicators that tell a real story: reporting rates, repeat-clicker reduction, and time to report, the kind of metrics that hold up when leadership asks you the hard questions. 
But you don't have to take my word for it. With over 3500 verified reviews on G2, Hoxhunt is the top-rated security training platform, recognized for best results and easiest to use. It's also recognized as a Customers' Choice by Gartner, and thousands of companies like Qualcomm, DocuSign, and Nokia trust it to train millions of employees worldwide. Visit hoxhunt.com/securitynow today to learn why modern secure companies are making the switch to Hoxhunt. That's hoxhunt.com/securitynow. Thank them so much for supporting Steve and the work he's doing. And now back to the work he's doing. Actually, back to our discussion of Chrome and AI on Security Now. Steve?
Leo Laporte
Yeah. So it turns out, and this is actually exactly on point for this Nano LLM, that Google is planning to define a new API to bring AI into our browsers. This would serve as an interface to large language models existing outside the browser or brought in by the browser. Google appears to be mostly targeting local LLMs, but support for cloud-based LLMs is present too. So, just to make this clear, this would be a means for allowing web pages or browser extensions to invoke a user's local or remote large language models for many purposes, such as locally reading and summarizing a web page's content, proofreading a web page document being edited, or reading through someone's webmail to produce summaries or take actions. In other words, it would create a JavaScript large language model prompting interface. Now, not everyone thinks this is a good idea, and many of those not-everyones include end users who feel uncomfortable with this creeping trend toward AI-ifying everything. An early example of this, which we covered at the time, was Vivaldi's CEO Jon von Tetzchner, who said: We don't see AI as something that our users are asking for. Rather the opposite. I think a lot of people are reacting to force-fed AI. Jon cited, as a no-thanks example, Microsoft's Recall compiling a long-term history of everyone's desktop screenshots every five seconds. Giving Recall the label of AI now seems sort of quaint in today's world. We've come a long way in a short time. Von Tetzchner said that, quote, the future of browsers is about who controls the pathway to information and who gets to monetize you, unquote, which frames the race to insert AI into our browsers as a power grab more than as a feature competition. So the thing that put this on my radar last week was seeing that Vivaldi's Jon von Tetzchner has some company, notably Mozilla. In a posting to Bluesky last Thursday, April 30, Mozilla's Jake Archibald wrote: 
Chrome looks set to ship an LLM prompt API to the web platform. At Mozilla, we oppose this API. We feel it has a large interoperability risk, and Google imposing terms and conditions on a web API sets a dangerous precedent. Okay, now Leo, listen to this. Before I go any further, I want to touch on those terms and conditions, since that alone is a deal-breaker for me. Last week, in a thread in Mozilla's GitHub account, Jake wrote: According to Chrome's documentation, to use the Prompt API you must acknowledge Google's Generative AI Prohibited Uses Policy. Elements of this policy go beyond law. For example: do not engage in generating or distributing sexually explicit content; do not engage in misinformation, misrepresentation, or misleading activities, which includes facilitating misleading claims related to governmental or democratic processes. So here we have a proposed web browser API that implicitly contains an acceptable-use policy. This would be like a web browser refusing to display controversial four-letter words on the grounds that someone might be upset by what a website might wish to have their browser display. Hearing this causes me to want to select a couple of four-letter words myself. This is so wrong.
Steve Gibson
Yeah. Now this is the system prompt though for the AI, right?
Leo Laporte
No, there is a system prompt for the AI, which is part of the API. But this is saying that the use of the Prompt API by JavaScript running in the browser must acknowledge this acceptable-uses policy.
Steve Gibson
Because it sounds more like the kind of thing you tell an AI not to do.
Leo Laporte
No, no, understandable. This is for developers. Yes, for developers. So I just thank God we have respected developers at Mozilla to push back, and I hope this also comes to the attention of the EFF, because this seems wrong. Okay. So to obtain some pro-and-con balance here, let's first look more closely at this new so-called Prompt API, that's the name they're giving it, which Google has already implemented and moved into Chrome. It's already in Chrome, which is why, Leo, you noted that this multi-gig download is happening: they're also downloading a model, their so-called Nano model. The explainer for this nascent feature says, and this is now Google speaking: This explainer and the accompanying draft report are in active development by the Web Machine Learning Community Group. Community group members are seeking feedback, and they're getting some, and support for this proposal to gain working group and implementer adoption. Implementations are experimentally available in the Google Chrome and Microsoft Edge browsers. In order to set the context here: browsers and operating systems, they write, are increasingly expected to gain access to language models. Okay, I didn't know that, but okay. Language models are known for their versatility. With enough creative prompting, they can help accomplish tasks as diverse as, and we have some bullet points here: classification, tagging, and keyword extraction of arbitrary text; helping users compose text such as blog posts, reviews, or biographies; summarizing, for example, articles, user reviews, or chat logs; generating titles or headlines from article contents; answering questions based on the unstructured contents of a web page; translation between languages; and proofreading. In other words, all of the things. I mean, like, AI in your browser. The things that Vivaldi said. 
I don't know if we want to jump into all of that just yet. They said: The Google Chrome, Microsoft Edge, and the Web Machine Learning Community Group are exploring purpose-built APIs for some of these use cases, namely translator, language detector, summarizer, writer, rewriter, and proofreader. This proposal additionally explores a general-purpose Prompt API that allows web developers to prompt a language model directly. This gives web developers access to many more capabilities, at the cost of requiring them to do their own prompt engineering. Currently, web developers wishing to use language models must either call out to cloud APIs or bring their own and run them using technologies like WebAssembly or WebGPU, usually through JavaScript runtime frameworks. By providing web platform API access to the browser or operating system's existing language model, we can provide the following benefits compared to cloud APIs: local processing of sensitive data, for example allowing websites to combine AI features with end-to-end encryption; potentially faster results, since there's no server round trip involved; offline usage; lower API costs for web developers; and allowing hybrid approaches, such as free users of a website using on-device AI, whereas paid users use a more powerful API-based model. Okay, now I'll just interrupt here to note that, to me, those feel like somewhat made-up reasons. You know, local processing of sensitive data, for example allowing websites to combine AI features with end-to-end encryption? I get the local processing angle; that's potentially valid. But the end-to-end encryption part makes little sense to me in this context. We already have TLS connections with all websites, and we have decades of history and experience with making TLS privacy and security bulletproof. Then there's potentially faster results, since there's no server round trip involved. 
Okay, so the assumption here is that a local, potentially underpowered LLM is going to outperform an LLM in these monster data centers that are being frantically built today? Everything I'm seeing says that the cloud blows away local LLMs. And so on for the remaining three benefits. You know, our browsers already have the ability to query cloud-based LLMs using the tried-and-true XMLHttpRequest API, which has been around forever, or the more recent Fetch API, and both of those offer state-of-the-art, mature security and privacy protections. So what really appears to be going on here is for Google to be engineering a means for their Chrome and other Chromium-based browsers, notably Edge from Microsoft, to access non-cloud-based LLMs, since everyone can already access cloud-based ones. Their explainer continues: Compared to developer-supplied model approaches, using a built-in language model can save the user's bandwidth, storage, and memory resources, while using a model that's optimized for the device. This pattern could also provide a lower barrier to entry for web developers by removing the need for developers to serve models and manage their dependencies. Okay, now, I'm not sure that makes sense to me. Again, this presumes that any and all large language models are identical and interchangeable, and that the web developer doesn't care which one they're interacting with; they're just using a generic LLM that the user has provided to their browser. You know, today that's already not the case. I mean, it's already not the case that all LLMs are identical and interchangeable. And I expect model design and capability to diverge more as we move into the future rather than converge. Of course, we'll see how that goes. 
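To make that point concrete: a page can already reach a cloud-hosted LLM over TLS with nothing more than the standard Fetch API. In this sketch the endpoint URL and request-body shape are hypothetical placeholders, not any vendor's real API; the point is that no new browser machinery is required.

```javascript
// Hypothetical sketch: calling a cloud-hosted LLM with the standard Fetch
// API. The URL and body shape are placeholders, not a real vendor's API.
function buildLlmRequest(promptText) {
  return new Request('https://llm.example.com/v1/complete', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt: promptText }),
  });
}

// In a page, sending it is one ordinary TLS-protected round trip:
//   const response = await fetch(buildLlmRequest('Summarize this page.'));
//   const result = await response.json();
```

The TLS connection underneath fetch already provides the transport privacy and integrity that the explainer's "end-to-end encryption" benefit gestures at.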
So next, Google's explainer clearly states its goals: Our goals are to provide web developers a uniform JavaScript API for accessing browser-provided language models of varying capabilities; encapsulate model management and execution details as much as possible, for example downloads, updates, templating, and parsing; guide web developers to gracefully handle failure cases, for example no browser-provided model being available (I guess by always having one); and develop formal implementation guidelines and definitions, for example initial on-device models and possible cloud services. The following, they said, are explicit non-goals: We do not intend to force every browser to ship or expose a language model. In particular, not all devices will be capable of storing or running one. It would be conforming (not comforting, conforming) to implement this API by always signaling that no language model is available. In other words, that's acceptable. It may also be viable to implement this API entirely by using cloud services instead of on-device models. We do not intend to provide guarantees of language model quality, stability, or interoperability between browsers. In particular, we cannot guarantee that the models exposed by these APIs are particularly good at any given use case. These are left as quality-of-implementation issues, similar to the Shape Detection API. The following are potential goals we're not yet certain of: Allow web developers to know or control whether large language model interactions are done on-device or by using cloud services. This would allow them to guarantee that any user data they feed into this API does not leave the device, which can be important for privacy purposes. Similarly, we might want to allow developers to request on-device-only language models in case a browser offers both varieties. Allow web developers to know some identifier for the language model in use, separate from the browser version. 
This would allow them to allow-list or block-list specific models to maintain a desired level of quality, or restrict certain use cases to a specific model. And finally, they said, both of these potential goals could pose challenges to interoperability, so we want to investigate more how important such functionality is to developers, to find the right trade-off. So, in other words: we in the world are not yet necessarily ready for this, or in need of this, so we're unsure how it should work exactly, but we're going to charge ahead because this will be better than nothing. You know, essentially, what this comes down to when you strip it away, and as you started with this, Leo, is that Google wants to add a massive language model, 4.7 gigabytes is the number I saw, down from the 22 it was earlier, to Chrome, so that Chrome will become AI-enabled intrinsically. And that would allow Chrome-hosted web pages to do lots of things they can't now. So, okay. Today's web browsers are littered with yesterday's great ideas that, while they may have never achieved critical mass, must still be present and supported, since random websites scattered around the world still use them. As one example, it may not be fair to single out Flash, since it did have its day. There was a time when you could only do certain things with Flash, things you wished you could do in the browser, because JavaScript and scripting in general had not caught up. But boy, was Flash difficult to kill off, and in some places, even today, it won't die. As I look over the Prompt API implementation section, I can empathize with Mozilla's gut reaction, since this does seem sort of, well, both obvious, but also forced and a bit unnatural. For example, this API defines a specific system prompt, as they call it. 
The specification says the language model can be configured with a special system prompt, which gives it the context for future interactions. The system prompt must be the first message, whether passed via the initialPrompts option to the create function or as the first message to the first prompt or append method calls. We then see three examples of these various semantic options they just described. The first one shows a constant, session one, set to the output of the LanguageModel.create function, where the initialPrompts system prompt is "Pretend to be an eloquent hamster." Then we log to the console the output of that just-created language model being prompted "What's your favorite food?" So of course an eloquent hamster is going to respond to the question, what's your favorite food? Lettuce, I guess. I think that's what.
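[The "eloquent hamster" example being described looks roughly like the following sketch, based on Google's Prompt API explainer. The `LanguageModel` global, `create`, `initialPrompts`, and `prompt` names follow the draft explainer and may change; the availability guard is an assumption added so the code degrades gracefully where the API doesn't exist.]

```javascript
// Sketch of the Prompt API "system prompt" usage from the explainer.
// LanguageModel is a browser-provided global in Chrome's draft API;
// in environments without it (Node, Firefox) we fall back gracefully.
async function askEloquentHamster() {
  if (typeof LanguageModel === "undefined") {
    return "Prompt API not available";
  }
  // The system prompt must be the first message, passed via initialPrompts.
  const session1 = await LanguageModel.create({
    initialPrompts: [
      { role: "system", content: "Pretend to be an eloquent hamster." },
    ],
  });
  // The session now answers every prompt in character.
  return session1.prompt("What is your favorite food?");
}
```

Note that because the model is non-deterministic, two calls to the same session can return different answers — which is part of Mozilla's complaint later in the discussion.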
Steve Gibson
I don't know.
Leo Laporte
I think that's what hamsters like.
Steve Gibson
Anyway. It's an eloquent hamster, which is a different matter entirely.
Leo Laporte
Right. It might be lettuce with caviar. Yes. Anyway, my reaction to all of this is that web standards are too important to be created in any half-baked fashion, and Mozilla apparently feels that it's too soon to do this. Once a web standard exists — and we've seen this over and over — it is incredibly difficult to deprecate, since, as we saw with Flash, someone somewhere will be using it. Browser bloat, and the security implications of that, are very real problems.
Steve Gibson
Google has never held back, though, right, from unilaterally declaring web standards. They say, well, you know, we're the dominant browser, we can do whatever we want. I understand Mozilla's reluctance to go along for the ride, and I think people are not going to be happy about 4.7 gigabytes being downloaded to their hard drive.
Leo Laporte
It's really going to change the, the whole complexion of Chrome.
Steve Gibson
Yeah. So it becomes massive. I can understand why Google may say, oh, well, maybe for spell checking or local grammar or something, you know, developers might find a use for this. But I think Mozilla is right. This is premature. There's no reason to be doing this now. There's no demand for this now, I don't think, is there?
Leo Laporte
No. And, I mean, I guess what they recognize is that you can do this in the cloud now. Browser pages are already able to reach back out to the cloud and talk to a large language model; that's going on today. They're saying, well, but we've got this cool technology, we've managed to squeeze a large language model down to 4.7 gig. We want it in the browser because we can. Because we own the browser.
Steve Gibson
Right, Right. And we might imagine down the road some use. Yes. It's hard for me to imagine what that use is.
Leo Laporte
But yeah, I agree, that would justify this. So, Google's working specification goes on and on and on, and it's all extremely specific to the application of today's LLMs. They are creating something as important as an industry-wide specification for what could just be the moment we're in today. To me, that's the problem: none of this has gelled yet. It is still a moving target. So the idea of API-ing it to create a web standard seems premature and misguided. Anyway, I've dropped the URL of Google's full specification into the show notes; it's at the top of page 20 for anyone who may be interested in following up. I want to now switch to Mozilla's response. I have the rather dry conversation thread in Mozilla's GitHub account, under their standards positions, so I've dropped that URL into the notes also. But since this podcast endeavors not only to inform but also to entertain our listeners, rather than sharing Mozilla's dry recitation, I want to share the Register's typically feisty and irreverent take on this controversy. Leo, let's take our final break. We're going to look at the flip side of what's going on.
Steve Gibson
I can only imagine what the Reg has to say about this. I'm trying to give Chrome the benefit of the doubt. But this is the problem — this has been my problem with Google for a while now. They don't go to the IETF or W3C and say, here, we want to do a standard, let's get everybody involved.
Leo Laporte
It's already in there.
Steve Gibson
Yeah, they're so big, they're so dominant — something like 90% of the browser space — that they can just do it and it becomes a de facto standard. So I'm with you. I'm not necessarily against the idea. And it sounds like in their spec they're saying, well, it doesn't have to be our model, it doesn't have to be Gemma, it could be something else.
Leo Laporte
But I, I don't, I don't know
Steve Gibson
if there's a demand for this, and I know people are going to be very upset. I already see the upset over this giant download, and you don't get a choice. You can't turn it off. Right. It comes with Chrome now. All right, well, let's take our final break, and we'll be back with Mozilla's response as seen through the filter of the Register. You're watching Security Now with Steve Gibson. More in a moment. This episode of Security Now brought to you by Trusted Tech. If you're managing Microsoft 365 for your company, you are responsible for both the cost and whether it's set up correctly. And I think you might already know: on July 1st, Microsoft's raising prices, so any mistakes in your licensing are about to get more expensive. Most companies using Microsoft 365 are either over-licensed, paying for unused seats and features, or under-licensed, creating compliance and security risks. Sometimes it's both. The result is wasting thousands, sometimes tens of thousands per year on tools your team doesn't use, or worse, missing critical security features you thought you had. You've got to get this just right, and that's why you need Trusted Tech. Trusted Tech helps businesses understand what they have, what they actually need, and how to lock in the right setup now, before the costs go up. Their team ensures your M365 environment is well supported and aligned with how your business actually operates. And if you need ongoing help, they also offer reactive support for your Microsoft environment through their certified support services. Microsoft licensing is, I don't think I have to tell you, constantly changing: E3 versus E5 versus Business Premium, there are add-ons, the new E7. It's confusing and easy to misconfigure and overpay, and licensing mistakes don't just cost money, they create compliance exposure that's going to get more expensive after July 1st. So even if you think your licensing is dialed in, it's absolutely worth a second look.
And no one could do it better than Trusted Tech. Just ask Kevin Turner — you know his name, former Microsoft COO. This is what he said when he talked to Trusted Tech: "You have an incredible customer reputation, and you have to earn that every single day. The relentless focus you guys have on taking care of customers gives them value and differentiates you in the marketplace." End quote. He was impressed. You will be too. Now remember, after July 1st you're stuck paying more. This is the last chance to fix your licensing before costs go up. Trusted Tech is offering a free Microsoft 365 licensing consultation right now. Visit trustedtech.team/securitynow365 to get a clear, data-backed view of your current licenses, what you're wasting, and how to lock in savings before the price increases. Go to trustedtech.team/securitynow365 and submit a form to get in contact with Trusted Tech's Microsoft licensing engineers. I'm going to say it one more time — write this down: trustedtech.team/securitynow365. You've got to do it now. Now back to Steve and back to our conversation. Actually, Darren Okey, who is, of course, as you know, one of our most avid AI users in the Club TWiT Discord, says he thinks this may be the most important thing to happen to browsers since AI. He thinks it's really important. I'm not sure I agree. I mean, I can see there's some potential.
Leo Laporte
It's a huge change to our browser.
Steve Gibson
It is a big change. Nobody disagrees about that. And I think it's also the case that Google is forcing this instead of proposing it, and I don't like that either. But then, I didn't like it when they forced HTTPS down our throats.
Leo Laporte
Well, and you don't always get the right design when one person does it. That's why so much of what is done correctly is a collaboration. And, you know, even though Firefox is a diminishing percentage of the desktop space, Mozilla as a company has been at the forefront of all of the standards work forever.
Steve Gibson
Yeah, we talked to the CEO of the Mozilla Foundation a few weeks ago on Intelligent Machines, and even then — this was before they'd added that little switch — he said, we're going to be very judicious about AI in our browsers. And now they actually have a switch that says disable all AI features. That is a switch that, most notably, Google is not offering in Chrome. You cannot disable this.
Leo Laporte
Okay, so I'm going to share the Register's typically feisty and irreverent take on this controversy. They also supply a great deal of additional useful background, and when we see that their headline is, quote, "Firefox maker torches Google for building Prompt API into browser," well, you know it's going to be good. The Register wrote: Jake Archibald, Mozilla's web developer relations lead, articulated the organization's concerns in a GitHub discussion of the API, which provides a standard way to send and receive prompts and responses from a local machine learning model. Archibald wrote, quote, "We continue to oppose this API and feel it has severe negative consequences to the interoperability, updatability and neutrality of the web platform," unquote. The Register writes: The Prompt API, as Google describes it, quote, "gives web pages the ability to directly prompt a browser-provided language model." Specifically — and here it comes — it provides a way to send natural language instructions to Google's Gemini Nano model, which is small enough to be downloaded for local inference through Chrome. However, writes the Register, it's not small. Google recommends having 22 gigabytes of space available, although the Nano v3 model for desktop use is 4.27 gigabytes. Web developers already have a variety of ways to interact with AI models. They can use cloud service APIs to communicate with hosted models, or they can access local models through technologies like JavaScript runtime frameworks, WebAssembly, or WebGPU. Various vendors like OpenAI and Perplexity have shipped browsers that embed access to remotely hosted AI models. Mozilla itself is testing an AI-based smart window in Firefox and is developing tools for AI model scaffolding.
The Prompt API aims to make it easier to run local inference in a way that takes advantage of browser security mechanisms, to produce faster response times, to allow offline usage, and to provide more cost-effective ways to integrate AI services — for example, providing a free AI fallback if users lack a paid API key. Okay, so that's interesting. That suggests that Google wants us to register our LLM AI provider accounts with our browser, so that random websites we visit will be able to submit their prompts to our AI account. This brings to mind the famous rhetorical question: what could possibly go wrong? The Register continues: Mozilla's concern, as articulated by Archibald, has to do with what the Prompt API means for the web, not to mention Google's justification for deployment. First, he worries that Google's own Nano model will become the default, and that developers will standardize on it in an effort to make the non-deterministic responses of an AI model more predictable. That tendency, he argues, will create pressure for Apple and Mozilla to license Nano for the sake of a common user experience. Perhaps more significantly, Archibald notes that using the Prompt API requires agreeing to Google's Generative AI Prohibited Uses policy, which prohibits activities that are not necessarily illegal, like generating disturbing content. I'll just pause to say: who determines what content is disturbing? There is nothing that attorneys love more than ambiguous language in contractual agreements. It's a built-in full employment guarantee. The Register quotes Jake saying, "This seems like a bad direction for an API on the web platform and sets a worrying precedent for more APIs that have browser-specific rules around their usage." Amen to that. Anyway, the Register continues: Finally, Archibald argues that Google misrepresented demand for the API by cherry-picking a few social media posts and calling that a groundswell of developer support.
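[The "free AI fallback" idea the Register mentions — use the local model when present, otherwise a hosted service — might look like this sketch. The `LanguageModel.availability()` call follows the draft explainer's shape, and the cloud endpoint URL and response format here are purely hypothetical placeholders, not anything Google or the Register specifies.]

```javascript
// Sketch: prefer the browser-provided local model, and fall back to a
// hypothetical cloud endpoint when no local model is available.
async function getSummarizer() {
  if (typeof LanguageModel !== "undefined" &&
      (await LanguageModel.availability()) === "available") {
    const session = await LanguageModel.create();
    // Local path: free, offline-capable, data never leaves the device.
    return (text) => session.prompt(`Summarize briefly: ${text}`);
  }
  // Fallback path: a made-up hosted endpoint the site would pay for.
  return async (text) => {
    const res = await fetch("https://example.invalid/summarize", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ text }),
    });
    return (await res.json()).summary;
  };
}
```

The design tension Steve raises is visible right here: the two branches are talking to different models, so a prompt tuned for one may behave quite differently on the other.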
Jake posted, quote, "The intent to ship on blink-dev states web developers as strongly positive and links to the explainer for evidence. The evidence provided there does not seem to fit the claim," unquote. In an email, Archibald told the Register that the question is whether the Prompt API is good for the web, and Mozilla doesn't believe it is. Jake said, quote, "The core problem is interoperability. Prompts are tightly coupled to models. Developers will inevitably tune to the quirks and policies of whatever model they're building against. That's how you end up with model-specific code paths, which is the browser compatibility problem all over again. The terms-and-conditions issue is part of that. If using a web API means accepting a specific vendor's content policy, especially one that goes beyond the law, you're not really building for an open platform anymore," unquote. Just to pause: what he means is, remember those days when JavaScript had to determine which browser it was in, and then would do this code for IE, that code for Firefox, this code for Safari, and that code for Chrome? Those were not good days. Anyway, the Register says, with regard to Google's exaggeration of developer enthusiasm, Archibald said there are definitely devs interested in AI capabilities, but Google failed to provide evidence of that. The signal is polarized, not strongly positive. But either way, developer demand alone does not meet the bar. The question is whether the API can work across implementations without tying the platform to one vendor's model. Google did not immediately respond to a request for comment. However, on Thursday, Rick Byers, the Google Chrome engineer responsible for shipping the Prompt API, chimed into the GitHub discussion to acknowledge the concerns articulated by Archibald.
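[The "model-specific code paths" Archibald warns about would look a lot like the old user-agent-sniffing pattern. The model identifiers below are made up purely to illustrate the anti-pattern; nothing in the spec defines these names.]

```javascript
// Anti-pattern sketch: tuning prompts per model identifier, the way
// sites once branched on navigator.userAgent. Model names are invented.
function buildPrompt(modelId, task) {
  if (modelId.startsWith("gemini-nano")) {
    return `Be terse. ${task}`; // wording tuned to one vendor's quirks
  }
  if (modelId.startsWith("phi-mini")) {
    return `Instruction: ${task}\nKeep the answer short.`;
  }
  return task; // everyone else gets the untuned, possibly worse, path
}
```

This is exactly why Mozilla worries that exposing a model identifier, one of Google's stated "potential goals," could recreate the browser-compatibility mess at the model level.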
To his credit, he wrote, quote, "As one of the Blink API owner approvers for shipping this in Chromium, I admit that I share the concerns here in Mozilla's standards position. Where I differ is in preferring paths that promote experimentation, learning from mistakes, and competition, to those which err on the side of stalling innovation out of fear of what might happen," unquote. Right? That's a perfectly articulated response to the more cautious "we should wait a bit to see what happens" stance. The Register concludes their piece by writing: Byers asked the web community to help collect evidence of harm to advance the discussion, pointing to the debate over other controversial web technologies like Encrypted Media Extensions — remember EME? He suggested the outcome has not been as dire as was predicted. But focusing on data so far has not done much for Google's cause. According to a report created in February that compares the performance of Chrome with Gemini Nano and Edge with Phi-4-mini-instruct using the Prompt API, these models do not provide very good results. The report says, quote, "For generative tasks — composition, tag generation, etc. — 24.29% of Edge's and 15.17% of Chrome's responses failed to complete the task at all." This is in reference to a rubric that defines failure as a score of 2 or less on a scale of 1 to 5. For classification tasks, 29.58% of Edge's and 23.93% of Chrome's responses did not label or categorize input correctly. So it's often also just wrong. They finish with the report's conclusions, noting that in terms of groundedness and accuracy, Edge failed — which is to say hallucinated — 17% of the time, while Chrome failed 6% of the time. Is that good for the web? You could ask Chrome, but you might not get a reliable answer. And that's how the Register signs off. So, burn. So where does this leave us? I guess it leaves me happier than ever that I've stuck with Mozilla.
I look at what Google now presents us on a page of search results, and it becomes clear that we are the product. I search for something specific and, instead of what I asked for, I get sponsored interception advertisements that are promoted to the top of the page and presented before the result I'm seeking. Then I need to wade down past a bunch of YouTube video links that I have zero interest in. Okay, now, in fairness, Google's not alone in doing this. Apple has similarly succumbed in their App Store. The thing I'm looking for is never first any longer, even when I search for it by name and spell it correctly. What's first is what someone paid them to show me first, in the hope that I wouldn't notice or wasn't sure what it was that I wanted. And on the Google side, in return for tolerating a bunch of advertising, we do receive a ton of services at no charge. I author these show notes every week in Google Docs for free, and the catch-all junk email account I maintain over at Gmail is similarly valuable. All of that means a lot. So thank you, Google. But all of that seems fundamentally different to me from intermixing the design and establishment of crucial web standards with a single company's commercial interests. Yes, Google has succeeded in leveraging their position as the winner of Internet search into being the winner of the web browser wars. I get it. As I use the Internet daily, I am more or less continually being offered the opportunity to improve my life in one way or another by switching to Chrome. I constantly need to decline. Most people have given up declining, and they're perfectly happy using Chrome, whether or not their lives are any better for it. And that's great. But tremendous responsibility burdens Google's dominance with Chrome. They need somebody knowledgeable to push back and to question their actions, if for no other reason than to help them make the best choices.
So I'm very pleased that we have Mozilla watching and actively participating. Google may, and likely will, still plow ahead, forcing Mozilla and Apple to either keep up or get left behind and become irrelevant. But everyone will likely get a better browser — whether that's Chrome, Edge, Safari, Brave, Vivaldi or Firefox — if this is a collaborative effort. And Leo, the thing that I think was most significant here is the observation Archibald made that LLMs are inherently non-deterministic. Every time you ask a question, you get a different answer. And so we're now talking about having the browser interface to one vendor's solution which has a random number generator at its heart — and not a very good one, either. It's got some temperature setting.
Steve Gibson
Right.
Leo Laporte
And apparently they had to sacrifice a lot of reliability in order to get the size down to something heavily quantized that was tolerable. They wanted it to be 22 gigs, and people said, F off, I am not putting that in my — you know, is that mass storage? Is that RAM? Where does that 22 gigs live?
Steve Gibson
Somewhere you don't want it to live, probably.
Leo Laporte
And so they've had to squeeze it down in order to make it acceptable, and in the process it's lost its reliability. So really, if we want to be able to surf the web with any browser we choose, and if web pages that we download are going to start wanting to use local large language models, whose large language model will it be? And they aren't interchangeable. We know they're not interchangeable.
Steve Gibson
Right. Well, and you know, Darren, who loves AI, said, well, I can imagine some uses. For instance, it's hard to write software that detects misspellings, but the AI could quickly detect a misspelling as somebody's entering it and correct it. So there is that convenience. But I also think that this is Google bigfooting the whole process. It's part of the enshittification of Google. They don't feel any responsibility to anything at this point except their stakeholders, to make more money. And clearly, this is about dominating the browser space and putting everybody else out of business. The other thing that worries me, as an AI fan — and I know you're an AI fan too — is that the more we force AI down the throats of unwilling users, the more they're going to hate it.
Leo Laporte
We.
Steve Gibson
Google's found that out. Microsoft's found that out.
Leo Laporte
Annoying chat box in the lower right corner of your screen. Well, that's going to end up running locally. And it's like, so I don't want
Steve Gibson
to turn people against AI. AI is a real value, but by forcing it down people's throats like this, you're actually making enemies. And I don't think that's good either for Google or for AI in general. So, yeah, I have lots of problems. We'll talk about this. I'm looking forward to a conversation tomorrow.
Leo Laporte
One thought would be unbundling the LLM from the browser that is creating an interface, but not having it like secret. I mean, it's essentially secret. Right now. I mean I get it. That's the way to minimize friction so that everybody has it because Google wants everybody to have it.
Steve Gibson
But it's not a very well-kept secret, agreed. I should point out that one of the reasons Google thinks this is okay is because they're already doing this on Android, as is Apple on iOS. There are built-in local models on both those systems. Apple touts this all the time: your data stays local on the device; Apple Intelligence is a local model. So there is a precedent for this on those platforms. I still wish — maybe it's a futile wish — that the web would be a standards-based interface, and that everybody should be able to choose the browser of their choice, and they should all work.
Leo Laporte
Well, the only one who doesn't want it to be a standard is the big guy.
Steve Gibson
Is Google the winner? Yeah. You're not going to see Vivaldi saying, well, we think the standard should favor us. They can't. Nor could Mozilla. But Google can, and clearly they do. I agree with you. I think this is — you know, we saw Google back way down on a number of its proposals.
Leo Laporte
True. The whole anti-tracking technology they had — they tried several times, but they got real pushback. But they got pushback from people who had invested, I mean, like advertisers. Yes, exactly. Large commercial interests. And there's no one to push back on this.
Steve Gibson
Well just remember as users, maybe as individuals we don't have much power but collectively we do. They still need us to use their darn browsers.
Leo Laporte
But would somebody leave Chrome to go to Firefox?
Steve Gibson
Well, you and I have. Yes, you and I have, and this is one of the reasons. And fortunately, so far, you can mostly use the Internet with a Firefox-based browser. Mostly. Widevine is another example — DRM in the browser.
Leo Laporte
Yeah, the open table site I use for restaurant reservations and it doesn't work under Firefox.
Steve Gibson
It's Chrome. As I said, Restream needs Chrome because of WebRTC and the WebRTC implementation it uses. I think this is an object lesson. This is what happens. And if you want Chrome and Google to be the only player on the Internet, this is how these things happen. I think we can fight. We've got to fight. Hey, great topic, great show, as usual. Thanks to all of you for being here. We do Security Now Tuesdays. As I mentioned, Steve's got copies of his show, if you can't watch live, at his website, GRC.com, which actually has a lot of good stuff. The show is there, including unique versions only Steve has: a 16-kilobit audio version and a 64-kilobit audio version. Those are nice small versions for people who just want the audio and don't want a lot of bandwidth — they don't want 4.7 gigabytes of AI downloaded with every single episode. He also has very nice transcripts, written by Elaine Farris. She's going to put that up a couple of days from now; it takes a while, since she actually has to physically type it. He also has the show notes there — 20 pages, 21 pages this week, 22 pages of goodness. Now, you can get those show notes ahead of time; he's been sending them out on Sunday lately. If you go to grc.com/email and put in your email address there, it has two benefits. One, you're now whitelisted — he will validate that you're not a spammer — and that allows you to comment, send him questions, and submit photos. Oh, there goes another one of those helicopters. My wife might even be on it. Send in your photos for the Picture of the Week, that kind of thing. But below that email form there are two checkboxes. One is for the weekly newsletter mailing; you can sign up for that and you'll get it automatically every Sunday or Monday. The other is for a much less frequent email when he's got a new product, something like SpinRite. I don't know, 7 could be on the horizon.
We're at 6.1 of the world's best mass storage maintenance, recovery and performance-enhancing utility. He also has that fabulous DNS Benchmark Pro, which he has been updating; a new update, I think, is coming soon. Anyway, that's how you'd find out about those things. Do get SpinRite. That's Steve's bread and butter. You'll find that at GRC too. And if you have mass storage, you need SpinRite. If you don't have it, get it now, for sure. Also lots of other free stuff — Steve's kind of prolific, all hand-coded in assembly language. Steve, I don't think you're ever going to use vibe coding tools somehow, you know.
Leo Laporte
I think my first exposure may be creating some homegrown iOS things.
Steve Gibson
Ah, just for yourself.
Leo Laporte
Not something, yes. It's for home automation and just for, like, monitoring GRC servers. And I'm thinking, you know, not commercial products, but just for my own purposes. I think it'd be fun.
Steve Gibson
Looking at my GitHub repo and my projects, I have 22 or 23 little things like that that I vibe-coded that are incredibly useful. It's scanning my email now, it's preparing my calendar, it's doing all sorts of great stuff. But there is a big difference between putting that on your own hardware and using it for yourself, and giving it to the public. And Steve knows that's a much higher calling, a higher responsibility. We have copies of the show at our website, twit.tv/sn. We have 128-kilobit audio, which, amazingly enough, does not sound any better than the 64-kilobit audio, but it's there because Apple downsamples and we need it to be bigger, etc., etc. It's a long story; I don't even want to bore you with it. We also have video, which Steve refused to do — he fought us every step of the way — but we've been offering video for some time now on Security Now. You can get that at twit.tv/sn. There is a YouTube channel for the video. More importantly, you can subscribe in your favorite podcast client and get it automatically, so you don't even have to think about it. Every Tuesday you'll get a new Security Now, like magic. We will be back next Tuesday. I will be back home, I'm sad to say. I apologize for any stray noises coming in from my lanai.
Leo Laporte
But it's been very nice to have a little tropical bird tweeting background.
Steve Gibson
It's in the 80s, the breezes are blowing. It's just beautiful. It's really, it's really lovely here. Thanks Steve. Have a great week. We'll see you next time on Security now.
Leo Laporte
Righto. Hi there.
Steve Gibson
Leo Laporte here. I just wanted to let you know about some of the other shows we do on this network that you probably already know about. This Week in Tech: every Sunday I bring together some of the top journalists in the tech field to talk about the tech stories. It's a wonderful chance for you to keep up on what's going on with tech, plus be entertained by some very bright and fun minds. I hope you'll tune in every single Sunday for This Week in Tech. Just go to your favorite podcast client and subscribe. This Week in Tech, from the TWiT network. Thank you. You can't reason with the sun — trust us, we've tried. This summer, it's time to put that angry ball of fire on mute. Columbia's Omni-Shade technology is engineered to protect you from the sun's harsh rays that can burn and damage your skin. The sun is relentless, but so is our gear. Level up your summer at Columbia.com to spend more time outside and less time slathering on aloe lotion. You're welcome. Columbia: engineered for whatever. Ryan Reynolds here from Mint Mobile. I don't know if you knew this,
Leo Laporte
but anyone can get the same premium wireless for $15-a-month plan that I've been enjoying.
Steve Gibson
It's not just for celebrities.
Leo Laporte
So do like I did and have one of your assistant's assistants switch you to Mint Mobile today.
Steve Gibson
I'm told it's super easy to do at mintmobile.com
Leo Laporte
/switch. Upfront payment of $45 for three-month plan, equivalent to $15 per month, required. Intro rate for first three months only, then full-price plan options available. Taxes and fees extra. See full terms at mintmobile.com. Some follow the noise. Bloomberg follows the money. Whether it's the funds fueling AI or crypto's trillion-dollar swings, there's a money side to every story. Get the money side of the story. Subscribe now at Bloomberg.
Date: May 6, 2026
Host: Steve Gibson
Co-host: Leo Laporte
Theme: The collision of AI, browser APIs, and the future of security—in Chrome, open source software, and industry bug bounties.
This week’s Security Now dives deep into Google’s controversial move to embed a local AI model in Chrome and define a web-standard AI prompt API, sparking immediate pushback from Mozilla. Steve Gibson dissects the technical, security, and philosophical implications—not just for browser users, but for the entire open internet. The show also covers AI’s impact on vulnerability research, a hilarious real-world security goof by hackers using AI, major Linux privilege escalations, the status of the U.K.'s and U.S.'s government security response, and new approaches to account security from OpenAI.
"This is so wrong… This is the system prompt though for the AI, right?"
— Steve Gibson (1:47:43)
"People are not going to be happy about 4.7 gigabytes being downloaded to their hard drive. It's really going to change the whole complexion of Chrome."
— Leo Laporte (2:03:14)
"It's a great example of the danger of using AI without being a domain expert... It’ll give you what you ask for, but you need to know what that is."
— Steve Gibson (11:17)
"More convincing crap is worse than obvious crap. You can't dismiss it quickly... at scale, this stops feeling like a helpful external contribution model and starts to resemble something closer to a denial of service attack on the people responsible for security."
— Daniel Stenberg, cURL, quoted by Steve Gibson (1:05:57)
"My prediction is… the 31 years of bug bounty will wind down… Bugs are now so much easier to discover… AI is making human altruism sufficient."
— Steve Gibson (1:11:15)
“This is really… politics should not intrude into this at all. And unfortunately very much has.”
— Steve Gibson (41:00)
“I cannot imagine why any organization whose software might contain external exploitable vulnerabilities… would not be jumping on this with all possible speed.”
— Steve Gibson (1:16:00)
A JavaScript interface for web pages (and extensions) to invoke local (or remote) LLMs for a variety of tasks: summarization, proofing, rewriting, etc.
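To make that concrete, here is a minimal sketch of what calling such an in-browser prompt API could look like from page JavaScript. The API is experimental and its exact surface is still in flux (earlier Chrome previews exposed it under `window.ai`, later builds as a global `LanguageModel`), so the names, options, and availability check below are assumptions for illustration, not a definitive reference.

```typescript
// Hedged sketch of Chrome's experimental Prompt API. The global name,
// create() options, and availability() check are assumptions based on
// published previews and may differ in shipping Chrome builds.
async function summarize(text: string): Promise<string> {
  // Feature-detect: the API only exists in browser builds that ship
  // (and have downloaded) the local model; fall back gracefully elsewhere.
  const LM = (globalThis as { LanguageModel?: any }).LanguageModel;
  if (!LM) {
    return "Prompt API unavailable";
  }
  // create() spins up a session backed by the local model; the system
  // prompt constrains what the model does for this page.
  const session = await LM.create({
    initialPrompts: [
      { role: "system", content: "You are a terse summarizer." },
    ],
  });
  try {
    return await session.prompt(`Summarize in one sentence:\n${text}`);
  } finally {
    session.destroy(); // release the model resources held by this page
  }
}
```

The feature-detection fallback matters: a page using this API works only where the vendor's model is present, which is exactly the interoperability and gatekeeping concern Mozilla raises below.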
"If using a web API means accepting a specific vendor's content policy… you're not really building for an open platform anymore." (From Mozilla GitHub & Register Summary)
"Web standards are too important to be created in any half-baked fashion, and Mozilla apparently feels that it’s too soon… Once a web standard exists… it is incredibly difficult to deprecate it." (2:02:13)
"This brings to mind the famous rhetorical question, what could possibly go wrong?" (2:12:44)
"I prefer paths that promote experimentation… to those which err on the side of stalling innovation out of fear of what might happen."
“Actual human altruism, which, believe it or not, in 2026 still exists, is now sufficient to drive what once required the promise of payment.” (1:11:15)
“Annoying chat box in the lower right corner of your screen…that’s going to end up running locally.” (2:26:54)
“We feel it has severe negative consequences to interoperability, updatability and neutrality of the web platform.”
“We don’t see AI as something that our users are asking for. Rather the opposite…” (1:43:12)
| Time   | Topic                                                                        |
|--------|------------------------------------------------------------------------------|
| 03:09  | Episode overview: Google’s Chrome AI, credit card AI goof, bug bounties      |
| 11:40  | Hackers use AI for crime—expose stolen credit cards due to no authentication |
| 21:44  | Steve & Leo discuss personal experiences with credit card theft              |
| 23:22  | U.K. NCSC’s vulnerability 'patch wave' warning                               |
| 39:00  | U.S. CISA’s struggle for AI access; politics and security                    |
| 45:00  | AI discovers new Linux privilege escalation; bug bounty crisis               |
| 65:10  | Deep dive: The end of traditional bug bounties?                              |
| 73:45  | New: Anthropic’s Claude Security launches public beta                        |
| 84:50  | OpenAI debuts Advanced Account Security for GPT users                        |
| 95:24  | Quick SyncThing / SyncTrayzor update for Windows                             |
| 99:44  | Main topic: Google’s Prompt API—overview and ramifications                   |
| 122:52 | Register/Mozilla reaction: API risks, gatekeeping, and overreach             |
| 145:27 | Wrap-up on browser bloat, AI, and web standards politics                     |
This episode is a masterclass in synthesizing security, technological innovation, industry politics, and the practical realities that users, developers, and defenders face in the age of AI. Google’s Prompt API for AI in the browser may well be the most significant browser shift since JavaScript—but also the most controversial, potentially rewriting not only technical boundaries but the open, interoperable ethos of the web.
Mozilla’s resistance, and the industry-wide conversation it has sparked, make this a must-listen episode for anyone interested in the future of both AI and the internet.