Security Now 1069: "You Can't Hide from LLMs"
Released: March 11, 2026 | Hosts: Steve Gibson & Leo Laporte
Episode Overview
In this episode, Steve Gibson and Leo Laporte explore the transformative and somewhat unsettling influence of large language models (LLMs) on cybersecurity and privacy. From Anthropic's Claude finding vulnerabilities in Firefox and developers' evolving reliance on AI-powered tools, to the ominous reality that LLMs can now de-anonymize pseudonymous identities with shocking precision, the episode is in equal measure technical, practical, and thought-provoking. Along the way, they touch on streaming device privacy, password randomness, the security of localhost services, and much more, concluding with a deep examination of new research showing how LLMs obliterate assumptions of online anonymity.
Key Discussion Points & Insights
Community & Listener Interactions at Zero Trust World
- 00:35-04:03
- Steve and Leo recall meeting fans and security professionals—including some with government backgrounds who used SpinRite for recovering data from field drives, and discovering a copy of SpinRite is even used on the International Space Station.
- Steve: "We had a story this week that somebody did some instrumentation of Firefox. 10% of Firefox crashes and failures came from bit flips, you know, in non ECC ram ... often because cosmic rays are striking your ram." (03:40)
Main Security Headlines & Show Roadmap
- 04:13-08:03
- ETH Zurich and Anthropic show LLMs are excellent at de-anonymizing internet users from small samples of public posts: "You can't hide from LLMs."
- Anthropic's rapid progress with Claude is highlighted, especially its collaboration with Mozilla to assess Firefox's security.
- Cross-platform RCS encrypted messaging between Apple and Google is coming.
- Ubuntu changes its default sudo feedback for password entry.
- The risk of inviting web proxies (Bright SDK) into smart home devices is explored.
- OpenClaw, a self-hosted AI assistant, fixed a critical remote takeover flaw.
Picture of the Week: Sidewalk Fail
- 13:02–15:32
- Listeners provided a humorous photo of a sidewalk blocked by an awkward barricade and sign—serving as a metaphor for 'obvious' security solutions.
Major Segment: Anthropic & Mozilla—AI-powered Vulnerability Discovery in Firefox
- 15:35–42:39
Key Points:
- Claude 4.6, Anthropic’s LLM, found 22 vulnerabilities in Firefox in two weeks; 14 of the 22 were high-severity and accounted for almost a fifth of Firefox’s 2025 critical vulnerabilities.
- Steve: "AI is making it possible to detect severe security vulnerabilities at highly accelerated speeds." (16:32)
- Claude reproduces and discovers vulnerabilities much faster than traditional methods.
- Claude could reproduce historical CVEs (perhaps due to overlap in training data), but more significantly, was able to find new, unreported vulnerabilities in Firefox’s JavaScript engine.
- Within 20 minutes, Claude found a use-after-free bug, and it went on to produce more than 50 unique crashing inputs.
- Claude’s exploitation capabilities lag its detection capabilities: In $4,000 worth of API calls, it managed to generate working exploits in only two cases, and even those wouldn’t have bypassed modern browser sandboxing.
- Steve: "Claude is much better at finding these bugs than it is at exploiting them ... the cost of identifying vulnerabilities is an order of magnitude cheaper than creating an exploit for them." (35:46)
- Best practices for LLM-powered bug hunting: accompany each finding with a minimal test case, a proof of concept, and a candidate patch, all vital for responsible disclosure.
- AI’s positive impact currently outweighs its potential for harm—though this window may not last.
- Steve: "If you're not using AI, get on it, because this is where this has all moved in the last couple months." (36:42)
- Steve: "AI is only going to get better. And so if you haven't cleaned your code of exploitable vulnerabilities by the time... the exploitation side, you'll wish you had." (41:01)
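The "more than 50 unique crashing inputs" figure implies some way of deciding which crashes are duplicates of the same underlying bug. A common triage technique (an assumption for illustration here, not necessarily what Anthropic used) is to bucket crashes by a hash of their top stack frames:

```python
import hashlib

def crash_bucket(stack_frames, top_n=3):
    """Bucket a crash by hashing its top stack frames.

    Crashes landing in the same bucket are treated as duplicates of one
    underlying bug; distinct buckets count as unique crashes.
    """
    key = "|".join(stack_frames[:top_n])
    return hashlib.sha256(key.encode()).hexdigest()[:12]

def count_unique_crashes(crashes, top_n=3):
    return len({crash_bucket(frames, top_n) for frames in crashes})

# Hypothetical frame names; two inputs hitting the same bug share a bucket.
crashes = [
    ["js::gc::Cell::free", "js::Array::pop", "js::Interpret"],
    ["js::gc::Cell::free", "js::Array::pop", "js::Interpret"],
    ["js::String::concat", "js::Interpret", "main"],
]
print(count_unique_crashes(crashes))  # 2
```

Bucketing by the top few frames rather than the full stack tolerates minor variation in how different inputs reach the same faulty code path.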
Segment: Web Proxy SDKs in Smart TVs—Privacy Risks
- 52:38–69:50
- Bright Data SDK has been embedded in smart TV apps, turning TVs into global residential proxy nodes for web scraping, used by AI companies for model training and data gathering.
- Users are sometimes (but not always) prompted to opt in, but background data collection can persist even after the app is closed.
- Steve: "While it might feel a little yucky, it's also diabolically clever. There's really no way to prevent it if the smart TV provider is willing to go along." (67:41)
Segment: OpenClaw Remote Takeover Vulnerability
- 72:31–86:50
- OpenClaw, a self-hosted personal AI assistant, was vulnerable to remote takeover via a brute-force attack against its local websocket server (port 18789) because localhost connections were exempt from rate-limiting.
- Malicious websites could exploit this via JavaScript running in the browser, connecting to localhost services.
- Steve: "A website visited by the user ... can itself silently open a connection to ws://127.0.0.1 ... without any user prompt, warning or permission dialog." (79:35)
- Immediate patch was shipped; update urgently if you use OpenClaw.
Notable exchange:
- Leo: "I always thought that localhost was inaccessible from the outside world."
- Steve: "It's because it's coming from your browser ... the JavaScript in your browser is able to reach into your computer because it's already in." (82:36-84:52)
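The fix is to stop exempting localhost from rate-limiting: because browser JavaScript runs on the victim's own machine, its requests arrive *from* 127.0.0.1 and look local. A minimal sketch (hypothetical, not OpenClaw's actual code) of a limiter that throttles repeated authentication attempts regardless of source address:

```python
import time

class RateLimiter:
    """Sliding-window limiter applied to every client, including 127.0.0.1.

    OpenClaw's flaw was exempting localhost; since browser JavaScript runs
    on the victim's machine, a malicious page's guesses arrive as local
    connections and were never throttled.
    """
    def __init__(self, max_attempts=5, window_seconds=60.0):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = {}  # client_id -> timestamps of recent attempts

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        recent = [t for t in self.attempts.get(client_id, []) if now - t < self.window]
        if len(recent) >= self.max_attempts:
            self.attempts[client_id] = recent
            return False  # throttled: brute force becomes impractical
        recent.append(now)
        self.attempts[client_id] = recent
        return True

limiter = RateLimiter(max_attempts=5, window_seconds=60.0)
results = [limiter.allow("127.0.0.1", now=i * 0.1) for i in range(7)]
print(results)  # first five attempts allowed, then throttled
```

Even a modest cap like five attempts per minute turns a brute-force search over a random token from seconds into centuries.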
Security News Shorts
- 47:13–52:31, 86:50–95:59
- Apple and Google will soon offer cross-platform, end-to-end encrypted RCS messaging.
- Ubuntu will now echo asterisks for sudo password entry by default (controversially).
- TikTok will not offer encrypted messaging, arguing it would increase user risk.
- Microsoft briefly banned the term "Microslop" in their Copilot Discord server to combat spam.
- Interesting listener stories about using self-signed certs for internal software, best practices for private local dev, and using CISA's free Cyber Hygiene Service for vulnerability scanning.
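The Ubuntu change mentioned above corresponds to sudo's long-standing `pwfeedback` option (Ubuntu is flipping the default; the option itself is not new). On any system it can be toggled in a sudoers drop-in, edited via `visudo` to avoid locking yourself out with a syntax error:

```
# /etc/sudoers.d/pwfeedback -- edit via: sudo visudo -f /etc/sudoers.d/pwfeedback
Defaults pwfeedback     # echo an asterisk per keystroke at the sudo password prompt
# Defaults !pwfeedback  # or explicitly disable it if your distro enables it
```

The controversy: visible asterisks leak password length to shoulder-surfers, which is why `pwfeedback` was historically off by default.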
Listener Feedback & Notable Moments
Randomness and Passwords from LLMs
- 108:57–114:20
- Listener GP tested LLMs' ability to generate secure random passwords and found that, with large enough output, character repetition was similar to openssl's random generator. However, Steve cautions that LLMs aren’t true random number generators and shouldn't be trusted for password generation.
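GP's repetition test is easy to reproduce. The sketch below uses Python's `secrets` module as the cryptographically secure baseline (standing in for openssl) and measures how often adjacent characters repeat, which for a uniform source over 62 symbols should hover near 1/62. Steve's caution still stands: an LLM sampling tokens can *look* statistically flat at scale yet is not a CSPRNG, so its output is not suitable for passwords.

```python
import secrets

ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def random_password(length):
    # secrets, not random: backed by the OS CSPRNG, suitable for secrets
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def adjacent_repeat_rate(text):
    # Fraction of positions where a character repeats its predecessor.
    # A uniform 62-symbol source should land near 1/62 ~ 0.0161.
    repeats = sum(1 for a, b in zip(text, text[1:]) if a == b)
    return repeats / (len(text) - 1)

sample = random_password(20000)
rate = adjacent_repeat_rate(sample)
print(f"adjacent-repeat rate: {rate:.4f} (uniform expectation ~{1/62:.4f})")
```

Passing a statistical test like this is necessary but not sufficient: an attacker who can influence or predict the generator's state (as with an LLM's deterministic sampling) beats any output that merely "looks random."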
Programming is Changing—AI as the Drunk Genius Colleague
- 113:58–127:12
- The role of the developer is shifting toward managing and directing AI agents rather than writing code line by line.
- Listener quote: Managing these systems is "like supervising what [was] described as brilliant but occasionally drunk PhD students."
- Steve: "It feels less like writing code line by line and more like directing the system, setting constraints, verifying outputs, and managing the behavior of these AI tools." (115:48)
- Donald Knuth, the legendary computer scientist, was "shocked" when Anthropic's Claude solved a mathematical problem he’d been working on for weeks:
- Knuth: "I learned yesterday that an open problem I had been working on for several weeks had just been solved by Claude Opus 4.6 ... What a joy it is to learn not only that my conjecture has a nice solution, but also to celebrate this dramatic advance in automatic deduction and creative problem solving." (127:13)
Headline Segment: "You Can't Hide from LLMs" — De-anonymization Breakthrough
- 145:29–157:27
ETH Zurich & Anthropic Research on LLM-powered De-anonymization
Summary & Insights
- Recent paper: “Large-Scale Online De-Anonymization with LLMs”
- LLMs can now, with near-perfect precision, de-anonymize internet users by comparing their pseudonymous online writings, even across multiple platforms, against public identity clues from LinkedIn, Reddit, Hacker News, etc.
- LLMs extract identity-relevant features, search for candidate matches with semantic embeddings, and then reason over the top matches to confirm identity.
- Steve reading abstract: "Our agent can re-identify Hacker News users and Anthropic interview participants at high precision ... given pseudonymous online profiles and conversations alone, matching what would take hours for a dedicated human investigator." (145:29)
- This method works on unstructured user content (not just structured datasets) and obliterates the practical obscurity that previously protected pseudonymous users online.
- Steve: "The practical obscurity protecting pseudonymous users online no longer holds, and threat models for online privacy need to be reconsidered." (146:36)
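The pipeline described above (extract identity-relevant features, rank candidates by semantic similarity, then reason over the top matches) can be sketched at toy scale. In this illustration a bag-of-words cosine similarity stands in for the paper's LLM embeddings and reasoning steps; the profile names and text are hypothetical, and the point is only to show the candidate-matching stage.

```python
import math
import re
from collections import Counter

def features(text):
    # Stand-in for LLM feature extraction: a word-frequency vector.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_candidates(pseudonymous_post, public_profiles):
    # Rank known public profiles by topical/stylistic similarity to the post.
    anon = features(pseudonymous_post)
    scores = {name: cosine(anon, features(text)) for name, text in public_profiles.items()}
    return sorted(scores, key=scores.get, reverse=True)

profiles = {  # hypothetical public writing samples
    "alice": "I maintain a rust crate for embedded firmware and love lifetimes",
    "bob": "sourdough starter tips and my weekend marathon training log",
}
post = "debugging lifetimes in my rust firmware crate again tonight"
print(rank_candidates(post, profiles)[0])  # alice
```

What makes the real attack alarming is that LLM embeddings capture far subtler signals than shared vocabulary (phrasing habits, beliefs, timezone clues), and the final LLM reasoning pass over top candidates confirms matches a keyword ranker would miss.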
Threats and Implications
- LLM-powered de-anonymization is now scalable, practical, and will be available to adversaries large (governments, corporations) and small (scammers, stalkers).
- Law enforcement and intelligence can use LLMs to de-anonymize and surveil dissidents, journalists, and activists via writing style, word choice, beliefs, etc.
- Steve: "We're each individually leaving identifying content in everything we post ... this wasn't an issue, since the cost ... was astronomically high ... The emergence of LLM technology has forever changed this calculus." (153:08)
- Paper quote: "Users, platforms, and policymakers must recognize that the privacy assumptions underlying much of today's Internet no longer hold." (155:07)
On the LLM Revolution
- Leo: "I'm surprised it can do this... that's really kind of surprising."
- Steve: "How surprised are we that it can talk? ... If it can talk, this is just something else that it can do." (157:27–158:22)
Memorable Quotes
- Steve Gibson: "Frontier language models are now world class vulnerability researchers ... AI is only going to get better."
- Donald Knuth, as read by Steve: "Shock, exclamation point. I learned yesterday that an open problem I had been working on for several weeks had just been solved by Claude Opus 4.6 ... What a joy it is to celebrate this dramatic advance in automatic deduction and creative problem solving." (127:13)
- ETH Zurich via Steve: "We argue that the asymmetry between attack cost and defense cost may force a fundamental reassessment of what can be considered private online." (155:07)
- Leo Laporte: "A website can host some JavaScript which I then download and run on my browser, which then ... can connect back to 127.0.0.1 and attempt to sync? ... What's protecting my syncthing instance?" (82:36)
Timestamps for Key Segments
- [00:35] Zero Trust World reflections, SpinRite in the field
- [13:20] Picture of the week (sidewalk fail)
- [15:35] Anthropic, Mozilla & AI bug hunting in Firefox
- [31:00] Limitations/exploits and how AI-powered bug hunting works in practice
- [52:38] Bright Data proxy SDK in smart TVs (privacy exploit)
- [72:31] Apple passes NATO security audit; OpenClaw remote takeover vulnerability
- [82:36] Localhost brute force risk discussion
- [108:57] Password generation randomness, LLMs, and entropy
- [113:58] The changing nature of programming—AI as a boozy genius assistant
- [127:13] Donald Knuth’s "Claude's Cycles" and the shock of LLM deductive success
- [145:29] ETH Zurich/Anthropic: LLMs as de-anonymization engines—main headline
- [157:27] Discussion: Are LLMs a discontinuity? Privacy forever changed
Summary Takeaways
- AI/LLM cyber research is now the state of the art: If you’re a developer, you need to integrate LLM-driven code and vulnerability auditing now or fall dangerously behind.
- Localhost ≠ Safe: Local services may be vulnerable to attack via browsers—patch, update, and audit your configurations.
- Pseudonymity is dead: Online 'nicknames' and pseudo-identities can be mapped to real-world people via LLM-powered semantic analysis—at scale and with high precision.
- AI’s empowerment of developers—and attackers—is permanent: The window in which defenders have a significant head start is closing; both sides will have powerful tools.
- No more practical obscurity: The bar for online privacy just got much, much higher.
For further reading, see the episode's show notes for a direct link to the ETH Zurich/Anthropic de-anonymization research paper and other resources discussed.