Security Now #1069: "You Can't Hide from LLMs – Was Your Smart TV a Stealth Proxy?"
Date: March 11, 2026
Hosts: Steve Gibson and Leo Laporte
Episode Overview
This episode explores the rapidly evolving impact of large language models (LLMs) on cybersecurity, including their ability to both improve software security and threaten personal privacy. Steve and Leo discuss how LLMs are uncovering vulnerabilities in critical software, highlight startling new research on LLM-powered de-anonymization, and report on the latest issues concerning smart TVs acting as stealth proxies. The episode also features lively listener Q&A, analysis of recent security news, and practical advice for users and developers alike.
Key Discussion Points & Insights
1. In-the-Field Experiences and SpinRite Stories
- The hosts revisit anecdotes from their recent appearance at Zero Trust World, including meeting high-profile listeners and stories of SpinRite's use in surprising places (e.g., data recovery from Taliban drives and on the International Space Station).
- Quote: “SpinRite was up in space. There’s a copy on the space station. The ISS has it. They use it all the time apparently...” — Steve Gibson, [03:21]
2. LLMs and Software Security: Claude & Mozilla Tackle Firefox Vulnerabilities
[13:02–42:01]
- Partnership Details: Anthropic's Claude Opus 4.6 (LLM) was used with Mozilla to scan Firefox for vulnerabilities.
- Findings: Claude found 22 vulnerabilities in two weeks, 14 rated high severity by Mozilla—a fifth of all such issues patched in 2025.
- Implications: AI is already world-class at finding vulnerabilities; it greatly shortens the "find and fix" window for defenders, but is not (yet) as good at developing working exploits.
- Exploit Development: Claude was able to weaponize only two vulnerabilities in controlled environments, required $4,000 in API credits, and still couldn't bypass modern browser sandboxes.
- Best Practices: Close collaboration between LLM researchers and software maintainers is essential. Proper triage, proof-of-concept, and reproducibility are vital when submitting AI-generated bug reports.
- Future Risks: The gap between finding and exploiting vulnerabilities will likely close, necessitating proactive adoption of AI-powered code auditing tools.
- Quotes:
- “Frontier language models are now world-class vulnerability researchers. That statement is not hyperbole.” — Steve Gibson, [35:43]
- “We met a listener in Florida during the Zero Trust World who is now earning a full-time income bug hunting. … If you’re not using AI, get on it because that’s where this has all moved…” — Steve Gibson, [36:41]
- Timestamps:
- [13:02] – Start of Claude/Mozilla Discussion
- [31:06] – Task verifiers and responsibility in AI-powered bug reporting
- [36:41] – Full-time income from AI-powered bug hunting
3. Latest Security News Highlights
a) RCS Messaging Encryption
[47:11–50:14]
- Apple and Google start testing encrypted RCS messages cross-platform, finally improving privacy for Android and iOS messaging.
- Rollout details and beta requirements explained.
b) Ubuntu's 'sudo' Command Change
[50:14–52:37]
- Ubuntu 26.04 LTS now echoes an asterisk for each password character typed at the sudo prompt, a departure from the traditional silent password entry of Unix/Linux.
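Ubuntu's behavior maps onto sudo's long-standing pwfeedback option, which any distribution can toggle in a sudoers drop-in; whether Ubuntu 26.04 flips exactly this flag by default is an assumption here. A minimal sketch (always edit sudoers files with visudo so a syntax error cannot lock you out):

```
# /etc/sudoers.d/pwfeedback
# Show an asterisk for each password character at the sudo prompt:
Defaults pwfeedback

# Or restore the traditional silent prompt:
# Defaults !pwfeedback
```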
c) Smart TVs as Stealth Residential Proxies
[52:37–69:50]
- Key Story: Some streaming apps (e.g., via Bright Data SDK) can turn users' smart TVs into residential proxies: their IP, bandwidth, and devices are used to crawl the web and bypass scraper bans.
- Legitimate uses are touted ("journalists, nonprofits"), but the capability also enables AI-training-data scraping and could provide cover for malicious activity.
- Many platforms (Roku, Amazon) are now banning such SDKs in response.
- Quotes:
- “With Bright’s SDK, a viewer’s smart TV becomes part of a massive global proxy network that crawls and scrapes the web…” — Steve Gibson, [53:20]
- “It’s diabolically clever. There’s really no way to prevent it if the smart TV provider is willing to go along.” — Steve Gibson, [67:54]
- Leo’s Advice: “Best thing to do with a smart TV is just not connect it to the Internet… Hook up your Apple TV to it.” — [69:28]
4. Other Noteworthy Security & Privacy Updates
[72:29–82:27]
- Apple iPhones/iPads: Now cleared for NATO sensitive use (Germany audit).
- OpenClaw Vulnerability: Major remote-takeover flaw fixed; highlights the danger of web pages connecting to localhost services over WebSockets, whose handshakes are exempt from CORS, when the service performs no origin checking or rate limiting.
- “A web page visited by the user… can silently open a connection to ws://127.0.0.1… without any user prompt, warning, or permission dialog.” — Steve Gibson, [76:00]
- TikTok DMs Remain Unencrypted: reportedly left unencrypted to permit security monitoring.
- Microsoft Discord Censorship Drama: "Microslop" keyword saga and community response.
5. Listener Feedback & Practical Tips
[93:20–139:59]
- LLM Password Generation: Don’t use LLMs for passwords—fundamentally not suitable for high-entropy randomness.
- TTL Security for Routers: TTL/hop limit as a means of filtering remote attacks (RFC 5082).
- AI and the Role of Programmers: Listeners and hosts reflect on programming transforming into prompting and verifying LLM output—less manual coding, more orchestration (the "brilliant but drunk PhD student" analogy).
- “It feels less like writing code line by line and more like directing the system, setting constraints, verifying outputs, and managing the behavior of these AI tools.” — Listener Brian Dort, [119:53]
- Donald Knuth's Praise for Claude Opus 4.6:
"Shock! I learned yesterday that an open problem I had been working on for several weeks had just been solved by Claude Opus 4.6… This was definitely an impressive success story." — Donald Knuth (read by Steve Gibson), [127:13]
- CISA's Free Cyber Hygiene Scanning: Now available to a broader range of organizations (not just government); highly recommended for any eligible enterprise.
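The TTL technique mentioned above, GTSM from RFC 5082, exploits the fact that routers can only decrement a packet's TTL, never raise it: if the legitimate peer sends with TTL 255 and the receiver drops anything that arrives lower, no remote attacker can get a packet through. A sketch of the idea as an iptables rule protecting a BGP session on port 179 (the rule form is illustrative, not from the episode):

```
# RFC 5082 GTSM: the directly connected peer sends with TTL 255; any
# packet that has crossed even one router arrives with TTL <= 254.
iptables -A INPUT -p tcp --dport 179 -m ttl --ttl-lt 255 -j DROP
```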
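The advice against LLM-generated passwords follows from what each tool actually does: an LLM samples plausible-looking text (and may log your prompt server-side), while a password needs uniform randomness from a cryptographically secure generator. Python's standard secrets module provides the latter; a minimal sketch (the length and alphabet are arbitrary choices, not from the episode):

```python
import secrets
import string

def make_password(length: int = 20) -> str:
    """Draw each character uniformly from a 94-symbol printable alphabet.

    Entropy is length * log2(94), about 6.55 bits per character, so 20
    characters gives roughly 131 bits; an LLM's output is biased toward
    "plausible" strings and offers no such guarantee.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```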
Deep Dive Topic: "You Can’t Hide from LLMs" – LLM-Powered De-anonymization at Scale
[145:01–157:59]
- Key Research: ETH Zurich + Anthropic paper: LLMs can de-anonymize users on an unprecedented scale using only writing style and unstructured online content.
- Experiments: Demonstrated 99% accuracy on re-identifying pseudonymous users across platforms like Hacker News, Reddit, LinkedIn.
- Process: LLMs efficiently extract identity-relevant features, search for candidates, and reason about true matches—far outperforming prior (non-AI) methods.
- Implications:
- Cost and effort of de-anonymization have dropped dramatically; privacy by pseudonym is effectively broken.
- Risks: surveillance of dissidents and journalists, hyper-targeted ads, sophisticated spear-phishing/social engineering.
- Quotes:
- “LLMs fundamentally change the picture, enabling fully automated de-anonymization attacks that operate on unstructured text at scale.” — ETH Zurich paper (read by Steve Gibson), [146:42]
- “We show that the practical obscurity... no longer holds.” — [146:59]
- “The emergence of LLM technology has forever changed this calculus.” — [150:56]
- Discussion: Both hosts emphasize how the privacy assumptions underlying much of the Internet are now outdated:
- “There’s nothing any of us can do, but it might be worth keeping it in mind. … We are—if somebody deploys something like this—we’re leaving our footprints everywhere.” — Steve Gibson, [154:51]
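The paper's pipeline (extract features, search candidates, reason about matches) generalizes a much older stylometric idea: even crude character n-gram profiles can link texts by the same author. A toy sketch of that pre-LLM baseline (the texts, names, and functions are invented for illustration):

```python
from collections import Counter
from math import sqrt

def ngram_profile(text: str, n: int = 3) -> Counter:
    """Frequency profile of character n-grams, a classic stylometric feature."""
    t = text.lower()
    return Counter(t[i:i + n] for i in range(len(t) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[g] * b[g] for g in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_match(sample: str, candidates: dict[str, str]) -> str:
    """Return the candidate author whose known text is stylistically closest."""
    p = ngram_profile(sample)
    return max(candidates, key=lambda who: cosine(p, ngram_profile(candidates[who])))
```

An LLM replaces every stage of this pipeline with something far stronger: it extracts semantic clues (jargon, timezone hints, biographical fragments) rather than character statistics, which is what pushes re-identification to the accuracy levels the paper reports.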
Memorable Quotes & Moments
- “No one should doubt the degree to which AI has, is, and will be changing the landscape for security research. I mean, it’s here already…” — Steve Gibson, [39:44]
- "Claude was able to weaponize only two vulnerabilities... But vulnerabilities that escape the sandbox are not unheard of." — Steve Gibson, [29:56]
- “If it’s possible, somebody will do it… Not that it’s a good idea, but it can be done.” — Steve Gibson, [67:39]
- “It’s a double-edged sword. Great for your business, but there are downsides.” — Leo Laporte, on AI, [44:14]
- “It’s the equivalent of the experience we initially had a couple years ago when the thing started talking. It’s like… Oh, my. What? What?” — Steve Gibson, [42:16]
- “The identity of a programmer seems like it may be evolving into something more like an orchestrator of intelligent tools.” — Listener Brian Dort, [120:40]
Timestamps for Important Segments
- [13:02] – Start of Claude/Mozilla/Finding Security Vulnerabilities with AI
- [31:06] – Task verifiers, responsible use of AI in bug triage
- [36:41] – AI bug bounty economy
- [47:11] – RCS encryption update
- [52:37] – Smart TVs as stealth proxies and privacy concerns
- [72:29] – Apple cleared for NATO sensitive use, OpenClaw security issue
- [93:20] – Listener Q&A: LLM password generation, TTL security, the future of programming
- [127:13] – Donald Knuth on Claude Opus 4.6 solving a previously unsolved problem
- [145:01] – ETH Zurich’s LLM-powered de-anonymization study
- [154:51] – Summary and implications for online privacy
Final Notes
This episode underscores a key turning point: AI-driven security research and privacy attacks are no longer futuristic; they are here now, and they are breaking assumptions that once constrained both attackers and defenders. The most important takeaway is the urgent need for both individual and organizational adaptation: if your code, data, or privacy-protection strategy doesn’t already account for AI’s new capabilities, it’s time to catch up or get left behind.
For more notes, episode links, and transcripts:
Security Now – "The Future is Here"