Risky Business #813 – FFmpeg Has a Point
Release Date: November 5, 2025
Host: Patrick Gray
Guest: Adam Boileau
Episode Overview
This week’s Risky Business dives into the evolving challenges of vulnerability disclosure in open-source projects, spotlighting the recent AI-fueled fuzzing drama between Google and the FFmpeg project. Patrick Gray and Adam Boileau discuss not only the cultural tensions at play, but also the implications for AI in security testing, vulnerability management, and the future of SAST and bug bounties. The episode wraps up with news on malicious insiders, cyber-enabled freight theft, the latest developments in malware law enforcement, and insights into vulnerability triage and remediation with Scott Kuffer from Nucleus Security.
Key Discussion Points & Insights
1. The FFmpeg Disclosure Drama (00:00–12:58)
- Background: Google’s DeepMind found and reported bugs in FFmpeg, leading to FFmpeg pushing back—requesting patches rather than just reports.
- Open Source & Security Culture Clash:
- Adam observes the tensions: “Other projects just … are there for community, they have other priorities … interacting with the modern security research community … probably could be a little frustrating.” (02:24)
- Patrick’s take: “If you want to do security research that's helpful, just grabbing a bunch of bugs out of an AI model and dumping them onto busy people who don’t get paid to fix them I think is just not helpful.” (04:39)
- Debate on Responsibilities: Should big players like Google be expected to do more than just report bugs to resource-strapped projects?
- On Rigid Disclosure Timelines:
- Adam: “If you show up and start making demands, then that’s just kind of rude. And these communities are ultimately about good neighborliness, right?” (06:08)
- Patrick: “Giving FFmpeg the same disclosure terms as Fortinet … just feels a little bit dumb.” (08:38)
- Notable Quote:
- “The current drama is plucky security researcher Google takes on volunteer open source behemoth ffmpeg.” (Patrick, quoting Grok, 08:38)
- Positive Example:
- Rob Graham patches the bug himself—Patrick calls this “the boss move” (11:53).
2. AI’s Disruption in Security Testing (12:58–22:53)
- OpenAI’s ‘Aardvark’:
- OpenAI introduces a reasoning-driven AI agent for vulnerability discovery, reportedly more collaborative with projects than Google’s approach.
- Patrick quotes OpenAI’s disclosure policy: “We recently updated our outbound coordinated disclosure policy which takes a developer friendly stance focused on collaboration and scalable impact rather than rigid disclosure timelines …” (14:51)
- SAST is ‘Over’?:
- Both hosts discuss the diminishing value proposition for static analysis tools compared to AI-driven bug discovery.
- Patrick relays an industry founder’s verdict:
“He’s like, I’m not going to bother trying to raise or build anything around this because it took me a couple days … existing models … worked so unbelievably well … SAST is over.” (13:55)
- Bug Bounties Threatened by AI:
- Recent Bugcrowd acquisition of Mayhem Security prompts reflections on the existential risk to bounty platforms.
- “Most of bug bounties ... is, I mean, a lot of it ... labor arbitrage ... and AI does scale.” (Patrick, 18:03)
- Wild AI Valuations and Shortcomings:
- Both hosts share skepticism about current overhype despite clear real-world impact (“AI shares are at infinity valuations.” – Patrick, 23:22)
3. Insider Threats and Cybercrime (24:16–36:37)
- Ransomware Negotiator Indicted:
- The US DOJ indicts an incident responder and ransomware negotiator accused of carrying out ransomware attacks on the side, an insider-gone-bad case the hosts compare to the Peter Williams espionage/leak story.
- Adam: “People who work in this industry … you would kind of hope, understand that ecosystem a little better than average.” (24:16)
- Peter Williams/Trenchant Updates:
- Further reporting on how Williams kept selling exploits even after the leak was discovered, and on the broad access he had as general manager.
- Adam: “You can't reasonably expect a place like that to protect against ... the general manager ... you just got to trust people.” (26:54)
- Public Sector vs. Private Sector Security Tensions:
- Patrick recaps a conversation with Citizen Lab’s John Scott-Railton about the risks of rapidly scaling up the private sector’s offensive security capacity.
- On the effectiveness of regulation vs. market pressure (e.g., NSO being punished by the Biden administration).
4. Spyware Vendor Transparency and Operator Controls (36:37–38:31)
- Memento Labs Admits Attribution:
- The Italian spyware vendor publicly confirms Kaspersky’s findings that its tools were used against Russian targets, blaming clients for the misuse.
- Adam: “It’s kind of a bold move to come out and … out yourself as a vendor that's selling tooling to an adversary ... I can't imagine he ran this past Corp comms.” (35:59)
- Vendor Policing and Norms:
- Discussion on the spectrum of responsible sales, and where oversight is entirely lacking.
5. Technical Vulnerabilities and Research (36:53–40:30)
- Teams Social Engineering Bugs (Check Point Research):
- Vulnerabilities in Microsoft Teams allowed attackers to impersonate colleagues, edit messages without leaving an edit trace, and manipulate chat titles, all useful social engineering vectors.
- Adam: “Teams is a nasty, thickety, thorny mess and I’m glad I don’t have to use it every day now.” (36:53)
- Proofpoint – Cyber-Enabled Freight Theft:
- Hackers infiltrate logistics software to hijack real-world shipments, blurring the line between cybercrime and traditional cargo theft.
- Adam: “Anytime we see a new mechanism for turning cyber into money ... that’s always an interesting time.” (40:30)
6. Law Enforcement and Malware Developments (41:35–47:04)
- Ransomware & Infostealer Busts:
- A Conti affiliate makes a court appearance in the US, and in a rare move Russian police take action against the Meduza Stealer authors.
- Krebs on Jabber Zeus Coder Arrest (42:59):
- Brian Krebs details the extradition/arrest of a Ukrainian developer behind early two-factor authentication bypass malware.
- The Russia-Ukraine war has shifted some Eastern European cybercriminals into the reach of Western law enforcement.
- WSUS (Windows Server Update Services) Fallout:
- Around 50 victims were compromised through continued use of the deprecated WSUS service, despite Microsoft’s guidance to move off it.
- Adam critiques the difficulty of adopting Intune and other modern update mechanisms.
7. DNSSEC, TLS, and the Reality of Security Hygiene (47:04–50:59)
- Listener Corrections & Real-World Churn:
- A listener writes in about the Kaminsky DNS bug, advocating DNSSEC as the true fix.
- Irony: Risky Biz had to disable DNSSEC to enable modern TLS certificate automation—highlighting persistent practical friction.
- Adam: “Linux on the desktop is actually usable, whereas dnssec honestly is mostly just about causing outages.” (50:54)
Sponsor Interview Highlights: Scott Kuffer, Nucleus Security (52:46–64:21)
Main Theme: Vulnerability management can no longer be solved by better prioritization alone; the volume and complexity of findings mean that even the “5% that matter” is more than teams can physically fix.
- Quote: “What we're seeing is ... more data that's being presented ... it's really more of a volume issue ... We're getting worse in aggregate at fixing the new numbers of vulnerabilities that are coming.” (Scott Kuffer, 52:46)
- Historic Approach Has Peaked:
- “...the approach has been how do we just whittle that down to the point where we only look at the 5% that matter. And that works ... until the 5% that matter are more than what you can actually physically fix.” (54:41)
- AI’s Uncertain Role in Patch Automation:
- Patrick: “Why do you need AI for that part? ... It doesn’t seem … the missing piece is going to be filled with AI. … It’s just a fiddly, horrible, unpredictable thing, which is why it’s hard.” (58:46)
- Scott: “Everybody is super excited right now about the AI hype train ... but most of what you're trying to automate is declarative automation …” (59:04)
- On AI Replacing SAST and Code Audits:
- Patrick relays a founder’s insight: “It is over. This is a market segment that won’t exist in a couple of years.” (61:02)
- Scott concurs: “I would agree that within a few years, we will see ... super widespread adoption of this because it’s going to end up being more efficient and effective to do that ... we’re hitting upper limits with some of these traditional tools …” (62:46)
Notable Quotes & Memorable Moments
- “If you want to do security research that's helpful ... just grabbing a bunch of bugs out of an AI model and dumping them onto busy people who don't get paid to fix them I think is just not helpful.”
— Patrick Gray, 04:39
- “OpenAI are not publicizing the bugs that they are finding other than cherry picking a few here and there. They did find apparently an off by one in OpenSSH, which, that's a codebase that has had a lot of eyes on it over the years.”
— Adam Boileau, 13:55
- “It must just be wild to not be cost constrained and to see what the possibilities of [these AI models] are. And the rest of us are stuck out here using the … pennies models and seeing trash results.”
— Adam Boileau, 23:03
- “Linux on the desktop is actually usable, whereas dnssec honestly is mostly just about causing outages.”
— Adam Boileau, 50:54
- “It's the volume game over, the super precise precision game. And I think the reality is we need to do both.”
— Scott Kuffer, 60:26
Timestamps for Key Segments
- [00:00–12:58] FFmpeg, AI, and the changing norms of disclosure
- [12:58–22:53] AI transforms bug hunting, SAST, and bug bounties
- [22:53–36:37] Insider threats, ransomware operator busts, and leaker updates
- [36:37–38:31] Spyware vendor transparency and controls
- [36:53–40:30] Teams vulnerabilities and logistics/freight crime
- [41:35–47:04] Law enforcement in cybercrime and WSUS update issues
- [47:04–50:59] DNSSEC, TLS certificates, and real-world operational friction
- [52:46–64:21] Sponsor interview: Scott Kuffer (Nucleus Security) on vulnerability management’s future
Tone & Style
- Engaged, fast-paced, straight-talking, skeptical of hype, and culturally aware
- Willingness to challenge prevailing cybersecurity dogmas while appreciating the realities of operational work
- Wry humor and industry insider banter throughout
For risk professionals, engineers, and managers, this episode presents a timely snapshot of infosec’s multifaceted challenges: technical, cultural, and operational. It’s a must-listen for understanding not just what’s happening but why, and where the cracks are forming as AI reshapes the landscape.
