CyberWire Daily – “Closing cracks before hackers do.”
Date: November 12, 2025
Host: Dave Bittner (N2K Networks)
Featured Guest: Bob Maley, CSO at Black Kite
Episode Overview
This episode delivers a comprehensive snapshot of the latest cybersecurity developments, covering critical security patches, high-profile lawsuits against cybercriminal groups, significant breaches, and the disruption of a notorious info-stealer operation. The centerpiece of the episode is an interview with Bob Maley, CSO at Black Kite, discussing the growing pains of AI risk assessment and the launch of the open BKGA3 AI risk assessment framework. The episode closes with coverage of the collapse and prosecution of a massive crypto-laundering empire.
Key News Segments & Insights
1. Patch Tuesday Roundup – Addressing Critical Flaws
[02:00]
- Microsoft: Over 60 security vulnerabilities patched, including a “race condition and double free bug” that allows low-privilege attackers to corrupt kernel memory and obtain full system privileges.
- “Chaining it with other flaws could enable full system compromise, credential theft and ransomware deployment.”
- Windows GDI Vulnerability: Critical remote code execution bug (CVSS 9.8) — triggered via crafted image file uploads, emphasized as a top priority for all internet-facing systems.
- Industrial Control Systems: Siemens, Rockwell Automation, AVEVA, and Schneider Electric publish advisories after new vulnerabilities are discovered in their ICS and OT offerings, highlighting the risk of unauthorized access and potential operational disruption.
- Adobe: 29 vulnerabilities addressed across multiple suite products; none currently exploited in the wild.
- Intel: 30 advisories on 60+ vulnerabilities affecting core processors, drivers, and firmware, some enabling privilege escalation.
- Ivanti & Zoom: Both patched multiple high-severity vulnerabilities; Ivanti's flaws affect all Endpoint Manager versions prior to 2024 SU4.
2. Noteworthy Threat & Legal Actions
[07:00]
- Google vs. Lighthouse: Google sues a China-based “phishing-as-a-service” network for orchestrating global SMS scams via 32,000+ fraudulent sites impersonating the US Postal Service and Google.
- “Google's goal isn't prosecution, but deterrence, seeking a court declaration that Lighthouse's infrastructure is illegal to help other platforms shut it down and protect users from future phishing campaigns.”
- Google Private AI Compute: New platform ensures AI (Gemini models) can process user data in a sealed, hardware-secured environment — delivers “cloud AI with the privacy of on-device processing.”
- Hyundai Data Breach: Hyundai’s digital arm notifies vehicle owners after a breach that exposed names, SSNs, and driver’s license details; 2,000 of 2.7M users were affected during nine days of unauthorized access.
- “The breach underscores growing industry concern over how automakers protect driver data.”
- Amazon AI Bug Bounty: Amazon invites select researchers to probe its Nova LLMs for vulnerabilities such as prompt injection, jailbreaking, and the potential to aid WMD development.
- Aim: “To strengthen AI safety across its ecosystem, which powers services like Alexa and AWS Bedrock.”
- Rhadamanthys Infostealer Disruption: Law enforcement likely takes down the malware’s infrastructure; users report lost access, and researchers suspect the involvement of the multinational Operation Endgame.
- Initial Access Broker Guilty Plea: Russian national Alexei Olegovich Volkov pleads guilty to selling stolen creds to ransomware groups; agreed to pay $9M in restitution.
- “His case highlights the growing specialization within ransomware operations.”
Feature Interview: AI Risk Assessment with Bob Maley, CSO at Black Kite
Segment begins at [14:04]
The Pressures of Third-Party Risk Management in AI
- “Extreme. In one word, that's all you need to think about. It's such a rapidly changing industry that it's hard to keep up...” – Bob Maley [14:05]
- Key pain points: Technology evolves too fast for organizations to keep up, especially for third-party assessments.
Why Existing Risk Frameworks Fall Short
- Maley notes that frameworks lag because AI’s pace is exponential:
“A lot of people think that AI is a completely separate entity...and that is a total misconception. A lot of the underpinning infrastructure that AI runs on, you're already assessing, but there are obviously new components about it.” [14:50]
- The landscape is highly fragmented across industries and regions, increasing complexity and confusion.
The Analogy: A City with No Unified Building Code
- Maley likens AI risk frameworks to a chaotic city where every block uses different building codes:
“Imagine a city where every city block, they've developed their own slightly different building code.” [16:30]
- He estimates over 50 frameworks out there — with new ones emerging monthly.
The Genesis of BKGA3 – Towards an Open Standard
[18:22]
- Frustration with static, non-agile legacy approaches inspired Maley to develop the BKGA3 framework — designed for adaptability and openness.
- “The reason why I look at a third party and I’m assessing them for risk is I want to understand and reduce the amount of surprise in that relationship. Surprise is the unknown uncertainty and I don’t want to be surprised.” [18:42]
- Black Kite’s DNA is about openness — “making things better for the global community.” [20:19]
- Critique of closed, paid frameworks:
“Some of the frameworks that are maybe de facto standards, they're not free. You have to pay...that really goes against...making things better for the global community.” [20:44]
Building the Framework: Reviewing 50+ Standards, Simplifying Complexity
- Used AI to semantically analyze and categorize requirements, identifying overlaps:
“Eight or nine frameworks...looking at the same control, the language is a little different, but ultimately they're trying to get to the same point.” [21:51]
- Black Kite’s team scaled up Maley’s approach, continuing its tradition of leveraging AI for security-document analysis.
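The consolidation step Maley describes — spotting controls that "are trying to get to the same point" across frameworks — can be sketched in miniature. This is an illustrative stand-in, not Black Kite's actual pipeline: a real system would use semantic embeddings, whereas this stdlib-only sketch groups near-duplicate control requirements by lexical (Jaccard) overlap, with hypothetical framework names and control IDs.

```python
# Illustrative sketch (NOT Black Kite's actual method): grouping overlapping
# control requirements from multiple AI risk frameworks. A production pipeline
# would compare semantic embeddings; token-set Jaccard similarity is a
# dependency-free stand-in. Framework names/IDs below are invented.

def tokens(text):
    """Normalize a requirement into a set of lowercase words."""
    return {w.strip(".,").lower() for w in text.split()}

def similarity(a, b):
    """Jaccard overlap of the two requirements' token sets (0.0 to 1.0)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def group_controls(controls, threshold=0.5):
    """Greedy clustering: each control joins the first group whose
    representative requirement it resembles above the threshold."""
    groups = []
    for name, text in controls:
        for group in groups:
            if similarity(text, group[0][1]) >= threshold:
                group.append((name, text))
                break
        else:
            groups.append([(name, text)])
    return groups

controls = [
    ("FW-A 1.2", "Maintain an inventory of all AI models and systems in use"),
    ("FW-B 3.1", "Maintain an inventory of AI systems and models in use"),
    ("FW-C 2.4", "Require human review of high-risk AI decisions"),
]

for group in group_controls(controls):
    print([name for name, _ in group])
```

The first two (hypothetical) controls differ only in wording and land in one group; the third stands alone. Scaling this idea to 50+ real frameworks is where LLM-based semantic analysis earns its keep, since paraphrases rarely share enough vocabulary for lexical matching alone.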
Surprises and Evolution in AI Risk
- Speed remains the biggest surprise:
“The only thing unexpected is it's growing faster than I even imagined.” [23:15]
- Maley recounts experimenting with different LLMs to assess bias — and publishing AI-generated articles in a playful contest between models.
Keeping BKGA3 Relevant
- Plans include leveraging automation and AI for continuous updates, tracking both framework changes and emergent AI risks.
- “It will grow...That's one of the A's, adaptability to be able to adapt to that ever-changing world, whether it’s from the compliance or from bad actors, and updating it on a much more frequent basis than most frameworks get done.” [25:15]
Responsible AI Risk Management
- Maley acknowledges that “responsible AI” means different things to different people, but for him it centers on reducing bias as much as possible.
- “I don’t think that we'll ever be able to completely remove all bias because it's a fundamental human frailty...” [27:00]
Open Access and Community Involvement
- The BKGA3 framework is to be released as a free resource via Black Kite’s website, reflecting the company’s mission to openly share security tools and research:
“The research has a far greater value if you put it out there for everybody else so they can take advantage of it as well.” [28:27]
Final Headline: “Bitcoin Queen” Crypto-Laundering Empire Collapses
[29:20]
- London’s Southwark Crown Court sentences Zhimin Qian (aka Yadi Zhang), the “Bitcoin Queen,” to nearly 12 years in prison for laundering $7.3B from a Chinese crypto scam.
- “Police eventually seized 61,000 bitcoin worth 5.5 billion pounds, the largest cryptocurrency haul ever recorded.”
- Case highlights law enforcement's growing reach into digital finance crimes:
“...while money may talk in crypto, it also leaves a paper trail—just with fewer trees.”
Notable Quotes
- Bob Maley on risk management:
“I want to understand and reduce the amount of surprise in that relationship. Surprise is the unknown uncertainty, and you know, I don't want to be surprised.” [18:42]
- On the pace of AI:
“AI is a technology that has been advancing faster than anything we've ever experienced before...now it seems like every week a particular LLM is coming out with something new and better.” [15:46]
- On openness:
“Everything that Black Kite does is based on open standards. And since there was no common open standard for AI, why not create one and put it out there...?” [20:44]
- Industry adaptation:
“The bad actors are very agile. They're using AI. So being able to produce something that helps an assessment process become more agile and keep up.” [25:25]
- On responsible AI:
“Responsible AI is to reduce the biases as much as possible. I don’t think that we'll ever be able to completely remove all bias because it's a fundamental human frailty...” [27:00]
Important Timestamps
- 02:00 – Patch Tuesday summary: major vulnerabilities and advisories
- 07:00 – Google’s anti-phishing lawsuit and Private AI platform
- 09:10 – Hyundai data breach, Amazon bug bounty program, Rhadamanthys disruption, ransomware broker guilty plea
- 14:04 – Bob Maley interview on AI risk and the BKGA3 framework:
- 14:04 – 21:00 – Landscape and fragmentation of AI risk frameworks
- 21:00 – 23:15 – Using AI to consolidate frameworks
- 23:15 – 26:31 – Evolving capabilities/keeping frameworks updated
- 26:31 – 27:42 – Responsible AI risk management
- 27:42 – 28:56 – Open resource access and community value
- 29:20 – “Bitcoin Queen” sentencing and largest ever crypto seizure
Conclusion
This episode highlights the expanding complexity, scale, and urgency of cybersecurity — spanning critical infrastructure, AI safety, global cybercrime, and cutting-edge risk frameworks. It emphasizes the importance of adaptable, open standards in AI governance and risk management, underpinned by the ethos of transparency and collaboration within the cybersecurity community.
For further information on the BKGA3 framework, listeners are directed to the Black Kite website, where updates and free resources will soon be available.
