CyberWire Daily (N2K Networks)
Episode: Where encryption meets executive muscle — December 19, 2025
Host: Dave Bittner
Guest: Nitei Milner, CEO of Orion Security
Episode Overview
This episode delivers a brisk, comprehensive rundown of the day's most pressing cybersecurity developments, from legislative moves and global threats to technical vulnerabilities and the evolving landscape of data loss prevention (DLP). The centerpiece is an interview with Nitei Milner, CEO of Orion Security, on the urgent problem of data leaking into AI tools and how modern DLP is transforming to keep up. The throughline: encryption mandates and executive action are converging at every level of cybersecurity practice and policy.
Key News Segments & Insights
1. National Defense Authorization Act (NDAA) for 2026 [03:09]
- Summary: President Trump signed a $901 billion NDAA with significant cybersecurity measures.
- Maintains dual-hat leadership structure of US Cyber Command and the NSA, shielding it from Pentagon budget cuts.
- $417 million allocated specifically to Cyber Command for digital operations and HQ maintenance.
- Mandates secure, encrypted mobile devices for Defense Department leaders.
- Orders Pentagon to review foreign-sourced infrastructure and streamline security requirements.
- Notable Quote: "The bill authorizes record defense spending and preserves the long-debated dual-hat leadership of US Cyber Command and the National Security Agency by barring Pentagon funds from weakening the Cyber Command commander's authority." [03:09]
2. Evolving Global Threats [04:44]
- Denmark-Russia: Danish officials accuse Russia, via the groups Z-Pentest and NoName057(16), of cyber-attacks on critical infrastructure (notably water utilities) and election-focused DDoS.
- China-Aligned Group ("Long Nosed Goblin"): Targets government bodies in SE Asia and Japan, abusing Windows tools to harvest browser data and deploy backdoors.
- Kimwolf Android Botnet: Infects 1.8M devices, facilitating enormous DDoS attacks. Uses encrypted DNS to evade detection and belongs to the TurboMirai class.
- Notable Quote: "Officials said the cyber activity is part of a wider influence effort to undermine Western backing of Kyiv, with elections used to attract public attention." [05:25]
3. Vulnerabilities & Industry Response [06:33]
- WatchGuard Firebox Flaw: Urgent patch due to actively exploited remote code execution vulnerability in Firebox firewalls. All versions of Fireware OS at risk; attackers need no authentication.
- Amazon vs. North Korean IT Worker Scam: Over 1,800 suspected operatives blocked. Tactics leverage AI-generated resumes, deepfakes, and hijacked LinkedIn accounts to launder wages back to NK.
- CISA Advisories: Nine new advisories for vulnerabilities across major industrial/OT products (Siemens, Schneider, Advantech, etc.).
4. Regulatory & Legal Moves [08:54]
- Deepfake "Take It Down Act": US Sentencing Commission seeks input on punishments for non-consensual AI-generated intimate imagery: up to two years in prison for imagery depicting adults, three for imagery depicting minors.
- ATM Jackpotting Conspiracy: 54 indicted for a campaign using Ploutus malware, linked to the Venezuelan syndicate Tren de Aragua. Losses: $40.7 million.
Featured Interview: Nitei Milner, CEO of Orion Security
Topic: The crisis (and opportunity) of data leaking into AI tools: How LLMs are reshaping DLP
The Traditional DLP Challenge [14:25]
- Legacy DLP Tools:
- Policy-based, deterministic, with endless tweaking for every use case (credit cards, PHI, etc.)
- High false positives led to alert fatigue, low effectiveness
- "You had to define a policy for every use case... and then what usually would happen is you had to tweak it over and over again because you'd get a lot of false positives... These tools were known for being, to say the least, not very effective for enterprise companies." — Nitei Milner [14:25]
Why Anomaly Detection Failed for People [15:37]
- UEBA (User & Entity Behavior Analytics):
- Works for predictable behavior (malware/processes), but not humans whose handling of sensitive data changes daily.
- Led to even more false positives and didn’t reduce analyst workload.
Enter LLMs: A New DLP Paradigm [16:40]
- Modern DLP with LLMs:
- LLMs empower DLP with "human cognition," bringing analyst-like context to every data exfiltration attempt.
- LLM analyzes: the individual acting, data type, source, and intent—not just rigid rules.
- "What is trying to be done right now... is to use LLMs, basically human cognition, the missing piece of DLP, and... think like a security analyst for every data exploitation." — Nitei Milner [16:40]
- Tangible Impact:
- Reduces false positives dramatically (from ~90% to 5–10%), slashing manual review needs.
- "You can do it with 20% of an FTE's time... a real game changer." [18:31]
- Nuance & Learning:
- LLMs can be trained with internal company feedback, adapting context to avoid repetitive false alerts.
- Human reviewers now focus on edge-cases, not drowning in noise.
- "You can teach it... the LLM model adds it to his context in this specific company. Next time it will use this context to reduce the false positives." — Nitei Milner [19:19]
Privacy Concerns with LLM DLP [20:44]
- Local LLMs:
- Organizations can keep models isolated/on-premises to ensure customer data isn't inadvertently used to train global models.
- Data privacy remains paramount for adoption in sensitive industries.
The AI Double-Edge for DLP [21:13]
- AI as Threat and Solution:
- Ways AI increases DLP risks:
- Employees feeding sensitive data (financial reports, presentations) into public AIs like ChatGPT.
- AI agents (e.g., email bots) potentially leaking data via automation.
- Enhanced internal searchability exposes data to more employees than intended.
- "AI can be looked upon as a threat, but also as a huge enabler for creating 100x better solutions with one tenth of the operational cost..." — Nitei Milner [22:40]
The Coming Generational Shift [23:22]
- The landscape is poised for a generational leap: better solutions, lower costs, but new, complex threats from AI-enabled data exfiltration.
- "Our mission... is to make sure that people can access these benefits as fast as possible. And I think that we're going to have a very interesting few years in front of us when it comes to data security and data protection." — Nitei Milner [23:22]
Riot Games & BIOS Cheating [25:16]
- Issue: Cheaters penetrated Riot's protections via flaws in motherboard BIOS from major OEMs (ASRock, ASUS, Gigabyte, MSI).
- Impact: DMA-based cheats bypassed IOMMU defenses, operating at a hardware level.
- Response: Firmware updates now required before launching some titles, raising the barrier for would-be cheaters and shifting the anti-cheat arms race to the hardware level.
Memorable Quotes & Moments
- "The fix is less glamorous than a ban wave, but more effective. Motherboard makers have released BIOS updates, and Riot's Vanguard Anti Cheat may now insist players install them before launching Valorant." — Dave Bittner [25:47]
- "AI giveth and AI taketh away, right?" — Dave Bittner [23:05]
- "Exactly. 100%. 100%, Dave." — Nitei Milner [23:08]
Timestamps for Key Segments
- NDAA News and Policy Moves: 03:09–04:40
- International Cyber Threats: 04:44–07:00
- Vulnerabilities, CISA Advisories, Amazon/North Korea: 07:00–08:54
- Deepfakes, Legal/Crime Updates: 08:54–13:00
- Main Interview (Nitei Milner, Orion Security): 14:13–23:57
- Riot Games, BIOS Cheating Discovery: 25:16–27:00
Takeaways
- Government action and executive initiatives are rising to meet cyber threats at scale—encryption mandates and DLP reviews are now high-priority.
- Legacy DLP is giving way to context-aware, LLM-driven models that promise to dramatically cut noise and catch real risk—even as AI creates exciting new attack surfaces.
- AI’s future in cybersecurity will be both as solution and adversary. Wise organizations must prepare for both sides of the coin.
- Hardware-level vulnerabilities remain a potent blind spot, as evidenced by Riot Games’ discovery.
