CyberWire Daily – "When Hackers Go BIG in Cyber Espionage"
Date: October 16, 2025
Host: Maria Varmazes (in for Dave Bittner)
Featured Guest: Manoj Nair, Chief Innovation Officer, Snyk
Episode Overview
This episode focuses on large-scale cyber espionage and the rapidly evolving risks in AI security. The news briefing covers significant breaches, ransomware activity, phishing campaigns, and critical vulnerabilities affecting major organizations. The centerpiece is an in-depth interview with Snyk’s Manoj Nair, who explores AI adoption, emerging security risks in AI, and actionable guidance for practitioners.
Key Cybersecurity News and Analysis
F5 Networks’ Long-Term Breach and Nation-State Espionage
[02:10]
- Incident Details: F5, a leading application security and delivery company, disclosed that state-sponsored hackers gained persistent access to its networks, stealing portions of BIG-IP source code and information about undisclosed vulnerabilities.
- Duration: Hackers were inside F5’s networks for at least 12 months, with sources attributing the attack to China.
- Risks:
- Attackers may now have a technical advantage to find and exploit new vulnerabilities in F5 devices and software.
- The U.S. CISA has ordered federal agencies to inventory and update their F5 devices by October 22 (a version-check sketch follows this section).
- Quote:
- "The threat actor's access to F5's proprietary source code could provide that threat actor with a technical advantage to exploit F5 devices and software." (Maria Varmazes, [03:15])
PowerSchool Breach and Sentencing
[05:00]
- Attack: Matthew Lane, 19, hacked education software provider PowerSchool, stealing information on over 70 million individuals.
- Outcome: Sentenced to four years in prison, ordered to pay $14 million restitution, and fined $25,000.
Cisco Firewall Vulnerabilities and Senate Scrutiny
[06:00]
- Issue: Senator Bill Cassidy demands answers from Cisco over firewall flaws exploited in the ArcaneDoor espionage campaign.
- Action: Agencies ordered to patch, audit, and retire affected devices within 24 hours.
Phishing Campaigns Targeting Password Managers and Job Seekers
[07:00-08:30]
- Password Manager Impersonation: Attackers send fake breach alerts impersonating LastPass and Bitwarden, tricking users into installing remote monitoring and management (RMM) software that hands attackers full control of the machine.
- "To be clear, LastPass has not been hacked. This is an attempt on the part of a malicious actor to draw attention and generate urgency…" (LastPass statement, [07:55])
- Credential Phishing: New scams mimic Google Careers pages, using realistic lookalike domains and dynamic tactics to evade traditional detection. Recommendations include domain monitoring (a starter sketch follows) and robust MFA.
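On the domain-monitoring recommendation, a practical starting point is generating the lookalike permutations of your own brand domains that phishers tend to register, then comparing them against new registrations or certificate transparency feeds. The mutation rules and example domain below are illustrative assumptions, not from the episode; dedicated tools such as dnstwist implement many more strategies.

```python
# Minimal sketch: generate common typosquat permutations of a brand
# domain (character omission, duplication, adjacent swap, and a few
# homoglyphs) for comparison against newly registered domains.

HOMOGLYPHS = {"o": "0", "l": "1", "i": "1", "e": "3", "a": "4"}

def typosquat_candidates(domain: str) -> set[str]:
    name, _, tld = domain.partition(".")
    candidates = set()
    for i in range(len(name)):
        # Omission: "gogle"
        candidates.add(name[:i] + name[i + 1:])
        # Duplication: "googgle"
        candidates.add(name[:i] + name[i] + name[i:])
        # Homoglyph substitution: "g00gle"
        if name[i] in HOMOGLYPHS:
            candidates.add(name[:i] + HOMOGLYPHS[name[i]] + name[i + 1:])
        # Adjacent swap: "googel"
        if i < len(name) - 1:
            candidates.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    candidates.discard(name)  # swaps of identical characters reproduce the original
    return {f"{c}.{tld}" for c in candidates if c}

print(sorted(typosquat_candidates("google.com"))[:10])  # example brand domain
```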
Exposed Data and Ransomware Update
[09:00]
- Data Exposure: An unprotected Elasticsearch cluster leaked 6 billion records from previous breaches—underscoring risk from aggregated data exposures.
- Ransomware - Qilin Group:
- Expanded attacks on organizations across France, Italy, and the U.S.
- Known for double extortion: encrypting data and threatening leaks of proprietary, employee, and customer information.
- "Pressure tactics are intensifying with shorter deadlines and more aggressive leak strategies." (Maria Varmazes, [09:32])
Featured Interview: Manoj Nair (Snyk) on AI Security
State of AI Security
[11:55–12:50]
- Main Point: The field is in its "early innings" – understanding AI security risks is only beginning, and responses are still catching up to rapid adoption.
- Quote:
- "Security is usually following the actual technology innovation... people are starting to understand the risks, but kind of early in understanding what do you do about the risks." (Manoj Nair, [12:15])
Comparing AI Security Adoption to the Cloud Era
[13:00–13:58]
- Security professionals are more proactive compared to earlier tech shifts:
- "For the cloud era, it took several years after cloud became mainstream… I don't see that with AI. I see a leaning in of the security teams… to enable the business, understand the risks, understand that they cannot be just saying no." (Manoj Nair, [13:20])
Concrete Risks in AI Adoption
[14:12–16:55]
- Main Use Cases:
- Widespread usage of chat, video, voice AI in personal life; rapid adoption of generative AI tools in coding and business.
- Tools from companies like Cursor, Windsurf, Anthropic, and OpenAI are rapidly changing the developer landscape.
- Security Risks:
- Hallucinations in LLMs: Especially dangerous in code generation; hallucinated output can introduce vulnerabilities such as SQL injection, pull in nonexistent (potentially malicious) packages, and open up supply chain risk:
- "Hallucinations are really bad for security. And so… that security risk is actually profound in that it's much higher than human produced code." (Manoj Nair, [15:38])
- Typosquatting/Package Risks: LLMs can recommend or use fake open-source packages, accidentally propagating malware (a dependency-vetting sketch follows this list).
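One lightweight guardrail against hallucinated or typo-squatted dependencies is vetting every package an AI assistant suggests before installing it. The sketch below uses PyPI's public JSON API; the release-count threshold and the requirements-file workflow are illustrative assumptions.

```python
# Minimal sketch: before installing AI-suggested dependencies, confirm
# each package exists on PyPI and has more than a handful of releases.
# Brand-new, single-release packages deserve manual review, since
# attackers register hallucinated names ("slopsquatting").
import requests

def vet_package(name: str, min_releases: int = 3) -> str:
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        return "MISSING: does not exist on PyPI (possible hallucination)"
    resp.raise_for_status()
    releases = resp.json().get("releases", {})
    if len(releases) < min_releases:
        return f"SUSPECT: only {len(releases)} release(s), review manually"
    return "OK"

# Example: vet a requirements file line by line (file name is illustrative).
with open("requirements.txt") as fh:
    for line in fh:
        pkg = line.split("==")[0].strip()
        if pkg and not pkg.startswith("#"):
            print(f"{pkg}: {vet_package(pkg)}")
```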
Productivity Returns & Guardrails for Developers
[17:22–19:17]
- Productivity Gains: AI boosts productivity, especially for "citizen developers," enabling new contributors to the development process.
- Guardrails Required:
- "That doesn't mean I'm going to just deploy that in production without a lot of checking and security guardrails. So the pain moves somewhere else." (Manoj Nair, [18:15])
- Security cannot be an afterthought—guardrails (like AI tools for code review and testing) must evolve alongside productivity tools.
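As a concrete, deliberately simplified example of such a guardrail, a CI job can flag risky patterns (like string-built SQL) in changed files before merge. The regexes below are illustrative assumptions; a production pipeline would run a full SAST scanner such as the tools Snyk and others ship rather than pattern matching.

```python
# Minimal sketch of a pre-merge guardrail: flag string-built SQL in
# changed Python files so a human reviews it before merge.
import re
import subprocess
import sys
from pathlib import Path

RISKY = [
    (re.compile(r"""execute\(\s*f?["'].*(%s|\{)"""), "possible string-built SQL"),
    (re.compile(r"execute\([^)]*\+"), "SQL built with + concatenation"),
]

# List files changed relative to the main branch (branch name is an assumption).
diff = subprocess.run(
    ["git", "diff", "--name-only", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout

findings = []
for name in diff.splitlines():
    path = Path(name)
    if path.suffix != ".py" or not path.exists():
        continue  # skip non-Python and deleted files
    for lineno, line in enumerate(path.read_text().splitlines(), 1):
        for pattern, message in RISKY:
            if pattern.search(line):
                findings.append(f"{path}:{lineno}: {message}")

if findings:
    print("\n".join(findings))
    sys.exit(1)  # fail the CI job so a human reviews the change
print("no risky SQL patterns found in changed files")
```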
Advice for CISOs/Security Leaders
[19:28–21:45]
- Start with Visibility:
- Do you know what gen AI/LLM apps and models your developers are using?
- Shadow AI is rampant; development teams often deploy models that security never sees (an inventory sketch follows this list).
- Manoj recounts a story: "You ask the question, how many models do you have in production? One goes, 'We don't deploy AI.' ... the other build site team goes, 'Thousands.'" (Manoj Nair, [20:10])
- Collaboration is Essential:
- Security cannot simply say no; it must enable business and innovation.
- Governance, tool approval, and continuous monitoring are crucial.
- "Finding tooling that can move at the pace of AI is key… AI is really code and it's being built and it's being downloaded." (Manoj Nair, [21:11])
- Continuous Learning:
- Encourage participation in industry events, e.g., the first AI Security Summit (Oct. 22–23, San Francisco, aisecuritysummit.com).
- Develop roles for "AI security engineers" to work in parallel with AI engineers.
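As a starting point on the visibility question above, the sketch below surfaces "shadow AI" by scanning checked-out repositories for well-known LLM SDKs in dependency manifests. The SDK list and repository layout are assumptions; runtime and network telemetry would be needed for fuller coverage.

```python
# Minimal sketch: surface "shadow AI" by scanning repos for well-known
# LLM SDKs in Python and JavaScript manifests. The SDK list and the
# directory layout are assumptions; extend both for real use.
from pathlib import Path

LLM_SDKS = {"openai", "anthropic", "langchain", "transformers",
            "llama-index", "@anthropic-ai/sdk", "ollama"}
MANIFESTS = ("requirements.txt", "pyproject.toml", "package.json")

def scan(repos_root: str) -> dict[str, set[str]]:
    hits: dict[str, set[str]] = {}
    for manifest in MANIFESTS:
        for path in Path(repos_root).rglob(manifest):
            text = path.read_text(errors="ignore").lower()
            found = {sdk for sdk in LLM_SDKS if sdk in text}
            if found:
                hits.setdefault(str(path.parent), set()).update(found)
    return hits

for repo, sdks in scan("/srv/git-mirrors").items():  # path is a placeholder
    print(f"{repo}: {', '.join(sorted(sdks))}")
```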
Notable Quotes & Insights
- On the difference with AI security adoption:
- "I see a leaning in of the security teams wanting to know what to do different, wanting to be close to the business… that for me is a pretty marked differentiation." (Manoj Nair, [13:30])
- On LLM risks:
- "Especially the junior developer tends to think that anything that the machine produces is accurate. So that is a huge set of capable risks emerging from that." (Manoj Nair, [16:08])
- On security productivity trade-offs:
- "The pain moves somewhere else... to truly get the full productivity benefit... you do need to have the proper expansion of both the technology guardrails... AI that can secure AI might be a simple way of thinking about it." (Manoj Nair, [18:22])
- On addressing AI risks:
- "Find who are the providers who are being very innovative... The problem is AI is really code and it's being built and it's being downloaded." (Manoj Nair, [21:11])
Additional Segment: AI Bias in Facial Recognition
[24:15]
- Issue: Wired reporting highlights how facial recognition systems fail people with facial differences, deepening inequities because such faces are rarely represented when these systems are built and tested.
- Expert Warning:
- "It's a solid reminder that when AI systems fail to include everyone, they can deepen longstanding inequities and isolation." (Maria Varmazes, [25:03])
Takeaways
- Attacks are growing in sophistication and scope, often exploiting supply chains and trusted platforms.
- AI development brings both large productivity gains and new security challenges; balancing enablement and risk is essential.
- Traditional security tools aren’t enough—organizations must invest in visibility, governance, and continuous, AI-native controls.
- Diversity, inclusion, and transparency remain critical not only in technical controls but in the very design of new AI systems.
- Continuous education, community collaboration, and adopting emerging security roles and practices are key to staying ahead.
Timestamps
- [02:10] – F5 breach and nation-state espionage
- [05:00] – PowerSchool breach case and sentencing
- [06:00] – Cisco vulnerabilities and Senate inquiry
- [07:00–08:30] – Phishing campaigns: password managers and Google Careers
- [09:00] – Data exposure: Elasticsearch breach and ransomware update
- [11:42] – Manoj Nair interview introduction
- [11:55–19:17] – Risks and productivity issues in AI security
- [19:28–21:45] – Strategic advice for security leaders
- [24:15] – AI bias in facial recognition and inclusion failures
This episode provided an expert snapshot of evolving cyber threats and practical, forward-thinking insights for navigating the complexities of AI security.
