CyberWire Daily – Research Saturday
Episode: Data leak without a click
Date: September 13, 2025
Host: Dave Bittner (N2K Networks)
Guest: Amanda Russo, Principal AI Security Researcher, Stryker
Theme: Exploring how agentic AI systems can enable zero-click exfiltration of data through prompt injections—without user interaction.
Episode Overview
This episode of Research Saturday investigates cutting-edge risks in the intersection of AI agents and cybersecurity. Amanda Russo shares insights from Stryker's research on "silent exfiltration": how malicious actors can leverage prompt injection attacks—embedded in something as innocuous as an email—to trigger AI agents into leaking sensitive data from services like Google Drive or email, all without direct user action. The discussion covers the unique dangers of agentic AI, the scale of potential attacks, recommendations for mitigation, and the future landscape of agentic AI security.
Key Discussion Points & Insights
1. Understanding Agentic AI and "Silent Exfiltration"
- Agentic AI refers to AI agents with the autonomy to perform tasks across multiple applications (e.g., accessing web, emails, file systems).
- "Silent exfiltration" describes attacks where AI agents, when exposed to cleverly crafted content (like a prompt injection hidden in an email), can process and leak data without any clicks or explicit user interaction.
- Amanda Russo [03:16]: "This is not something that the agent intended to do. This is kind of part of that excessive agency... It'll automatically look at the content there and try to run Python's code, try to use the prompt."
2. Zero-Click Attack Vectors
- The zero-click aspect is especially alarming: simply asking an AI agent to "summarize all my email" can activate a hidden malicious prompt, causing inadvertent data leaks behind the scenes.
- Amanda Russo [04:26]: "Even though you weren't intending to access that malicious email... it'll automatically exfiltrate either all your email, all your Google Drive documents out to a C2 or collection server without your knowledge. It'll all do it on the back end where the agent is processing."
3. The Security Gaps of Agentic AI
- Traditional cybersecurity relies on boundaries or detection rules (like firewalls or antivirus), but agentic AI operates via "coercion," parsing failures, and can be deceived through creative prompt engineering that bypasses those mechanisms.
- Agentic AIs can span multiple connected services, making them uniquely vulnerable.
- Amanda Russo [05:18]: "There's no rules, there's no rule detection. Like there is like a WAF or something. It's all about coercion... relying on its parsing failures, trying to deceive it..."
4. Email as a Powerful Attack Vector
- Emails are unvetted input, arriving from many sources and regularly processed by AI agents for functions like summarization.
- Existing email filters are not designed to spot prompt injection attacks, increasing risk.
- Amanda Russo [06:03]: "You are at the mercy of the email filtering for that particular provider... I don't think we're at the point yet where we have email rules that filter prompt injections. Not yet."
5. Potential Scale and Scope of Attacks
- An attack’s reach depends on what capabilities the AI agent has—web search, file access, code execution, sending/deleting emails, etc.
- Security hardening and permission boundaries are still underdeveloped.
- Amanda Russo [07:06]: "You know, is it possible to do like a Python interpreter breakout through just like a prompt injection from an email and then you could own the whole backend system..."
6. Real-World Testing and Non-Speculative Risk
- Stryker’s findings are drawn from active red-teaming and lab experiments, not just theoretical risk.
- Amanda Russo [08:18]: "A lot of this comes from real life attacks that we've administered ourselves."
7. Protective Measures and Recommendations
- Controls must be layered:
- Guardrails around agent actions and logs of agent activity.
- Hardened tool environments (e.g., code interpreters in sandboxes).
- Configurable limits on agent capabilities—request, access, deletion rights, etc.
- Logging and auditability for incident response.
- Amanda Russo [08:45]: "Have you really taken a look at hardening the tools that it's using? Are you putting your code interpreter into a properly hardened sandbox or container?"
8. Future of Agentic AI Security
- The field is nascent—likened to the "wild, wild west."
- The rapid pace of agentic AI adoption means new threat vectors are emerging faster than security frameworks.
- Amanda Russo [14:03]: "I feel like it's the wild, wild west, to be honest. It's a lot of traditional security concerns in this type of infrastructure."
9. Adoption by Risk-Sensitive Users
- High-value targets (like executives) are likely early adopters—making these security gaps particularly worrisome.
- Amanda Russo [15:52]: "Yes, yes, it does save you time... but at the same time, you know, with speed, there's also going to be a cost for security."
10. Current State of Protective Tools and Industry
- There's no universal solution yet; security tooling is fragmented between traditional cybersecurity practices and new AI-specific protections (prompt injection filters, agent guardrails).
- Larger platforms may build robust guardrails, but for smaller, do-it-yourself solutions, users are left exposed.
- Amanda Russo [17:57]: "When you get into the smaller, like I'm going to roll my own Ollama GPT, OSS or Llama 2 or whatever... you don't have that capability to put in guardrails. Like what do you do then?"
11. Takeaway Message for Security & AI Practitioners
- Security must adapt to multimodal, multi-turn attacks—not just simple prompt injections.
- Defensive layers must cover every stage and capability of an AI agent.
- Amanda Russo [18:43]: "It's not just prompt injections anymore. It's going to be multimodal multi turn attacks... you just have to put a layer on every part of it. A layer of security on every part."
Notable Quotes & Memorable Moments
- Amanda Russo, on the essence of the new risk:
"It's now like conversation. Social engineering, traditional social engineering with an AI agent..." [11:45] - On the growth of this new security challenge:
"This is the era of where agentic security is just blooming." [15:52] - On staging defenses:
"I think it's going to be layered. It's going to be a mix of all those things. You're going to have to have some type of logging. If you're an incident responder, how are you going to figure out how the exfiltration happened?" [13:08] - On the research's real-world grounding:
"A lot of this comes from real life attacks that we've administered ourselves." [08:18] - Industry readiness:
"I don't know if there's a true end all solution yet because it's just starting." [16:24] - On continuous discovery in the field:
"It's just an unexplored field. And I feel like there can be a lot of new research and new discoveries in this area." [15:03]
Timestamps for Key Segments
- 02:10 – Overview of agentic AI and multimodal prompt injection risks
- 03:11 – Defining "silent exfiltration"
- 04:26 – Explaining the "zero-click" attack scenario
- 06:03 – Email as a particularly vulnerable attack vector
- 07:06 – The scale of possible attacks based on agent capabilities
- 08:18 – Real-world testing vs. speculation in research
- 08:45 – Recommendations for defensive controls
- 11:45 – Social engineering in the age of agentic AI
- 14:03 – The “wild west” state of agentic AI security
- 15:52 – Target profile: high-level executives as early adopters
- 16:24 – Available tools and the industry’s state of readiness
- 18:43 – Final takeaway: the need for multilayered, persistent security
Further Resources
- Stryker AI: More research and contact information at straiker.ai
- Research focus: "Silent Exfiltration: Zero click Agentic AI hack that can leak your Google Drive with one email"
- Link: Provided in the show notes
Summary prepared for CyberWire Daily by an expert podcast summarizer.
![Data leak without a click. [Research Saturday] - CyberWire Daily cover](/_next/image?url=https%3A%2F%2Fmegaphone.imgix.net%2Fpodcasts%2F00fa13ea-8ff3-11f0-87c6-b715183ba2f5%2Fimage%2F95b72a93c2ffaf8ff900d662a9bd3735.png%3Fixlib%3Drails-4.3.1%26max-w%3D3000%26max-h%3D3000%26fit%3Dcrop%26auto%3Dformat%2Ccompress&w=1200&q=75)