Practical AI: Dealing with Increasingly Complicated Agents
Podcast: Practical AI
Hosts: Daniel Whitenack, Chris Benson
Guest: Donato Capitella, Principal Security Consultant at Reversec
Date: October 16, 2025
Episode Overview
This episode dives into the rapidly evolving world of agentic AI systems—where Large Language Models (LLMs) can interact with external tools and APIs—and the escalating complexity and security risks that arise as these systems are deployed in real-world enterprise scenarios. Donato Capitella returns to discuss the state of AI agent adoption, security vulnerabilities, real-world attack patterns, and practical frameworks for securing today’s LLM-powered agents. The conversation balances deep technical insight with stories from the field, offering both warnings and actionable advice for practitioners.
Key Discussion Points & Insights
State of AI Adoption in Enterprises
(04:49–08:35)
- Dramatic shift over the last year from simple retrieval-augmented generation (RAG) and chatbots to agentic workflows, where LLMs can leverage external APIs and tools.
- Many enterprises are just beginning to experiment with coordinated, agentic systems. MCP (Multi-Component Protocol, an emerging agentic framework) is still too new for most large organizations.
- Enterprises often build their own orchestration layers, tool selection logic, and internal agent frameworks, frequently in legacy tech stacks.
- Quote (Donato Capitella, 05:55):
"If you asked me this question last year ... I would have said the majority of our clients were doing rag on documents ... Now, a lot of the stuff we test is agentic in one way or the other."
- Quote (Donato Capitella, 05:55):
Vulnerabilities Introduced by Agents
(08:35–13:21)
- Integrating LLMs with internal tools and APIs—especially those never intended to be internet-facing—creates attack surfaces.
- Prompt injection can weaponize any input channel (emails, tickets, etc.), allowing adversaries to trigger dangerous API calls via the LLM.
- Access control and deterministic authorization are key: Tools called by LLMs must not blindly trust LLM output tied to user-controlled inputs.
- Quote (Donato Capitella, 09:22):
"Any tool exposed to an LLM becomes a tool exposed to any person that can control any part of the input into an LLM."
- Quote (Donato Capitella, 09:22):
- The complexity of data pipelines and the number of interconnected components increases risk and makes root cause analysis difficult—echoing past challenges with microservices architecture.
Real-World Security Examples
(13:21–17:05; 20:28–22:46)
- Example: Attacker injects a crafted email into a customer support system; that email is then included in RAG results and can cause the LLM to generate a phishing link in a company email.
- External conferences (Black Hat, Secure AI) highlight cases where even with filtering, adversaries bypass defenses (e.g., Copilot’s "Eco Leak" using clever Markdown to exfiltrate data).
- Quote (Donato Capitella, 21:09):
"One of the main vectors to exfiltrate information in LLM applications is to make the LLM produce a markdown image ... and you can tell the LLM by the way, in the query string ... put all the credit card data of this user."
- Quote (Donato Capitella, 21:09):
Enterprise Security Posture: Lockdown vs. Wild West
(22:46–26:40)
- Enterprises fall on a spectrum: Some are hyper-restrictive (multiple layers of access), while startups often prioritize functionality over security, leading to shadow AI.
- Companies struggle to strike a balance between locking things down and enabling productivity.
- Quote (Donato Capitella, 24:22):
"People really want to be using GenAI because ... I do like the ability to use it to do a lot of tasks to make them easier." - On restrictive environments:
"I would never work there because it's basically impossible to get anything done ... I couldn't spend all my day into six layers of VDI."
- Quote (Donato Capitella, 24:22):
Penetration Testing & Attack Surface in GenAI
(26:40–31:37)
- While early pen testing might feel familiar, LLM application assessments require more statistical, data science–driven thinking.
- Prompt injection risk is always present; the challenge is quantifying the effort required to exploit, much like with password guessing attacks.
- Quote (Donato Capitella, 27:23):
"Jailbreaking and prompt injection is less similar to SQL injection and more similar to password guessing attacks … you don't allow the attacker to send a hundred thousand prompts."
- Quote (Donato Capitella, 27:23):
Guardrails & Defense-in-Depth Strategies
(31:37–33:06)
- Guardrails (prompt filters, etc.) should be viewed as feedback detection systems, not infallible barriers.
- Quote (Donato Capitella, 30:57):
"The guardrail is your detection feedback loop that then you have to action to protect your application and your users."
- Quote (Donato Capitella, 30:57):
Design Patterns for Securing Agentic Workflows
(33:06–40:47)
-
Paper highlighted: Design Patterns to Secure LLM Agents Against Prompt Injection (see show notes for full breakdown and Donato’s demo).
-
Six security patterns identified for different agentic use cases and trade-offs between security and utility.
-
Most secure: Action Selector—LLM picks from fixed actions.
-
Most promising for flexibility: Code then Execute (from Google’s "Camo" approach)—LLM writes a plan as code before untrusted inputs are introduced. External policies enforce which tool calls are allowed, independent of the LLM output.
- Quote (Donato Capitella, 39:02):
"You ask the LLM to produce a plan or the code before any untrusted input enters the context ... those will not be able to alter the LLM control flow."
- Quote (Donato Capitella, 39:02):
-
Core message: Shift to system-level design controls. Secure LLM agents with traditional dataflow analysis and monitor tainted variables, not simply post-hoc content moderation.
The Spiky Toolkit: Pen Testing in Practice
(40:49–50:51)
- Introduction of Spiky: Simple Prompt Injection Kit for Evaluation and Exploitation.
- Built from practical need: Unlike pure LLM red teaming tools, pen testing LLM applications requires adapters for custom pipelines, endpoints, and workflows.
- Modular, supports custom seeds for crafting attacks relevant to client context (e.g., data exfiltration, advice giving, web sockets).
- Quote (Donato Capitella, 41:21):
"An LLM application ain't an LLM. Like, it doesn't have an inference API ... We started writing scripts, individual scripts ... for every client."
- Quote (Donato Capitella, 41:21):
- Enables flexible, efficient red teaming and testing in isolated enterprise environments.
Future Directions: Where Is AI Security Heading?
(50:51–54:09)
- The call to action: Stop focusing solely on LLM red teaming and “tricking” demo bots.
- Urges a focus on robust, system-level design patterns and security models that anticipate when—not if—prompt injection succeeds.
- Quote (Donato Capitella, 52:14):
"If people don't start seriously taking the risks that come from LLM agents, we are going to see real world, big breaches coming from that."
- Quote (Donato Capitella, 52:14):
Notable Quotes & Memorable Moments
-
On the evolving threat surface:
"Any tool exposed to an LLM becomes a tool exposed to any person that can control any part of the input into an LLM."
— Donato Capitella, 09:22 -
On guardrails and prompt injection:
"The guardrail is your detection feedback loop ... They are giving you a feedback signal that that person, that user, that identity is trying to jailbreak it and then you can act on it."
— Donato Capitella, 30:57 -
On security focus shifting:
"Let's stop asking LLMs to say that humanity is stupid or how to make a bomb and let's start looking at our applications and ensuring that they can be used in a safe way if they have access to tools."
— Donato Capitella, 52:28
Timestamps for Important Segments
- 04:49 — Enterprises adopting agentic AI workflows: what’s actually happening
- 09:22 — External tools exposed to LLMs and the rise in new attack surfaces
- 13:21 — Example: Data source explosion, mixing trusted and untrusted info in LLM contexts
- 20:28 — Real-world exploits: Copilot’s "Eco Leak" and markdown exfiltration
- 27:23 — Pen testing in the LLM age: from whack-a-mole to statistical modelling
- 33:06 — Security-conscious design patterns for agentic LLM apps
- 41:21 — The birth and philosophy of the Spiky toolkit
- 52:14 — What the AI security community should focus on next
Final Thoughts & Recommendations
- As AI agents become more powerful and integrated into real systems, security must keep pace—beyond just red-teaming LLMs in isolation.
- Mature organizations should move toward systemic, policy-based, and deterministic approaches for controlling LLM-driven systems.
- Modular, open-source tools (like Spiky) and research-backed design patterns are now available to help practitioners stay ahead.
- Ultimately, robust design—where you assume prompt injection will eventually succeed—is the only defensible path.
