Threat Vector by Palo Alto Networks
Episode: Inside AI Runtime Defense
Date: September 10, 2025
Host: David Moulton (Palo Alto Networks Unit 42)
Guest: Spencer Thillman, Principal Product Manager (AI Runtime Security)
Episode Overview
This episode delves into the rapidly evolving field of AI security, exploring how organizations can safeguard both employee usage of generative AI tools and the internally developed AI applications, models, and agents that are transforming the enterprise landscape. Spencer Thillman shares his unique perspective at the intersection of technology policy and hands-on AI defense, offering practical frameworks, real-world threats, and actionable guidance for security teams facing the AI gold rush.
Key Discussion Points & Insights
1. Spencer Thillman’s Journey to AI Security (02:40–03:44)
- Background: Spencer comes from an academic and policy background, researching AI policy with top UK and EU institutions before pivoting to product leadership at Palo Alto Networks.
- Core Belief: "Every policy objective eventually becomes a security problem." (03:44, Thillman)
- Relevance: The "mental models" and risk assessments from earlier AI policy research are still crucial for today’s generative AI security challenges.
2. The Two Pillars of Enterprise AI Security (03:55–04:53)
- Pillar 1: Securing employee use of generative AI SaaS apps (e.g., ChatGPT, Grammarly).
- Pillar 2: Protecting internal AI-driven apps, models, and agents running in cloud environments.
- Quote: "You can break enterprise AI security down into basically two pillars." (03:55, Thillman)
- Takeaway: Organizations must have visibility and control over both external SaaS usage and internal AI assets.
3. The Scale and Growth of AI SaaS Usage (05:13–07:33)
- Explosive Growth: Number of catalogued enterprise AI applications leapt from 800 (Dec 2024) to 2,800 (May 2025)—a 250% increase.
- Workforce Impact: Over 50% of enterprise employees use generative AI SaaS daily; 10–30% of what’s sent is sensitive (IP, source code, patient data, etc.).
- Quote: "It's likely the biggest challenge in cybersecurity today..." (05:13, Thillman)
- Security Concern: Many apps may fine-tune on user inputs, creating risks of data leakage.
4. Keeping Pace with the Rapid AI Evolution (08:31–11:16)
- Hands-on Learning: Primary research—actively testing and red-teaming AI models—is the best way to understand emerging threats.
- Best Practice: Security professionals should use and probe AI tech firsthand rather than relying solely on secondary sources.
- Quote: "Start by doing and then by reading." (08:31, Thillman)
- North Star: Security should enable faster, safer AI development rather than impede it.
5. Practical Governance & End-User Coaching (11:17–14:10)
- Problem: Many organizations' AI governance policies lack enforceability—guidelines alone are insufficient.
- Solution: Back governance with real enforcement—track, monitor, and correct deviations (e.g., using browser agents to warn users in real time).
- Education: Most risky behaviors stem from lack of awareness, not malice.
- Quote: "People are just trying to get their job done... they don’t even know that a chatbot is running kind of like outside of the network boundary." (13:16, Thillman)
6. Five Pillars of Internal AI Security for Enterprises (14:31–16:33)
- Model Scanning: Check model files for threats (malware, insecure deserialization) before deployment.
- Posture Management: Ensure AI apps/agents/models don’t have excessive permissions.
- AI Red Teaming: Simulate attacks (e.g., prompt injections) to find vulnerabilities.
- Runtime Security: Monitor and filter inputs/outputs for prompt injection, data leaks, malicious payloads, etc.
- Agent Security: All of the above, plus the unique risks introduced by AI agents’ autonomy and tool access.
- Quote: "Agent security is primarily broken down into runtime security and posture... a superset of large language model security." (16:33, Thillman)
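The model-scanning pillar can be made concrete with a small sketch. Many model formats (notably Python pickles) can execute code when loaded, so a scanner can walk the pickle opcode stream without executing it and flag imports of dangerous callables. This is an illustrative heuristic only, not the product's scanner, and the denylist below is a hypothetical example:

```python
import pickletools

# Hypothetical denylist: imports whose presence in a pickle usually
# indicates code execution on load (insecure deserialization).
SUSPICIOUS_GLOBALS = {
    ("os", "system"),
    ("builtins", "exec"),
    ("builtins", "eval"),
    ("subprocess", "Popen"),
}

def scan_pickle_bytes(data: bytes) -> list[str]:
    """Flag GLOBAL opcodes that import callables from the denylist.

    pickletools.genops parses the opcode stream without executing it,
    so this inspection is safe even on a malicious file.
    """
    findings = []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL" and arg:
            module, _, name = str(arg).partition(" ")
            if (module, name) in SUSPICIOUS_GLOBALS:
                findings.append(f"{module}.{name}")
    return findings
```

A payload that calls `os.system` on load would be flagged before deployment, while a pickle containing only plain data would pass. Production scanners cover more formats and opcodes (e.g. STACK_GLOBAL) than this sketch.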
7. The Distinction Between Chatbots and Agents (16:40–19:17)
- Chatbots: Passive Q&A interfaces—users drive interaction.
- Agents: Autonomous applications that reason, access tools, and take actions in pursuit of user goals.
- Risks: Agents need autonomy, memory, and API interaction—each introduces new threat surfaces such as tool abuse and "cascading hallucinations."
- Example: Overly permissive agents (e.g., those that could delete Salesforce data) are especially dangerous.
- Quote: "It's that autonomy that makes agents profoundly powerful." (18:23, Thillman)
8. Real-World AI Attack Scenarios and Defenses (19:39–23:47)
a) Prompt Injection (19:39)
- Attackers use crafted prompts to override guardrails, access sensitive data, or manipulate models.
- Multilingual threat—attacks may work in languages other than English.
- Quote: "We detect many types of prompt injections—28 across eight languages..." (20:18, Thillman)
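As a toy illustration of the detection idea (not the 28-type, eight-language detector mentioned in the quote), a first-pass filter can pattern-match common injection phrasings before a prompt reaches the model; real systems layer multilingual ML classifiers on top of heuristics like these:

```python
import re

# Illustrative-only phrase patterns; production detectors cover many
# injection types across multiple languages, not just English regexes.
INJECTION_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"ignore (all )?(previous|prior) instructions",
        r"you are now (in )?developer mode",
        r"reveal (your )?(system )?prompt",
    )
]

def flag_prompt(text: str) -> bool:
    """Return True if the prompt matches a known injection phrasing."""
    return any(p.search(text) for p in INJECTION_PATTERNS)
```

A flagged prompt would be blocked or routed for deeper inspection rather than passed to the model.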
b) Sensitive Data Leakage (21:19)
- Users inadvertently share PII, credit card numbers, etc.; models may memorize and leak this data cross-user.
- Countermeasure: Scan model inputs and outputs in real-time for sensitive data patterns.
c) Brand and Data Protection (21:19–23:47)
- Prevent third-party chatbots from leaking organizational data.
- Implement DLP (Data Loss Prevention) and enforce model isolation where possible.
- Metaphor: "Wrapping the model in a kind of halo—to ensure good things go in and only good things come out." (23:37, Thillman)
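The input/output scanning countermeasure from point (b) can be sketched as a simple pattern-based DLP pass over prompts and responses. The patterns here are illustrative stand-ins for validated production detectors (which would add, e.g., Luhn checks for card numbers):

```python
import re

# Illustrative detectors only; real DLP engines use validated,
# context-aware patterns rather than bare regexes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_for_sensitive(text: str) -> dict[str, list[str]]:
    """Return only the detector names that matched, with their hits."""
    hits = {name: pat.findall(text) for name, pat in PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

def redact(text: str) -> str:
    """Mask every match so it never leaves (or enters) the model."""
    for pat in PATTERNS.values():
        text = pat.sub("[REDACTED]", text)
    return text
```

Running both directions, on the prompt before inference and on the response before delivery, is what puts the "halo" around the model.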
9. How AI Changes Cloud Application Architecture (23:47–25:27)
- Similarity: Underlying cloud security basics still apply (workloads, VMs, containers).
- New Risks: Novel bidirectional exchanges (inputs/outputs with models) and data poisoning require fresh playbooks.
- Strategy: Extend known cloud security principles, and add new controls for model- and inference-specific risks.
10. Emerging and Underestimated Threats (25:36–29:19)
- Unwanted Topics: Chatbots must be constrained (e.g., not providing political or financial advice, or recommending competitors).
- Internet-Facing Agents: Risk of exposure to extremist, adult, or off-brand content—must strictly control agents’ access.
- Toxic Content and Malware: Prevent AI models from emitting offensive content or code used for harm.
- Quote: "With agents... give it a circle of freedom... just big enough to achieve its goal, but not larger." (27:55, Thillman)
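Thillman's "circle of freedom" can be sketched as a tool-gating layer: the agent runtime dispatches only calls on an allowlist scoped to the agent's goal, so a destructive tool is unreachable even if the model asks for it. The tool names and registry below are hypothetical:

```python
def search_orders(customer_id: str) -> list[str]:
    # Stand-in for a real read-only lookup.
    return [f"order-001 for {customer_id}"]

def delete_records(table: str) -> None:
    # Destructive tool: registered, but outside this agent's scope.
    raise RuntimeError("destructive action executed")

TOOLS = {"search_orders": search_orders, "delete_records": delete_records}

# The circle of freedom: just big enough for the goal, no larger.
ALLOWED = {"search_orders"}

def call_tool(name: str, **kwargs):
    """Dispatch a tool call, denying anything outside the allowlist."""
    if name not in ALLOWED:
        raise PermissionError(f"tool '{name}' is outside this agent's scope")
    return TOOLS[name](**kwargs)
```

The key design choice is that the gate lives in the runtime, not in the prompt, so no amount of prompt injection can widen the circle.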
11. The Importance of Red Teaming for LLMs/Agents (29:19–30:00)
- Even public chatbots can be coaxed into generating malware if attackers can discern a path around guardrails.
- The threat curve is abrupt—once bypassed, escalation is rapid.
- Quote: "Once they were able to figure out the pattern to get past the guardrails, it was a quick, slippery, downhill slide..." (29:41, Moulton)
Notable Quotes & Memorable Moments
- On Hands-on Security: "What I try to do... is that nothing can ever replace primary research." (08:58, Thillman)
- On Governance: "If you have a governance process, make sure that it’s backed by technology that can actually enforce and track it." (14:01, Thillman)
- On Agent Autonomy and Risk: "An agent shouldn’t be able to go drop tables in Salesforce. Right? Because the impact of that could be destructive." (18:16, Thillman)
- On Topic Control for LLMs: "I want to ensure that that chatbot only speaks about shoes… if someone coerces it into speaking about politics... the chatbot doesn’t go there. It’s a very hard problem." (25:36, Thillman)
- On Security-Enabling Innovation: "Our goal is for security to not feel like a weighted blanket... Ideally, with great security tooling, we’ll enable our customers to actually ship better AI apps and agents faster." (09:40, Thillman)
Timeline of Key Segments
- 00:26–01:10 – Introductions; episode theme and importance of AI security.
- 02:40–03:44 – Spencer's background and philosophy.
- 03:55–04:53 – Core pillars of enterprise AI security.
- 05:13–07:33 – Statistical scope of enterprise AI SaaS usage and risks.
- 08:31–11:16 – Staying current; hands-on security mindset.
- 11:17–14:10 – Governance, enforcement, and end-user coaching.
- 14:31–16:33 – The 5 pillars framework for AI security inside enterprises.
- 16:40–19:17 – Differentiating chatbots and agents; risks of agent autonomy.
- 19:39–23:47 – Prompt injection and sensitive data in practice.
- 23:47–25:27 – Cloud architecture evolution with AI.
- 25:36–29:19 – Additional risks: topics, Internet interaction, toxic content, malware.
- 29:41–30:18 – Real-world LLM red teaming results.
- 30:18–30:34 – Episode wrap-up and key takeaways.
Final Thoughts
This episode paints a comprehensive picture of where AI security stands—and where it’s going. The key message: Enterprises need to urgently and proactively adapt their security strategies to handle both external SaaS risks and the sprawling, fast-evolving frontier of internally developed AI apps and agents. Hands-on experimentation, enforceable governance, and holistic risk control frameworks are essential for future-proof security in the AI-augmented enterprise.
