Threat Vector by Palo Alto Networks
Episode: Human in the Loop for AI Security
Date: September 18, 2025
Host: David Moulton, Palo Alto Networks
Guest: Brett Kinsella, General Manager, Fuel IX at Telus Digital
Episode Overview
In this episode, David Moulton interviews Brett Kinsella about the security challenges and risks associated with generative AI systems, particularly the implications of "shadow AI" (unauthorized AI tool usage in organizations). They explore concerns such as data leakage, AI hallucinations, defense strategies, the critical role of human judgment, and testing beyond the AI model to the entire interconnected system.
Key Discussion Points & Insights
Brett Kinsella’s Journey & Ultrarunning (04:08–06:28)
- Background: Brett shares how he became an AI leader by moving through various tech roles, engaging in SaaS and AI products, developing his own research and hosting 400+ podcasts on AI innovation (02:29–04:08).
- Ultrarunning Parallel: The perseverance, problem-solving, and adaptability needed for ultramarathons are compared to working in cybersecurity:
"Their life is about pain and problem solving and an occasional sense of victory and relying on others and problem solving again and dealing with new conditions every day. There's a lot of parallels." — Brett Kinsella (06:06)
Shadow AI: Hidden Risks of Consumer AI in Enterprise (07:00–09:04)
- Prevalence: Over half of employees in large organizations use generative AI, with 2/3 admitting to inputting sensitive company or customer data.
- Motivation: Employees use these tools not maliciously, but to improve productivity:
"Because they want to do their job better. It's not because they want to disseminate customer data." — Brett Kinsella (07:12)
- Key Risks:
- Data leakage: proprietary code, financial info, and customer details entering third-party AI systems.
- Existing tools and policies are often inadequate; users may bypass them for familiar or more effective AI solutions.
Security Vulnerabilities & Desired Behaviors (09:04–11:42)
- Healthy Approach: Leaders should embrace secure AI usage, provide approved tools, set clear usage policies, and maintain governance.
- New Threat Vectors with RAG:
- Retrieval Augmented Generation (RAG) increases risk by requiring large document uploads—potentially making IP, contracts, or customer data accessible to vendors and model providers.
- Broadened Risk Surface: More access points mean more opportunities for leakage and compromise.
Hallucinations: Accuracy & Organizational Risk (11:42–13:24)
- LLM "Hallucinations": Studies show 40–70% of LLM outputs can be inaccurate, which creates organizational risk when these become part of company knowledge.
- Brand Impact: Confident, inaccurate responses—delivered with authority—can damage credibility if not caught.
"This confident tool that took your data out for you and gave you new things that don't make any sense." — David Moulton (12:41)
User Psychology and AI Patterns (13:24–15:36)
- Preference for Long, Confident Answers:
- Users are more satisfied with longer, decisive responses, even if they are inaccurate.
- Emergence of Flattery:
- Flattering language and "anti-patterns" driven by user engagement show how interface and experience design can subtly subvert accuracy:
"We used to call that an anti-pattern, and yet it seems to be pervasive..." — David Moulton (14:48)
- Flattering language and "anti-patterns" driven by user engagement show how interface and experience design can subtly subvert accuracy:
Emerging Vulnerability Patterns in AI Chat Systems (16:05–17:54)
- Layered Defense:
- Security shouldn’t just rely on models or cloud providers; guardrails, prompt engineering, and thorough vulnerability detection are needed.
- Current Gaps:
- Relying solely on red teams and immature tools is not scalable:
"Today, it's a needle in the haystack problem..." — Brett Kinsella (16:23)
- Prevention, not just intervention, needs more focus.
- Relying solely on red teams and immature tools is not scalable:
Human in the Loop vs. Automated Detection (17:54–20:38)
- Optimal Balance:
- AI excels at detecting patterns in vast datasets; humans are crucial for ambiguity, creativity, and catching outliers.
- Automated systems need human interpretation and oversight (the "human in the loop") to catch issues missed by AI:
"We're really big on this idea of AI elevating human capability and ingenuity... just helping humans be better, more consistent, have more reach." — Brett Kinsella (18:55)
- Why Logs Matter:
- Regular log reviews are still essential for finding missed or ambiguous events.
Cross-Platform AI Risks & Governance (22:04–24:38)
- Current State:
- Most enterprise AI usage is still simple (document transformation, summarization, search) with restricted data flows.
- Imminent Complications:
- As multi-agent systems and new protocols like MCP come online, data will touch many more systems:
"You don't need to test the model... you need to test the system. Because it's not just the model, it's not just the provisioning cloud provider. It's all those other things you connect to it." — Brett Kinsella (23:04)
- As multi-agent systems and new protocols like MCP come online, data will touch many more systems:
- Standardization Need:
- Security protocols and communication standards for cross-system interactions are lagging.
Emerging AI Threats to Prepare For (24:38–28:39)
- Information Violations:
- Outputting data in noncompliance with policy/codes-of-conduct.
- Data Poisoning:
- Corrupted input data leading to persistent issues.
- Model Exfiltration:
- Attackers probing systems to better attack or copy the model.
- Agent Risks:
- AI agents acting autonomously could become "super users"—creating risk if they're compromised or misused.
- Testing Needs:
- Both unit and system testing, with potential for third-party audits and "purple teaming" (engaging end users, red teamers, blue teamers together).
"Test your system, which includes the agents, but you should be testing your agents independently as well." — Brett Kinsella (25:29)
- Both unit and system testing, with potential for third-party audits and "purple teaming" (engaging end users, red teamers, blue teamers together).
Technology Reflecting Human Behavior (28:39–29:43)
- Tech Mirrors Humanity:
- Security failures often stem from unanticipated user behaviors:
"When people don't use it in the way that we expect, I think that's where we run into security issues." — David Moulton (28:42)
- Security failures often stem from unanticipated user behaviors:
- Lesson:
- Designs must anticipate human creativity—good and bad—in interactions with technology.
Final Takeaways (29:43–31:17)
- Framework Shift:
- AI safety and security require thinking beyond traditional, deterministic frameworks—probabilistic, open-ended systems bring new risks.
- System-level testing (not just model-level) is crucial.
- Combine intervention tools (guardrails), comprehensive testing, and active governance.
- As environments change, build in adaptability, imagination, and continuous improvement.
"Every time we introduce a new tool or agent or something like that, or new connection to third party, we might need new tools in order to understand what the capabilities are, what the exploits are, because our expectations have [to] have enough imagination to understand how they might actually be compromised." — Brett Kinsella (31:00)
Memorable Quotes & Moments
- On Shadow AI:
"They might even use [consumer AI tools] if you provide something... If it's not as good as what they're used to using at home, they might just use it anyway." — Brett Kinsella (08:06)
- On Human/Audit Role:
"If you don't look at the logs, you miss stuff." — Brett Kinsella (19:39)
- Systemic vs. Model Testing:
"You don't need to test the model. You need to test the system." — Brett Kinsella (23:04)
- Human Creativity as Double-Edged Sword:
"When people don't use it in the way that we expect... that's where we run into security issues." — David Moulton (28:42)
Major Segment Timestamps
- [02:29] Brett’s path in AI & technology
- [04:08] Ultrarunning and cybersecurity parallels
- [07:00] Shadow AI and organizational risk
- [11:42] LLM hallucinations and information discipline
- [16:05] AI chat vulnerabilities and defense layers
- [17:54] Human in the loop vs. full automation
- [22:17] Cross-AI platform security & need for protocols
- [24:38] Emerging AI risks and agent testing
- [28:39] Technology and human behavior
- [29:43] Final thoughts on frameworks and security outlook
Summary for Security Professionals
This episode delivers a nuanced, expert-level discussion on how AI security demands a systems perspective, blending cutting-edge technology, robust governance, and, above all, empowered human oversight. Leaders in cybersecurity are urged to adapt policies and technical frameworks, focusing on holistic system testing, continuous learning, and red-teaming collaborations to identify fast-evolving risks. The call to action: Embrace AI’s benefits—but don’t lose sight of the new, complex threat surfaces it introduces.
