Threat Vector – "Securing the AI Supply Chain"
Date: January 8, 2026
Host: David Moulton (Senior Director of Thought Leadership, Unit 42, Palo Alto Networks)
Guest: Ian Swanson (AI Security Leader, Palo Alto Networks; Founder, Protect AI; former AWS & Oracle executive)
Episode Overview
This episode of Threat Vector delves into the critical topic of securing the AI supply chain. Host David Moulton engages Ian Swanson in a comprehensive discussion around the hidden risks in enterprise AI adoption, the importance of securing AI systems, and practical measures leaders can take to mitigate vulnerabilities — from model risks and runtime attacks to employee-driven "shadow AI" and organizational blind spots.
Key Discussion Points and Insights
1. The Urgency of Securing AI in the Enterprise
- AI’s Potential & Risks:
- AI can transform businesses by automating processes, improving products, and reducing costs.
- However, insecurity in AI can lead to severe consequences: malware generation, data exfiltration, attacks, and reputational damage.
- Quote [00:24, Ian Swanson]:
"You shouldn't put any AI live in any enterprise use case without securing it first ... Even though AI is so impactful, it can be so amazing in many different ways. We need to make sure that it's safe, that it's trusted and that it's secure, and that there really should not be any AI in any enterprise without security of AI."
- Quote [00:24, Ian Swanson]:
2. Defining the AI Supply Chain
- Data as fuel, Models as the engine:
- The supply chain in AI consists of not just data, but the models (often from open-source repositories) and the artifacts integrated into AI-driven applications.
- Many organizations vastly underestimate how many models they have in production.
- Quote [06:03, Ian Swanson]:
"Oftentimes when I meet with the CISO, I say, how many machine learning models do you have live? ... The real answer is tens of thousands. And we have many customers that have hundreds of thousands of models that are live in production."
- Quote [06:03, Ian Swanson]:
3. MLSecOps: Bridging Gaps in AI Security
- MLSecOps Explained:
- Modeled after DevSecOps, MLSecOps involves securing AI through every phase of its lifecycle—design, development, deployment, and operation.
- Quote [04:51, Ian Swanson]:
"MLSecOps stands for machine learning, Security Operations ... We released frameworks around how you should think about securing AI all through the development lifecycle ... all that sat underneath this banner of ML SecOps."
- Quote [04:51, Ian Swanson]:
- Modeled after DevSecOps, MLSecOps involves securing AI through every phase of its lifecycle—design, development, deployment, and operation.
4. Common AI/ML Vulnerabilities and Real-World Attacks
- Open Source Model Risks:
- Name-squatting attacks and malicious models prevalent in public AI model hubs.
- Example: A malicious model mimicking a well-known company, designed to steal cloud credentials upon deployment.
- Quote [08:09, Ian Swanson]:
"We found a model pretending to be from a well known healthcare life sciences company. It was a name squatting attack ... at the point of deserialization, one of its core goals was to steal and exfiltrate your credentials on your cloud."
- Quote [08:09, Ian Swanson]:
- Risks are not just theoretical—attacks now mirror and extend traditional software supply chain compromises.
5. The Challenge of Speed vs. Security
- Balancing Innovation and Protection:
- The pressure to rapidly innovate with AI often leads to shortcuts in validation and security, but leadership must maintain a balance.
- Quote [10:16, Ian Swanson]:
"It's a healthy dialogue ... it's truly a team sport of how we develop AI that drives true value, that is secure."
- Quote [10:16, Ian Swanson]:
- The pressure to rapidly innovate with AI often leads to shortcuts in validation and security, but leadership must maintain a balance.
6. Attacks at Runtime and the Importance of Guardrails
- Threat vectors occur during both the build phase and at runtime (e.g., prompts to chatbots designed to exfiltrate data).
- Need continuous security checks for both inputs and outputs to prevent data leaks, PII exposure, or brand harm.
- Quote [11:42, Ian Swanson]:
"Attacks are also happening at runtime ... we see malicious actors that are trying to fool these AI systems and manipulate them to leak sensitive data ... you need to be have constant guardrails looking at all the inputs and outputs."
- Quote [11:42, Ian Swanson]:
7. Vibe Coding: A Double-Edged Sword
- Definition & Risk:
- Vibe coding refers to use of AI for code generation — a huge productivity boon (often 30%+ efficiency), but risky if not checked.
- LLMs can introduce vulnerabilities or malicious code.
- Quote [13:15, Ian Swanson]:
"Vibe coding is a slang term ... where a developer is able to utilize AI for code generation ... Now the challenge is how do we make sure again that the AI doesn't go off, you know, off the rails and introduce malicious content, malicious URLs ... you have AI on the side that is making plans, it's perceiving, it's executing steps, and you need to make sure you have controls there."
- Quote [13:15, Ian Swanson]:
- Analogy:
- Giving powerful tools to inexperienced users (like a Formula One car to a new driver) without proper controls is dangerous.
- Quote [15:22, Ian Swanson]:
"...if I give a Formula one car to my daughter, who just learned how to drive as she's 16, she'll probably crash it in the wall ... But if we give a Formula one car to a highly trained driver, they're going to just smash it on the race course..."
- Quote [15:22, Ian Swanson]:
- Giving powerful tools to inexperienced users (like a Formula One car to a new driver) without proper controls is dangerous.
8. Red Teaming as a Security Best Practice
- Essential for pre-production and ongoing security.
- Palo Alto Networks uses static attack libraries mapped to frameworks (NIST, MITRE ATLAS) to create scorecards and inform policies.
- Quote [18:48, Ian Swanson]:
"Red teaming is incredibly important ... before any application goes live, we need to test it ... It’s continuous and it’s integrated ... running a series of attacks ... that scorecard right now is being used by our customers as they make go no go decisions for these AI applications."
- Quote [18:48, Ian Swanson]:
9. Shadow AI: The Hidden Landscape
- Definition & Discovery:
- "Shadow AI" refers to unsanctioned or hidden AI assets—tools, models, or practices outside governance processes.
- These assets can proliferate across devices, SaaS platforms, and cloud environments.
- Quote [21:35, Ian Swanson]:
"Shadow AI takes form in many different places ... Number one is employee usage of AI ... The other side is as you build, train, deploy models, AI applications, agentic workflows, we need to figure out where all those live and we need to bring that to light."
- Quote [21:35, Ian Swanson]:
- First Step: Visibility:
- Begin by cataloging both employee use (e.g., browser-based AI apps) and internal AI artifacts (models, agents in storage).
10. Lessons from M&A: Protect AI’s Integration with Palo Alto Networks
- The partnership allowed for a unified, more powerful AI security platform covering both development and runtime.
- Quote [25:58, Ian Swanson]:
“We completely integrated all of Protect AI's offerings into what we call Prisma Airs ... Palo Alto had amazing capabilities ... it was a truly better together scenario ... [Now] we go all the way back into how [AI] is being developed all the way through it being in production."
- Quote [25:58, Ian Swanson]:
11. The 2026 Blind Spot: Agentic AI Workloads in SaaS
- Emerging Concern:
- Rapid rise of autonomous AI “agents” (in platforms like ServiceNow, Salesforce) increases risk where organizations may lack visibility or control.
- Quote [27:58, Ian Swanson]:
“Top of mind for all these CISOs are these agentic workloads and specifically where they start to lose control and visibility as it relates to building within SaaS offerings ... The biggest risk ... is agents, because we've given AI arms and legs to go carry out tasks. We need to make sure they don't go rogue."
- Quote [27:58, Ian Swanson]:
- Rapid rise of autonomous AI “agents” (in platforms like ServiceNow, Salesforce) increases risk where organizations may lack visibility or control.
12. Actionable Advice: Education & Collaboration
- Start With Education:
- Closing the knowledge gap is the first defense, as siloed teams (AI researchers vs. security) often aren’t aware of mutual risks.
- Quote [29:13, Ian Swanson]:
"It starts with education ... The AI teams might not understand security risks. The security teams might be blind to all the AI that's already in development ... Even though it's really simple, I think we need to start with internally at a company. Let's catalog and let's understand all the AI that's being built. And that needs to happen through conversation across all these teams."
- Quote [29:13, Ian Swanson]:
- Closing the knowledge gap is the first defense, as siloed teams (AI researchers vs. security) often aren’t aware of mutual risks.
- Memorable Metaphors:
- Building AI without oversight = “pulling in grenades and pulling pins and you don't even know it.” [30:00]
- Unmanaged AI is a “baby tiger” that can grow dangerous quickly. [31:06]
Notable Quotes & Memorable Moments (with Timestamps)
-
On the Scale of the AI Supply Chain:
"Many customers have hundreds of thousands of models that are live in production ... we need to scan machine learning models for risk."
[06:03, Ian Swanson] -
On Malicious Models and Name Squatting:
"We found a model pretending to be from a well known healthcare life sciences company. It was a name squatting attack ... downloaded tens and tens of thousands of times."
[08:09, Ian Swanson] -
On Vibe Coding & Security:
"You have AI on the side that is making plans, it's perceiving, it's executing steps, and you need to make sure you have controls there, otherwise these processes can perhaps go rogue."
[13:15, Ian Swanson] -
Formula One Analogy for AI Tooling:
"...if I give a Formula one car to my daughter... she'll probably crash it in the wall ... But if we give a Formula one car to a highly trained driver, they're going to just smash it on the race course."
[15:22, Ian Swanson] -
On The Team Approach to Security:
"This is truly a team sport... we need to make sure that these teams are having a discussion, that they're both being educated on the risks, but also the opportunity of AI."
[10:16, Ian Swanson]
Important Timestamps
- 04:51 — Explanation of MLSecOps
- 06:03 — Underestimating the number of live AI models; visibility blind spots
- 08:09 — Example of supply chain attack in open source AI models
- 11:42 — Explaining runtime/inference attacks in deployed AI systems
- 13:15 — Definition and risks of Vibe coding
- 15:22 — Striking balance; Formula One car analogy
- 18:48 — Red teaming as a continuous process in AI security
- 21:35 — Shadow AI: Definition, discovery, and control
- 25:58 — Integration of Protect AI into Palo Alto, and benefits
- 27:58 — 2026’s likely biggest blind spot: SaaS agentic workloads
- 29:13 — Starting with education for cross-team understanding
Conclusion
This episode provides a candid, expert-level perspective on the security realities facing today’s AI-driven enterprises. Swanson and Moulton emphasize that while AI delivers transformative value, its adoption without security is perilous. Leaders are urged to prioritize end-to-end visibility, continual collaboration between teams, and to leverage frameworks like MLSecOps, red teaming, and robust controls at every stage—especially as AI tooling and agentic architectures proliferate inside and outside formal IT governance. Knowledge, conversation, and visibility are underscored as the foundation for safe, scalable enterprise AI.
