Transcript
A (0:02)
You're listening to the Cyberwire Network, powered by N2K.
B (0:13)
Welcome to Threat Vector, the Palo Alto Networks podcast, where we discuss pressing cybersecurity threats and resilience and uncover insights into the latest industry trends. I'm your host, David Moulton, Senior Director of Thought Leadership for Unit 42.
A (0:26)
And I think by getting hands-on with threats, you develop a much deeper understanding, because if you've done it right, there's this visceral element to it that goes beyond the surface. So that's always what I recommend: start by doing, and then by reading. Right. Go find people who are voices in the space. I like to follow app builders, people who are building agents today and also new kinds of AI experiences, because they're at the forefront of this.
B (1:10)
Today I'm speaking with Spencer Thillman, Principal Product Manager at Palo Alto Networks, where he focuses on AI runtime security. Spencer has a Master of Philosophy in Technology Policy from the University of Cambridge and works at the intersection of technology policy and cybersecurity. At Palo Alto Networks, he leads the development of products that provide real-time protection for AI systems against evolving threats, helping enterprises stay ahead in a rapidly changing environment. Today we're going to talk about how enterprises should think about their AI security strategy and explore the mental models that make the biggest difference. With AI adoption surging across every business function, organizations are confronting a dual challenge: first, securing how employees use generative AI apps, and second, safeguarding the AI models, apps, and agents that enterprises build themselves. Why is this important? Because AI is transforming cloud architectures, threat models, and business velocity, but it's also expanding the attack surface. Getting AI security right means protecting intellectual property, preserving trust, and preventing brand-damaging incidents before they happen. Spencer, welcome to Threat Vector. I've been excited to have you here. I've been dying to have this conversation with you for weeks.
A (2:37)
So happy to be here. Looking forward to it.
B (2:40)
So let's start with your journey. How did you end up at the forefront of AI security? Right. This space is so new, but you've already been shaping it.
A (2:50)
So I have an academic background in this space. I was a researcher in AI policy at the University of Cambridge very early on, before large language models; this was in 2019. I worked with a lot of the branches of the UK government, the EU, et cetera, to understand the threat surface for AI and, as a consequence of that, what principles need to be put in place to encourage AI use within the United Kingdom and the European Union while minimizing risk. A lot of the mental models we were working on then are still applicable in the generative AI world we live in now. So that's how this came to be. I started on the policy side, but it's my view that ultimately what's written in policy needs to be codified. And how is it codified? Through security policies. So every policy objective eventually becomes a security problem.
