Transcript
A (0:02)
You're listening to the CyberWire network, powered by N2K. It can have hidden attacks which, when they enter your system and get executed, might exfiltrate all your data. Be careful about what tools you use, and make sure you scan them with security software.
B (0:48)
I'm David Moulton and this is Threat Vector. Today I'm speaking with Celesh Mistra about the security risks that come with a new class of AI tools, ones that don't just answer questions but take action. We're talking about autonomous agents that have persistent memory, access to your credentials, and the ability to send messages, execute commands, and interact with the web on your behalf. Celesh has spent years working at the frontier of AI security, from Uber's Advanced Technologies Group to building and acquiring AI companies, and now, at Palo Alto Networks, has a clear-eyed view of what we're building, what we're risking, and what needs to change.
B (1:34)
Welcome to Threat Vector. Really glad to be able to talk to you, especially after reading your recent article.
A (1:39)
Thank you so much, David. It's a real pleasure having this discussion with you.
B (1:44)
Before we get into the meat of today's conversation, I want you to talk to me a little bit about that path from Uber's Advanced Technologies Group to building AI security companies. I know you've had a front-row seat to the evolution of AI, from scaling human-in-the-loop data pipelines to growing AI security companies. What has that journey taught you about the relationship between AI capability and risk? I think you were starting to go there, and I'm curious how you compare those two things.
A (2:20)
I guess earlier, especially with AVs, we used to have a lot of discussions, and a lot of papers were published, around the ethics of AVs. We had different models and different diagrams that would show you, if an AV is heading down two different tracks, what should it do? That was an open question that was still being grappled with. But when you think of AVs, when you think of self-driving cars, risk was built into the capability, and that's what I was alluding to. Navigating tightly parked roads is a capability, but the undertone there is: don't hit a car that's parked on the roadside. Your capability is to see an object from a distance and predict that object's movement in the next second. If you're unable to do that, yes, it becomes a risk, because you might end up colliding with that object, but that's framed as a capability definition, not a pure-play risk definition. That's how these models, these capabilities, were being built. But now what we're seeing is something different. We have started building models and agents that are super, super capable. They can write code, they can write emails, they can probably control your home alarm systems, but are they really safe and secure? So suddenly, in this new world of AI agents, we're starting to see capabilities and risks being delineated a little bit, and that's where the difference starts to show.
