Transcript
A (0:00)
What types of novel security threats are emerging as AI advances? Let's find out with Wiz co-founder Inon Kostika, who is here in studio to speak with us about what's happening. Inon, great to see you. Welcome to the show.
B (0:13)
Thank you, Alex.
A (0:14)
All right, so AI is doing amazing things. It is producing lines and lines of code for engineers and helping people build things faster than they ever could before. The other side of it is it's helping, I imagine, bad guys produce lines and lines of code and attack faster than they ever could before. Now, you are the co-founder of Wiz, which is in the middle of a sales process, selling to Google for $32 billion.
B (0:42)
Correct.
A (0:42)
Okay, so you're the perfect person to have to discuss this because Wiz is a cybersecurity company. We've never had a cybersecurity expert like you on to talk about what's happening, especially as generative AI rises. So just give us a little bit of a state of play here in terms of what this explosion of the ability to code has done to cybersecurity.
B (1:05)
Yeah, it's interesting. I think the ability to code is just one aspect of AI. When we think about AI as a whole, first, I'm thinking about a whole new stack that is being created. We are now in an era that is the big bang of technologies. You are reinventing a whole array of capabilities, technologies that are being brought into play, whether it's the prompt, the model, the infrastructure, the platforms, and they're all playing together in order to allow customers to leverage AI. AI can be used, let's say, directly by employees, like a ChatGPT query. It can be part of a SaaS product, like, you know, in Cursor, in GitHub Copilot. And it can be your own developed AI, like, as an enterprise, you're starting to develop applications. All of these are leveraging these new technologies.

Now, as with any new technology, it's based on software, and software by itself obviously can have vulnerabilities. So when we think about AI, first, we need to understand that it's code, and code has vulnerabilities like any other software that we have shipped before. And it's interesting, just a few weeks ago, there was Pwn2Own. You know Pwn2Own? It's an amazing event where they bring together the best researchers. And this year, for the first time, they had an AI category. What does the AI category mean? They are basically running a contest to find vulnerabilities in certain technologies, and the more impactful the vulnerability is, the bigger the bounty you get back. So this time we had, for the first time in this Pwn2Own event, the AI category, and six technologies were presented. Out of these six technologies, four were actually researched and found to be vulnerable to what we call the highest-impact vulnerability, which is remote code execution, RCE, which means that you can do anything with that technology.

The learning that we have from here is that AI is very new as software, and the fundamentals still apply. It can be vulnerable, and you can actually use it in order to just run your own code on it, like any other technology and software we have shipped before. So that's the first layer before you.
