Transcript
Dina Temple-Raston (0:03)
From Recorded Future News and PRX, this is Click Here. Hey there, it's Dina. When we look back at cyber in 2024, it was the year of two big things: law enforcement takedowns and the advent of AI. From robocalls to deepfakes, artificial intelligence is already playing a role in the 2024 election. A finance worker in Hong Kong was duped into handing over millions of dollars after being tricked by deepfake technology.
News clip (0:44)
Russia is increasingly carrying out a hidden war, targeting Europe and the UK with cyberattacks.
Dina Temple-Raston (0:51)
Even with all this activity, we only saw the faintest indication of how AI might really be weaponized in the future. Which is why we thought you'd love to hear this episode from our friends over at the Shift podcast. They're all about breakthroughs in frontier technologies and artificial intelligence, and they recently did a deep dive on how AI is enabling cybercrime. Stay with us.
Tim Harford (1:22)
Do nice guys really finish last? I'm Tim Harford, host of the Cautionary Tales podcast, and I'm exploring that very question. Join me for my new miniseries on the art of fairness. From New York to Tahiti, we'll examine villains undone by their villainy, monstrous, self-devouring egos, and accounts of the extraordinary power of decency. Listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts.
Dina Temple-Raston (1:51)
This is Click Here. FBI Director Christopher Wray has been warning for some time about how AI will enable cybercriminals to up their game.
Christopher Wray (2:04)
Right now, where it's most dangerous is essentially taking junior varsity bad actors and bringing them to the varsity level. But in fairly short order we're going to be seeing AI taking the varsity level athletes and taking them to a whole nother level of dangerousness.
Dina Temple-Raston (2:21)
And that has required people who are fighting these criminals, like Lee Klarich, to keep upping their game in response. Lee is the Chief Product Officer at Palo Alto Networks, and he was part of the team back in 2008 that built the industry's first next-generation firewall. These days he's focused on how Palo Alto Networks is leveraging artificial intelligence to make its products safer. He spoke with the Shift podcast about that. Take a listen.
Lee Klarich (2:51)
With AI, it's going to be possible that effectively every attack is a zero-day attack, where every attack is new. And this is going to dramatically change what is going to be needed from a technology perspective, and in the overall approach to cybersecurity, because that hasn't been seen before.

When I first started in cybersecurity, it was all relatively simple. Most organizations were actually almost disconnected from the rest of the world, and they had just a few applications that were connected. And so what it meant to do cybersecurity was fairly focused. Over the last 20, 25 years that I've been in cybersecurity, this has completely changed. There's not a single thing that I can think of in businesses around the world that is not network-connected and Internet-connected, to the point where now there are a lot of very large companies that almost don't even have a network. They run on the cloud, they run in SaaS. And that connectedness has dramatically increased the attack surface that attackers can focus on and that defenders have to defend.

Employees everywhere are using AI. We all saw this, right? When ChatGPT first launched, or Bard and others, employees started looking at how they could use it to write better press releases or better marketing materials, and developers started looking at whether they could write code faster. Similarly, companies took a look and said, how do we take AI and make our enterprise applications better and more effective? And you started seeing AI chatbots and helpers and all this other kind of stuff. So you see all these positive things, and that's mostly what you see in the news, the positive side. But at the same time that all of that was happening, attackers around the world were also taking a look to see how they could take advantage of AI. A sad, but in some ways almost funny, example of this: we've seen about a 1,000-plus percent increase in phishing emails since the launch of ChatGPT. And I sometimes joke that the phishing emails are no longer misspelled. We've seen a 3,000% increase in deepfake phishing attacks from 2022 to 2023. So we're seeing attackers understand all the benefits that the rest of us are seeing, but they're using AI to do bad things.

And we think that we're on the cusp of seeing AI used by attackers to really change the types of attacks they can launch. One of the things that's happened in cybersecurity in the past is attackers will often come up with an attack and then use it for as long as it's useful. Sometimes this is months, sometimes even years, before cybersecurity companies get a handle on it and can block it reliably, and then the attackers have to shift to a new attack.

Generative AI is very front and center for a lot of people. AI in my mind, though, also encompasses machine learning and deep learning approaches. Now, these approaches are a bit different. They're not going to write a memo for you, they're not going to write a poem for you, they're not going to generate a song. But they have very important applicability in cybersecurity. And so our approach to AI is to leverage machine learning, deep learning, and generative AI for their strengths, and to try not to lean on any one of them so much that we actually start to run into its weaknesses. So, for example, machine learning is incredibly good for its accuracy. Deep learning is somewhat in between.
It can extend beyond some of the tight training constraints that machine learning has, but it's not going to go all the way toward the generative AI side of things. And generative AI is the most creative, but part of that creativity is that it can hallucinate, it can make errors, and other things like that. So we've been using machine learning for 10-plus years, we've been using deep learning more recently over the last few years, and then generative AI has opened up a set of new use cases.

We've been using generative AI effectively the way that attackers do. Our research teams are using it to generate new attacks. Obviously we never release these; we do this in very tightly controlled labs. But we generate these attacks so that we can retrain our detection models for the types of attacks that we anticipate attackers are going to launch in the future. I'll give you an example. We generated about 10,000 web-based phishing attacks, again in our lab, fully controlled, never to see the light of day. Because these were effectively attacks that attackers might generate in the future, we took them and retrained our security capabilities such that if we ever saw them in the future, we would be able to prevent them, versus having to go through a detect-and-respond cycle. We retrained our detection models on these web-based phishing attacks, and as soon as we rolled the protections into production systems, we saw about a 5% increase in attack detection and prevention. Meaning, for what we had generated, we had already started to see the start of this from attackers. And so we believe that we can use this to get ahead of the next attacks and be more proactive, whereas cybersecurity in the past has often been very reactive. (A rough sketch of what that retraining loop might look like follows this turn.)

Think about what a cybersecurity administrator goes through. They're trying to protect every employee, every application, every part of the network. You can imagine the complexity that comes with making sure that their cybersecurity capabilities are fully deployed everywhere they need to be deployed, that they're configured correctly, that nothing has been missed, et cetera. This has traditionally been a fairly manual process that can be very people-intensive, but even with a lot of people it's still prone to error. And so this notion of using generative AI to simplify cybersecurity is to effectively build an AI-powered assistant, a copilot, into our products that can automatically analyze the customer's deployment and environment to make sure that the right protections are applied everywhere they need to be applied, that users are configured correctly, that everyone has reset their password and has a good, strong password with no way to bypass that, that multifactor authentication has been turned on, and that cloud security has been applied correctly. You can imagine just how complex that is when it's manual, and how AI can dramatically simplify it, particularly when combined with some of the automation capabilities we have. That's taking that use case and making it a lot more tangible.

The biggest surprise to me in cybersecurity has been, I guess, call it the infiltration of this mindset that maybe cybersecurity isn't possible. When I started, everyone believed that cybersecurity was possible. Everyone believed that if done correctly, we could stop attacks.
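To make that retraining loop concrete, here is a minimal sketch in Python of the general technique Klarich describes: fold lab-generated phishing samples into a detector's training set so the model can block similar pages on first sight. The function names, the scikit-learn models, and the 0.9 blocking threshold are all assumptions for illustration, not Palo Alto Networks' actual pipeline.

```python
# Minimal sketch: retrain a phishing detector on real traffic plus
# lab-generated synthetic attacks. All names are illustrative
# assumptions, not a vendor's actual pipeline.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression

def retrain_with_synthetic(real_pages, real_labels, synthetic_pages):
    """Fold synthetic phishing pages (all labeled malicious) into the
    training set and fit a fresh detection model."""
    vectorizer = HashingVectorizer(n_features=2**18)  # stateless featurizer
    X = vectorizer.transform(list(real_pages) + list(synthetic_pages))
    y = list(real_labels) + [1] * len(synthetic_pages)  # 1 = phishing
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)
    return vectorizer, model

# Hypothetical usage: score a new page before it reaches users, so an
# anticipated attack is prevented rather than detected after the fact.
# vec, model = retrain_with_synthetic(pages, labels, lab_generated)
# blocked = model.predict_proba(vec.transform([new_page]))[0, 1] > 0.9
```

Because every synthetic sample is labeled malicious, retraining shifts the model's decision boundary toward attacks that haven't been observed in the wild yet, which is the "prevent rather than detect and respond" effect described above.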
And somewhere along the way that switched, and the moment, at least in my mind, was the Target breach. It was the first of the really, really big credit card theft attacks. A hundred million-plus credit cards, I believe, were stolen as part of the attack, and a number of executives, including the CISO, the chief information security officer, were fired as a result. And I saw the shift at that time: geez, if it's going to happen anyway and someone's going to get fired, then maybe we should shift our focus from preventing attacks to compliance, to it's-not-our-fault, basically. And I understand the mindset. It's just that it surprised me, because even to this day I fully believe that cybersecurity is possible. We just have to get it right. In many cases we do, and when we don't get it right, it should be a learning process so that the next time we do. But to this day, that is still something we contend with: this giving up on the idea that cybersecurity is possible.

One of the things about AI that's maybe not as obvious from my perspective, but that becomes very clear in cybersecurity, is the connection from AI to automation. Automation, at least in cybersecurity, and I think this is true across a lot of high tech and beyond, is key to how we not only become more efficient as organizations but also start to remove human error from things that can't afford error. And in my mind, that is very closely tied to AI: AI becomes a way in which we can enable automation to happen with more intelligence and more confidence. So AI leads to automation, and that ties back into the notion of platforms delivering more of these capabilities, because then you have native integration, which leads to more native automation. All of those, in my mind, add up to how we make the comprehensive adoption of cybersecurity actually simpler than it's been in the past.
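As a rough illustration of that AI-to-automation connection, the sketch below implements the simplest version of the posture check described earlier: walk every account in a deployment, flag policy gaps, and emit remediation actions that downstream automation could execute instead of routing them to a manual ticket queue. The Account fields and the policy thresholds are invented for the example.

```python
# Minimal sketch of an automated posture check: flag policy gaps and
# emit remediation actions. The Account fields and thresholds are
# illustrative assumptions, not any vendor's actual schema.
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    mfa_enabled: bool
    password_age_days: int
    password_length: int

def audit(accounts, max_password_age=90, min_length=12):
    """Return (account, issue) pairs for every policy gap found."""
    findings = []
    for acct in accounts:
        if not acct.mfa_enabled:
            findings.append((acct.name, "enable multifactor authentication"))
        if acct.password_age_days > max_password_age:
            findings.append((acct.name, "force a password reset"))
        if acct.password_length < min_length:
            findings.append((acct.name, "require a longer password"))
    return findings

# Each finding can feed an automation step (for example, an API call
# that forces the reset) rather than a human review.
for name, action in audit([Account("svc-backup", False, 400, 8)]):
    print(f"{name}: {action}")
```

The point of the sketch is the shape of the loop, not the specific checks: an AI assistant proposing the checks and an automation layer executing the findings is what removes the human error the passage above describes.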
