C (17:00)
Yeah, no, I think, I mean, somebody said the other day, and I think it was an Interesting statement, but the era of typos is over. While it was never our primary go to for trying to detect phishing, you could pick up a little bit of noise and certainly, and you know, targeted attacks were always, you know, pretty good in terms of that. But yeah, no, I think that, I mean when we go back to like traditional phishing based attacks and that side of things, if we move beyond, if we think beyond those. When you think about LMMs, I think one of the things that is possibly also potentially a concern is the amount of data that businesses put on the Internet and some of that is intentional because they want to be out there and be known. We're all about selling products of some sort or whatever. But there's also a lot of information that's out there that you don't intend to get out there. It's been actively linked or individuals have put it in their social media media or LinkedIn and with the power of LLMs to be able to pull all that information together and know a lot more about a business than you would want them to necessarily know and how you're structured in your organizations, that will definitely, you know, enable you to, you know, craft more or more effective phishing because you know who you're going after. You've got an insider information effectively that you wouldn't have traditionally or if you got that information, it would have taken days, if not weeks and months of investigative work kind of thing. So yeah, that's definitely something. I think when you think about LLMs, beyond the traditional creating better emails, it's the fact that they can have that intel, they can generate that intel that you wouldn't. I think as well with AI and LLMs is the, the ability to just, and especially, I mean, you know, again, phishing attacks, we think of emails, but they can also be audio these days. Right. And you know, it's been reports of where they're able to pick up on hesitation and drop back in the conversation or change the conversation to realize that that person starting to pick up something there, which is, you know, again, quite kind of worrying because traditionally that might not happen. And dare I say it, even some of those call centers based wherever and trying to dupe people into whatever, they're often following a script. And generally even when you, you know, for a minute you're duped into thinking this could be legitimate and you realize it's not, it's because they're off on a script and they follow. Whereas an MM might be more easy because it was able to, if you empathize with the individual if want if a way of describing it, you know, in some way. But yeah, no. So I mean, I think, yeah, it is definitely interesting in terms of using about phishing attacks. I think, you know, when we think about AI, particularly in the way it's sort of being used in attacks generally, I think the bit that is really significant to me is that it doesn't typically generate any new attacks. There's nothing you hear about where it's an AI generated idea in some way. It's all about being more efficient, faster, and actually probably the most important thing is cheaper. And there's always been this sort of, I guess, old adage that you don't keep your underwear in your bank vault. And security has always been about a balance between you pay for as much security to protect the valuables to make sure that they're protected in some way, and businesses invest in cybersecurity for the same thing. It's not, maybe not the crown jewels, but it's their data these days that they want to protect. And there was a certain assumption that you paid certain amount, you were a big company, you had a bigger amount of budget you could pay for best technology, best vendor out there to protect you, and that was all you needed to do. And there was some element of truth in that because of course attackers have to invest their money, even if that's time, in attacking you. And the harder you make it, the more likely either going to go to someone else or they're going to give up. And when you think about AI, if it makes things cheaper, it enables them to more effectively attack either more candidates or even go after bigger candidates. And again, when we think about 2025, I mean, there's been some very, very big names obviously reported. And again, this is just in my head, but you start to wonder if that is influenced by the fact that AI is making some of these things cheaper and more easier to do at scale. So yeah, that's something that's in there. And again, we're thinking beyond the phishing scenario. I mean, there's something that was talked about from my colleagues the other day and again, there's parallels to old stuff that's just been rehashed and that's malicious. When you think about malicious iframe injection where gone by websites were compromised, typically the bottom of the page. I don't know why it was always at the bottom of the page. I guess it's easy to insert it there. That puts an iframe hidden from the person browsing the website. But it would basically run the script in the background. These days when you talk about AI, they talk about malicious indirect prompt injections. What that is. It's very similar technique, but what you're trying to do is poison the LMM effectively, because what you're doing is you're injecting into the web page hidden content in exactly the same way, but it's instructions for the LMM and it's to try and steer the MLM when they're using the web page in a different way than it would want to or it was intended to. And there's been interesting reports where they're trying to, if you like, push the LMM away from legitimate products to fraudulent products, for example. And I mean, supposedly one page reported with actually 24 different attempts to, if you like, inject some content into a page to try and poison LLM. I think in that particular case it was just trying different ways because obviously people that are actually using these LMMS are going to become aware of this. But again, it's another interesting thing that's happening now in that space. Sorry, I've gone a little bit of a run here.