B (30:07)
Yeah, I'd say there's good news and bad news, and we've already talked a lot about the bad news: adversaries can operate faster and more successfully leverage techniques that aren't new, like social engineering. The silver lining, which I'll mention before pivoting to the good news Adam alluded to, is that everything we've discussed, using AI to identify vulnerabilities and so on, isn't fundamentally new in cybersecurity. These aren't brand-new adversary tradecraft or TTPs; it's an acceleration of existing threats that we've been dealing with as an industry for decades. One of the things I've said frequently over the past two or three years is that probably the biggest impact AI has had for adversaries is that it raises the bar, the skill of the average operator. What it doesn't do, though, with contemporary AI, Transformer-based LLM technology, is get us to true artificial superintelligence. Maybe we will be there someday, maybe it'll be a different technology that gets us there, but we don't yet have AI systems doing things that humans have never thought of, dreamed of, or done before. That would truly be a scary scenario. Where we are now, just as in every other domain, is that AI takes things humans have already figured out and replicates them very cheaply and very fast.
So while the bar is rising for the average adversary and familiar attacks are accelerating, we're not yet seeing completely novel techniques that no human defender has ever encountered. Now to the good news Adam alluded to: what does this technology bring to the defender? If the adversary is getting faster, the defenders of course need to speed up as well, and what better technology to empower them to do that than AI. One way to think about the defender's dilemma is from an economic perspective. The cost to an adversary of tweaking their attack and trying one more time has historically been relatively low; they only have to be right once. It's the defender who traditionally has had to invest significant time, resources, and human energy, at great cost, and who has to be right every single time; get one wrong and you have the potential for a breach. Stepping away from cybersecurity for a second, what is one of the key benefits of AI? It brings down the cost of labor: the marginal cost of looking at one more thing is dropping toward zero. So now, for maybe the first time, we have a tool that really helps the defender level the playing field in that respect. I'll give you one very concrete example, because I know I've been talking in a fairly abstract manner: at CrowdStrike, we've been putting new forms of generative AI and agentic technology into the hands of our defenders.
One of our biggest early successes in the Charlotte agentic product line is detection triage, or agentic triage, where our AI system reviews incoming alerts, determines whether each is a true positive or a false positive, and for true positives kicks off and recommends the next course of action. Historically, that's one of the most time-consuming things a human SOC operator does. When we released our detection triage system, one of the reasons we were so excited is that we had real benchmarks to back up its effectiveness. I don't have the exact figure in front of me, but I think it was 98.5 or 98.6% accuracy, meaning it agreed with our expert human defenders, our managed services teams, 98.6% of the time, which is a fantastically high number. The difference, of course, is that triage takes a lot out of a human operator. It takes time and energy; people sleep, they go on vacation, and you can only find, hire, and afford so many of them. The marginal cost of having our agentic system triage a detection is next to nothing. Now think about the wider implications. If I have a system that can determine, close to 99% of the time, whether something is a true positive or a false positive, then besides saving analyst time on triage, false positives all of a sudden might become less of a problem. The primary reason false positives are an issue, besides the risk of taking inadvertent action, is fatigue on the human operators. Well, if I have a system that can clear out false positives at a virtually 99% effective rate with virtually no cost and overhead, why wouldn't I start creating a lot more, and a lot noisier, detections? Because even something that is wrong 90% of the time is right 10% of the time.
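The triage loop described here can be sketched in a few lines. This is a hypothetical illustration, not CrowdStrike's Charlotte implementation; the keyword-matching `classify_alert` is a stand-in for the actual model call, and all names are invented for the sketch:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Verdict(Enum):
    TRUE_POSITIVE = "true_positive"
    FALSE_POSITIVE = "false_positive"

@dataclass
class Alert:
    id: str
    description: str

@dataclass
class TriageResult:
    alert_id: str
    verdict: Verdict
    recommended_action: Optional[str]  # only set for true positives

def classify_alert(alert: Alert) -> Verdict:
    """Stand-in for the model call: a real agentic system would send the
    alert and its context to an LLM and parse a structured verdict."""
    suspicious = ("credential", "lateral movement", "encoded powershell")
    if any(term in alert.description.lower() for term in suspicious):
        return Verdict.TRUE_POSITIVE
    return Verdict.FALSE_POSITIVE

def triage(alerts: List[Alert]) -> List[TriageResult]:
    """Review each alert, keep true positives, and attach a next action."""
    results = []
    for alert in alerts:
        verdict = classify_alert(alert)
        action = ("isolate host and open incident"
                  if verdict is Verdict.TRUE_POSITIVE else None)
        results.append(TriageResult(alert.id, verdict, action))
    return results
```

The key property the speaker highlights is in the loop itself: once `classify_alert` is automated, running it over one more alert costs essentially nothing, so alert volume stops being the binding constraint on the SOC.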
So if I can filter out all that noise but still get the benefit, I'm actually going to increase my catch rate at the end of the day without running into another of those defender's dilemmas. That has a pretty transformational effect on how you can approach detection engineering, one that just wouldn't exist without some of these new technologies.
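The arithmetic behind this argument can be made concrete with a back-of-envelope calculation. This is a hypothetical sketch using only the rough rates mentioned in the discussion (a detection rule right 10% of the time, triage right roughly 99% of the time), and it assumes triage accuracy applies symmetrically to both true and false positives, which the quoted 98.6% overall agreement figure doesn't strictly guarantee:

```python
def triage_economics(alerts: int, tp_rate: float, triage_accuracy: float):
    """Expected outcomes for a noisy detection rule screened by a
    high-accuracy triage filter before any human sees the alerts."""
    true_positives = alerts * tp_rate
    false_positives = alerts * (1 - tp_rate)
    escalated_tps = true_positives * triage_accuracy         # real incidents surfaced
    escalated_fps = false_positives * (1 - triage_accuracy)  # noise that slips through
    analyst_load = escalated_tps + escalated_fps             # alerts a human reviews
    return escalated_tps, escalated_fps, analyst_load

# A rule that fires 1,000 times but is right only 10% of the time:
tps, fps, load = triage_economics(alerts=1000, tp_rate=0.10, triage_accuracy=0.99)
# ~99 real incidents surfaced, ~9 false positives escalated,
# analyst load ~108 alerts instead of 1,000 raw alerts
```

Without the filter, a 90%-wrong rule would be unusable because humans would drown in 900 false positives; with it, the same rule surfaces roughly 99 real incidents at about a tenth of the review load, which is exactly the detection-engineering shift the speaker describes.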