Patrick Gray
Hi, and welcome to another Risky Business Soapbox episode. My name is Patrick Gray. For those of you who don't know, these Soapbox editions of the show are where we talk to a sponsor about some big topic, and today that topic is AI. These Soapbox editions are wholly sponsored, and that means everyone you hear in one of these shows paid to be here. Today we're speaking with Josh Kamdjou, who is one of the founders of an email security company named Sublime Security. And we're going to talk all about AI: what's hype, what's not, and why every single day, Josh, I get an announcement land in my inbox saying, hey, guess what? We've now got agentic AI in our product.
Josh Kamdjou
You're not interested in agentic hot dog, not hot dog?
Patrick Gray
Yeah, yeah, exactly. It's like hot dog, not hot dog. I was literally talking about that the other day with a friend. That's a Silicon Valley reference, for those who might be confused. But it just seems like every single vendor right now is doing some sort of agentic AI, LLM-based thing. You look at a lot of it and you're like, okay, you've done this engineering work for the press release. This isn't a real thing. But I have to say that's the minority. If I look at what most people are doing with large language models, it actually seems pretty sensible. So before we talk about what you're doing, what's your general take on our industry's approach to using this stuff? Because most of the time I'm looking at what people are doing and I'm like, okay, that makes sense, that's useful.
Josh Kamdjou
Yeah, I think it makes a lot of sense for a lot of use cases. The way I think about this is: if you're a security analyst or a security engineer or a detection engineer, there are problems that you face every single day as an organization, and there are problems where AI agents and LLMs are genuinely good at augmenting your workflows or automating a lot of that work. And there are some use cases where it doesn't make as much sense, but for the most part I think it's a very good thing for the industry.
Patrick Gray
Yeah, I mean, one of the things we were talking about before we got recording, just in a little pre-briefing chat, was that one thing that LLMs have mercifully done is put a bullet into the head of the idea that people need to use scripting languages. Right? Like the idea of someone introducing a new SIEM product in 2025, 2026 and saying, yeah, it's really awesome, you just got to learn this query language. Everyone's just going to tell them to go shoot themselves in a ditch, basically.
Josh Kamdjou
Yeah, I mean it turns out LLMs are very good at generating detections. I'm sure you've seen Claude Code if you're doing Python or Go. And so if you give an LLM enough context and documentation and tooling and knowledge, whether it's some sort of scripting language or a DSL or whatever it might be, it can be extremely good at doing that work.
Patrick Gray
Yeah, but that's the interesting thing, right, which is that the scripting language will still exist. It's just the user won't have to have anything to do with it anymore.
Josh Kamdjou
Yeah, yeah, exactly. And there's a lot of capability, like raw horsepower, behind some of this tooling. It's basically all of the good without any of the learning curve you might have to take on otherwise. And that's just one sliver of the use cases. There's the generation side, which is more GPT-esque LLM stuff, but then you get into really agentic workflows where we're talking about alert triage and post-processing and analysis, and there's this whole ecosystem of use cases that may or may not make sense to apply it to.
Patrick Gray
Yeah, so let's get into what you're doing with all of this stuff. For those who are not familiar, Sublime Security is an email security firm, a modern take on the Proofpoints of the world. Or what are the other ones? I mean, you've got... oh, what's that one? I always forget what they're called. Mimecast. They're still a thing, right?
Josh Kamdjou
I mean, yeah, there's about 150, maybe more.
Patrick Gray
Yeah. But there's a handful of ones that are really big. So that would be Proofpoint. I think Abnormal is one of the newer ones that's quite popular. So very much in that vein, right? Which is: filter out business email compromise, filter out malware, filter out phishing and whatnot. I guess one of the things that makes Sublime different is every installation is different. You basically use machine learning so that your product adapts to every environment. It's efficient. It also allows people to do detection engineering across their email infrastructure, which a lot of these services don't. Right? Like, stuff will be slipping through, and you send them an email and say, hey, would you mind catching messages like this? And maybe someone gets to it in a couple of weeks. With Sublime you can kind of crack the hood. So it's very much an email security platform where a security team can go mess with it. So I can already tell that there are going to be opportunities to do stuff with AI agents in there. That seems a pretty wide open space, though, right? So why don't you tell me where you actually chose to focus and what you did there.
Josh Kamdjou
Yeah, well, maybe a couple of words on why it's even important to solve these problems. I'm sure the audience is probably tracking all this stuff, but what we're seeing in the threat landscape, and not just in the email space, is adversary adoption of generative AI tooling to make their attacks more efficient. There are all of these really rad use cases that we can use on the defensive side, and obviously any good thing, an adversary is going to see if they can leverage it to make their operations more efficient. So, you know, Google Threat Intelligence Group put out this report, because they have the vantage point of Gemini, so they can see adversaries using and abusing Gemini to conduct targeted spear phishing operations and iterate on malware and do recon and things like that. So we're seeing more sophisticated attacks, we're seeing more rapid adaptation, we're seeing more tailored attacks. Spear phishing, which previously was super manual and time intensive and low volume, is now mass: it's happening everywhere, it's automated. So being able to adapt to the threat landscape is super important as we see it evolve more and more rapidly. And that's how we're applying these capabilities on top of our platform, because, like you mentioned, the way that we are designed, we're architecturally different from every other email security solution. We have a core detection engine that is actually a purpose-built domain-specific language, so it can describe complex attacker behavior. And like you mentioned, it's customized and tailored, and it learns on a per-organization basis over time.
And because it's a language that computers can speak, it's a ripe use case for AI agents to autonomously, or even semi-autonomously with human review, improve efficacy over time. So you see a new technique. Well, the traditional approach to this has been: you file a ticket and you wait, and maybe at some point in the future, weeks or months pass, and maybe there's a hand-wavy, yeah, we closed this. Well, instead you can have an agent speak this language. And this is one of the really, really cool things that we've built and are rolling out to our customers as we speak: our Autonomous Detection Engineer, or ADE. It can basically monitor our customer environments and autonomously improve efficacy and respond to changes in the threat landscape.
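The core idea here, detections written in a machine-readable language that both humans and agents can generate and evaluate, can be sketched in plain Python. To be clear, this is a hypothetical illustration, not Sublime's actual DSL: the field names and the rule itself are invented for the example.

```python
# Hypothetical sketch of a detection rule an agent could generate and a
# machine could evaluate -- illustrative only, not Sublime's real syntax.

def rule_suspicious_qr_lure(message: dict) -> bool:
    """Flag external messages that pair a QR-code image with lure language."""
    lure_phrases = ("scan to verify", "mfa reset", "account suspended")
    body = message.get("body", "").lower()
    has_qr = any(att.get("kind") == "qr_image" for att in message.get("attachments", []))
    is_external = message.get("sender_domain") != message.get("org_domain")
    return is_external and has_qr and any(p in body for p in lure_phrases)

msg = {
    "body": "Your account suspended. Scan to verify your identity.",
    "attachments": [{"kind": "qr_image"}],
    "sender_domain": "evil.example",
    "org_domain": "corp.example",
}
```

Because a rule like this is just data plus logic, an agent can propose it, a machine can run it at scale, and a human can read exactly what it does.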
Patrick Gray
Well, how does it do that? Because you mentioned earlier, right? Oh well, you know, some of it's autonomous and some of it requires human review.
Josh Kamdjou
Yeah.
Patrick Gray
And this is where, this is where the rubber meets the road when we're talking about agentic stuff. Right. Is because there are some things that like LLM driven agents can do where you can be completely hands off, you don't have to touch them, it's fine. Right?
Josh Kamdjou
Yeah.
Patrick Gray
But when it starts getting into more subtle, nuanced, context dependent stuff, everyone, especially around like creating detections and things like that, everyone tells me the same thing, which is it gets you 90% of the way there.
Josh Kamdjou
Yeah.
Patrick Gray
So where do you draw the line on what features you release? Like, how autonomous versus semi-autonomous does it need to be before you ship it out there to customers? I just want to get an idea of your thinking there.
Josh Kamdjou
Yeah, yeah. So we have two agents. The first is our Autonomous Security Analyst, or ASA. The way that we think about what makes sense for an agent to solve, and how we solve it, and we were talking about this before we started on the show, is: what are the functions that humans do today that we think an agent could effectively be trained or guided or given the right tools and knowledge to perform really, really well? So ASA basically acts as a Tier 1, Tier 2 analyst to investigate and triage attacks in depth and then take actions. One of the really valuable things that ASA has access to is the context of the environment, because it knows prior communication patterns; it knows what's normal and what's not. It also has a really deep knowledge base. We have basically fed it our entire detection repository, we've told it this is how you build detections in Sublime, and we've given it basically all of our machine learning functions and enrichments. So it can go and do these investigations and take actions: it can quarantine a message, it can reply to an end user and say, hey, thanks for reporting this. So that's agent one. And I'll come back to how we think about efficacy and automation.
Patrick Gray
Well, and straight away, you mentioned that it can thank the recipient of a bad message for reporting it. So I can imagine that if you set up a workflow where someone reports a message, then all of these things should happen, but there needs to be a little bit of investigation, and these things are usually either obviously really bad or not. So I can understand how you would get an agent in place that would just handle that for you. That makes sense to me, what you described.
Josh Kamdjou
Yes, yes, 100%. And I think there are certain classes of problems in security that are actually not solvable by agents today. I've been kind of on the roadshow giving this talk called Machine versus Machine, and one of the things that I've been talking about is the security AI agent trilemma. I have officially coined this term, so if you hear it around, you heard it here first. It's basically this trilemma that there's a trade-off in terms of the constraints that you have with agents. The trilemma involves speed, cost and efficacy. So if you want something that is...
Patrick Gray
Low latency, it's a pick two out of three sort of thing, right?
Josh Kamdjou
Yeah, it's pick two out of three, exactly. So if you want it to be really fast and really cheap, then it's not going to be effective, basically. And alert triage turns out to be one of those use cases that is actually a really perfect fit for using agents, because it doesn't have to be real time. And when we're talking real time, we're talking milliseconds, so you can take a little bit of time to do it. And the volume is relatively low. When we talk about real-time detection systems, we're talking very high volume, like an email security system that's analyzing every message; there are millions and millions of messages. So you can't apply an agent to every one of those messages, because it's going to be far too expensive or the latency is going to be too high. Alert triage is relatively low volume, so it's a great use case for that.
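The trade-off being described maps naturally onto a tiered pipeline: cheap, millisecond-scale rules run on every message, and the slow, expensive agent only ever sees the small flagged subset. A minimal sketch, where every function, keyword and verdict is an invented stand-in:

```python
# Sketch of the speed/cost/efficacy trade-off: fast heuristics scan
# everything; the costly agent runs only on the low-volume alert subset.

def cheap_rules(message: str) -> bool:
    """Millisecond-scale heuristic pass applied to every message."""
    return any(k in message.lower() for k in ("invoice overdue", "reset your password"))

def agent_triage(message: str) -> str:
    """Stand-in for a slow, expensive LLM agent; only runs on flagged messages."""
    return "attack" if "password" in message.lower() else "benign"

def pipeline(messages: list[str]) -> dict:
    flagged = [m for m in messages if cheap_rules(m)]
    verdicts = {m: agent_triage(m) for m in flagged}   # low volume -> affordable
    return {"scanned": len(messages), "agent_calls": len(flagged), "verdicts": verdicts}
```

The point is that `agent_calls` stays tiny relative to `scanned`, which is what makes agent-grade efficacy affordable at all.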
Patrick Gray
I remember a paper coming out, I don't know, 15, 20 years ago about a concept called near-real-time detection, and the context was absolutely around blowing up attachments in sandboxes. Right? They called it near-real-time detection because if you detect something a few seconds after it hits someone's inbox, that's fine. You can just go and nuke that message. Right? So I'm with you. And just on alert triage: there are entire businesses that exist around it, and I know you know Ed Wu over at Dropzone. That's all they do: alert triage in a SIEM using agents. And you're right, it works really well, because it is that mind-numbing work.
Josh Kamdjou
Exactly.
Patrick Gray
That people just don't want to do anymore. And this is going to keep coming up as a theme as well, right? Which is that you kind of have to think about these AI agents not as computer-science-lecture building blocks. You have to think of them much more as kind of like people with limited capacity.
Josh Kamdjou
Right, exactly. And I think there are a couple of key things when it comes to evaluating efficacy. Let's stick with the alert triage problem: if you have the agent force a verdict, force a decision, on every single alert, you're going to get some misclassifications. So one of the things, when we're talking about efficacy in Sublime, specifically with our Autonomous Security Analyst, is transparency of verdicts and chain of thought, but also allowing you to passively see what it would have done. So instead of taking an action, you build confidence over time. And then I think the really key thing is, just like a Tier 1, Tier 2 analyst in the SOC who doesn't know the answer at the end of their investigation is going to escalate to Tier 3, if ASA isn't confident in its verdict, it will actually render an unknown judgment. And that is the point at which we want a human review. Obviously you can customize that too and say, hey, in this unknown case, insert a warning banner instead, and just mitigate some of that risk. But I think it's important to understand what the limitations are and then account for that in the decision-making process.
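That escalation pattern, force a verdict only when confident and otherwise return "unknown" and hand off to a human or a warning banner, can be sketched like this. The thresholds and action names are illustrative assumptions, not Sublime's actual values:

```python
# Sketch of the "unknown verdict" idea: commit to a verdict only above a
# confidence threshold; otherwise escalate for human review.

def render_verdict(score: float, threshold: float = 0.9) -> str:
    """score: model confidence that the message is malicious (0..1)."""
    if score >= threshold:
        return "malicious"
    if score <= 1 - threshold:
        return "benign"
    return "unknown"   # not confident either way -> don't force a decision

def action_for(verdict: str) -> str:
    """Map each verdict to a configurable response."""
    return {"malicious": "quarantine",
            "benign": "deliver",
            "unknown": "warning_banner_and_escalate"}[verdict]
```

The "unknown" branch is the whole trick: it trades a forced (and sometimes wrong) classification for an explicit hand-off.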
Patrick Gray
Yeah, going back to Ed again, because he's very deep in this stuff. I did an interview much like this one, talking to him about these agents, and he said you really just have to think about these agents as being like 14-year-olds who are really eager to please. Right? And they will lie to you.
Josh Kamdjou
Yeah.
Patrick Gray
If they think they're going to tell you something you want to hear, because they haven't yet grasped that lying is bad, you know.
Josh Kamdjou
Yeah, yeah, exactly. So that's ASA. Now, the agent that I'm really, really excited about is ADE, our Autonomous Detection Engineer. And this is the first time that we're talking about this publicly, so hopefully my marketing team won't kill me for this. But we are starting to roll this out to our customers, and ADE will basically be able to take any sort of misclassification and autonomously build a tailored fix for that misclassification within the context of our customer's environment. And you asked about efficacy and how we know if something is good. One of the problems with agents is the lack of predictability in some cases. Right? You can't guarantee certain outcomes. For us, we're able to combine ADE with our underlying detection language to make it predictable. So when we're talking about tuning and misclassification, we first take an attack. Let's say there's a new technique. You know, QR codes were big about a year ago; there are all kinds of new techniques now. SVG smuggling is big. So the first thing we do is we give it to ASA, actually. We've got a multi-agent system; these agents are communicating with one another. ASA will produce a report initially, with its verdict and its analysis and a summary of what's suspicious about the message, and pass that to ADE. And ADE has this knowledge base; it has access to the DSL, this really specialized toolkit. It will generate a new detection, and then it will backtest that detection across historical messages. We've got a backtesting and retro-hunting capability, which you mentioned earlier. So it'll run that retroactively, and for every one of those results, we'll run those through ASA to assess: hey, are these false positives, or are these actually attacks?
And so it'll basically iterate on that detection until it gets to a highly effective detection, and then it'll output that final result, along with full explainability on how it got there. And then you can accept the new detection after that.
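The generate/backtest/judge loop being described can be sketched as follows. Every function here is an invented stand-in for the real agents and detection language: a candidate rule is backtested over historical messages, each hit is judged by an analyst stand-in, and the loop tightens the rule until every hit is a confirmed attack.

```python
# Sketch of an ADE-style loop: backtest each candidate detection over
# history, have a judge (the analyst agent) label every hit, and stop
# when the candidate fires only on confirmed attacks.

def backtest(rule, history):
    """Run a candidate rule retroactively over historical messages."""
    return [m for m in history if rule(m)]

def refine(keyword_sets, history, judge, max_rounds=5):
    """Try successively tighter keyword rules until precision is perfect."""
    for keywords in keyword_sets[:max_rounds]:
        rule = lambda m, kw=keywords: all(k in m for k in kw)
        hits = backtest(rule, history)
        if hits and all(judge(m) == "attack" for m in hits):
            return keywords, hits            # precise enough: surface for approval
    return None, []

history = ["svg lure click here", "svg newsletter", "plain newsletter"]
judge = lambda m: "attack" if "lure" in m else "benign"
best, hits = refine([["svg"], ["svg", "lure"]], history, judge)
```

In this toy run, the first candidate (`["svg"]`) also hits a benign newsletter, so the loop rejects it and settles on the tighter rule.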
Patrick Gray
So you can accept the new detection after it's already kind of run; it's showing you what's working, right? It shows you the output. So I think that's a little bit different, because what a lot of people are doing right now, and this is still early days, is using an agent to generate some sort of detection-as-code. They crap that out into a window, and then people have been reviewing it line by line, making sure that it works before they deploy it to prod. So this is a bit different, in that the whole thing has gone through and you're seeing what the output of it is.
Josh Kamdjou
Yes.
Patrick Gray
Before you're actually choosing to approve it. So from a workflow point of view, that should work pretty well. Do you have beta users of this already?
Josh Kamdjou
Yeah, and this is how we get to fully autonomous, to be clear. Because if you're just generating a new detection and YOLOing it, you're not actually doing the work of a detection engineer. A detection engineer is going to validate the efficacy of a detection before they publish it, and that's exactly what ADE is doing. So what we're moving towards right now, we call it semi-autonomous, because at the end of it we still require human review to actually review the results, accept it, push it forward and deploy it live. But once you build confidence, you can establish efficacy criteria and say, hey, if this comes back and we ran it over 30 days of retro data and it flagged 10 messages and ASA said all 10 of those were an attack, then I want to automatically approve that rule to go live in my environment. And that's how we get to fully autonomous. And that's ultimately how I think we can solve this. We've talked about how real-time classification at high volume is not a problem that can be solved directly by agents, because the cost is too high, because of that security trilemma. But in this architecture, where you have the DSL at the core and then you have this agent ecosystem, I think we're effectively doing that, where we can autonomously improve over time.
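That approval policy, auto-approve only when the backtest window is long enough and the analyst agent confirmed every hit, is simple to express. The numbers below are the example criteria from the conversation, not product defaults, and the function names are invented:

```python
# Sketch of an auto-approval gate for agent-generated detections:
# approve only if the retro window is long enough, the rule fired at
# least once, and every flagged message was confirmed as an attack.

def auto_approve(window_days: int, flagged: list, asa_verdicts: dict,
                 min_days: int = 30) -> bool:
    if window_days < min_days or not flagged:
        return False
    return all(asa_verdicts.get(m) == "attack" for m in flagged)
```

A single unconfirmed hit (or a too-short window) keeps the rule in the human-review queue instead of going live.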
Patrick Gray
Yeah. Now this begs the question: this sounds awesome, right? But where does it not help? Where do the human detection engineers who are working with Sublime fit in? Because the whole point of your platform is that you can crack the hood, you can mess with it yourself, right? That's the point of Sublime. But now it sounds like what you're building just goes full auto mode. Now, I get that there's a level of inspectability that is not there in other products, so that's really cool. But is it getting to the point now where the detection engineers won't really have to do anything except if something slips through?
Josh Kamdjou
Yeah.
Patrick Gray
Say some other control detects that an attachment got through to an endpoint or something, EDR flags it, and you're like, whoa, you're just going to go tell these agents, hey, you messed up, go fix it. And it just goes and fixes it. Is that sort of the future there?
Josh Kamdjou
That is the reality that we're living in now with Sublime. The threat hunting and the more advanced use cases, those are on-demand functionality, where you want to pop open the hood and do something very specific, very bespoke. Maybe it's an IR use case or whatever it might be, or maybe ADE failed to generate an effective rule. There is some sort of feedback and validation, where at the end of the day, if it doesn't work, then we're not going to push it live, right? And so we're now working with customers who never touch any of the more advanced functionality of Sublime. They're just deploying it, it's largely set and forget, and it just works really, really well.
Patrick Gray
But because you built this with the scripting language to begin with, it's given the agents something to use, right? You actually built something with an architecture that was well suited to have AI bolted onto it in the end. So, I mean, you kind of pulled a Homer on this one, right?
Josh Kamdjou
It works really well because we get to do everything. We're working with one university that has 100,000 mailboxes and a one-person IT and security team, right? They never look at Sublime, ever. It's full autopilot. And then on the other hand, we're working with some really sophisticated organizations like Netflix and Spotify who are in the weeds building detections or doing threat hunting. And so these days Sublime works for all of these types of companies really, really well.
Patrick Gray
I mean, you just mentioned they're doing this really sophisticated stuff around threat hunting. You and I both know Damien, right? Building a company called Nebula. And I can talk about it because, at the time of recording, they're in stealth, but they've announced they're going live next week. And I know you know them, so it's fine, we can talk about it.
Josh Kamdjou
Oh yeah.
Patrick Gray
So by the time this goes live, they will have announced. But they're doing automated threat hunting, right? So at what point does their agent ring up your agent? Yo, hey, incident responder slash threat hunter here. I need some data. Can you help me? Right?
Josh Kamdjou
Yeah, yeah. So we've actually been thinking about this agent-to-agent architecture. Sublime is basically a multi-agent architecture where we've got many of our own agents; our agents can spawn other agents, they can work together. And then the next evolution of where I think we're going as an industry is you're going to have agents of one company talking to agents of another. And so a Sublime agent, ASA or ADE or maybe our incident response agent or whatever, is going to need more information. Did this execute on the endpoint? Okay, let's go talk to the Nebula agent.
Patrick Gray
Let's go talk to... yeah, let's go talk to Mr. CrowdStrike.
Josh Kamdjou
Exactly.
Patrick Gray
Mr. CrowdStrike agent, yeah, exactly. Crazy.
Josh Kamdjou
So that's. That's where we're headed 100%. That's where we're headed. Yeah, yeah.
Patrick Gray
And I guess the question is: at what point do we develop some sort of standardized method for these agents to exchange information? Because they could just do it by talking to each other, but that's not exactly computationally efficient. Right? Do we develop that method of interchange, or do we let them work it out? Because they probably can, yeah.
Josh Kamdjou
I mean, there's MCP now. But for many of these use cases the APIs are well documented and the data is not that crazy: you can just give the agent the data model or the schema for how to make an API call, and it can figure that out. So I'm sure we'll see more adoption of standards.
Patrick Gray
Oh, you want some data, do you? Here, here's my schema. Go have at it. Right, I guess it's like that, right? Oh, do you need, do you need help using my API?
Josh Kamdjou
That's right. A little Clippy. Clippy for AI agents. That's what we need.
Patrick Gray
Yeah. It looks like you're trying to query a database.
Josh Kamdjou
That's right. It looks like you're doing incident response. Do you need to query a hash?
Patrick Gray
But look, something interesting has popped up here, right? Which is that any agentic anything worth its salt is multi-agent now. Right? Like, the idea that you're just going to do one agent... And I think that's why people are still skeptical about some of this stuff: because if you go and start messing around with ChatGPT or Gemini or whatever, it's a frustrating experience, because you're dealing with these single models that are trying to be all things to all people.
Josh Kamdjou
Yeah, yeah.
Patrick Gray
You know, and they're bad as a replacement; let's just say they're frustrating to use. But I think when you're talking about specific models that are designed to only be given bite-sized tasks that they can actually chew, that's when it starts getting genuinely useful. Right? But you have to scope it properly. And part of scoping it properly is using a bunch of models together, where each one of those models knows what its job is, and its job is small.
Josh Kamdjou
Yes, yes. So this is a really important point, especially for anyone who's thinking about doing this: a model that, quote unquote, performs better generically will perform worse at a specific task than a lesser model that's given the right context, tools and knowledge. And that's something that we have validated, and it's why ASA works so well: the extent of the contextual information it has about the environment, the tooling that's available to it with our DSL, the knowledge base that we provide it. There's so much deep knowledge and domain expertise in the context window and the prompt that we give it. Those are the things that are super key to making these agents really effective.
Patrick Gray
Now, what you're describing in terms of what Sublime is now, it's interesting, because I'd imagine that a lot of these other large email security providers just can't do this, right? Because they don't have the DSL, they don't have the site-specific context. Right?
Josh Kamdjou
Yes. It requires this next generation of architectures. And I think this is the architecture of the future for real-time detection and prevention systems at large, not just for email security. Yeah.
Patrick Gray
But the way you've laid it all out, and full disclaimer here, I'm an advisor to Sublime, so if they do well, I do well. But the way you're describing it, it does pretty much sound like the holy grail of a system like this. What more can you do here to make it better, easier to use, more effective? I'm sure in five or ten years someone will think of something, but right now, at what point are you just like, ah, I'm done, I'm out, I've got to resign? So there's a question: where do you go from here, now that you've nailed down this agentic stuff and it's a self-saucing pudding?
Josh Kamdjou
Yeah, well, there are plenty of things that we're thinking about, but just in the agentic space, we're just getting started, even for our existing customers today. The way that we think about it is: what are the things that humans on teams do today? Hey, we review alerts, we review user reports, we tune detections, we threat hunt, we do IR. So we're thinking about what these different things are that humans are doing today, and how we can take that workload off of them and augment them. We want them focused on the most high-leverage things. We don't want them doing the menial things that should just be automated away.
Patrick Gray
But it sounds like you're there already with most of it. What's left to automate away?
Josh Kamdjou
I don't want to give too much away. There's a bunch of things that you do in IR, you know. You want to maybe correlate campaigns. We do a bunch of that already today, but I think there's opportunity there.
Patrick Gray
For customers to do or is this for you to do back at Sublime HQ to get yourself like that God view of like malicious actors, like within a customer's environment. Okay, just within their environment so you're not sitting back like pulling in some metadata across all customers and you know.
Josh Kamdjou
Well, we do federated threat intelligence. But what I'm talking about, and I think that's ripe for automation in agentic use cases too, is at the per-customer environment level. When you receive a campaign, a really large campaign might hit a thousand people, ten thousand people in a really large organization. And what we're seeing now is more and more diversity in those campaigns. With Sublime today, we have fuzzy grouping under the hood, where we do our best to correlate similar messages in a campaign together, so that it's one click to remediate, or it's automatically remediating all of them at once. But as we see more and more diverse campaigns, that fuzzy grouping problem may get harder. And from an IR perspective, you want to go and understand the full impact of a campaign: what other users received this? Today you might be doing some manual operations via your SOAR, like searching for a wildcard in the subject, all these techniques to find similar messages. Well, that's something that could potentially be automated.
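Fuzzy grouping of a campaign can be sketched as normalizing away the per-target parts of a subject line (here, just the digits and punctuation) and bucketing messages on what remains. A real system would use much richer similarity signals; this is illustrative only:

```python
# Sketch of fuzzy campaign grouping: collapse per-target variation in
# subjects so one action can remediate the whole campaign at once.
import re

def normalize(subject: str) -> str:
    s = re.sub(r"\d+", "#", subject.lower())        # collapse per-target numbers
    return re.sub(r"[^a-z#\s]", "", s).strip()      # drop punctuation noise

def group_campaign(subjects: list[str]) -> dict[str, list[str]]:
    groups: dict[str, list[str]] = {}
    for s in subjects:
        groups.setdefault(normalize(s), []).append(s)
    return groups

groups = group_campaign([
    "Invoice #4821 overdue!",
    "Invoice #9977 overdue!",
    "Team lunch Friday",
])
```

Both invoice lures collapse into one bucket, so a responder (or an agent) can quarantine the whole cluster in one step.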
Patrick Gray
Yeah, no, I'm with you. So it is that sort of threat hunt use case where you've got another agent that might get a tip from yet another agent.
Josh Camdu
Right.
Patrick Gray
Like, this is a bad message. Go and investigate, see if there's anything else that could possibly be clustered with this. I get what you mean.
Josh Camdu
Yeah. And I'm not saying that's the next thing we're doing. It's just how we're thinking about it: there's a bunch of manual things that people do today. How can we augment them? How can we let these agents work together more efficiently to solve these problems for our customers?
Patrick Gray
Yeah. Now, before we wrap it up, I got to ask, what is the most spectacular agentic fail you've seen developing this stuff, where it just went absolutely haywire and did something insane in a dev environment? What's the best one? I can see from your smile there's many, many cases.
Josh Camdu
rm -rf on the prod database. No, I'm just kidding. Did you see that Twitter thread recently on this? I think it was Replit. A Replit agent went rogue and completely wiped some dude's entire prod database. He was just developing, and it just went off and wiped all of prod. It's like, holy, dude.
Patrick Gray
I don't know. It's hard to know if those stories are real or if it's just social media.
Josh Camdu
This one was very real.
Patrick Gray
Yeah, man, you don't need that. But I mean, so maybe not rm -rf, but was there anything you could think of? I'm guessing most of the time it's just like, oh, well, that didn't work really well, or that classification was wrong. But can you think of any funny ones? I'm mining for comedy here.
Josh Camdu
Yeah, there hasn't been anything to that degree, because these are very constrained problems. We don't have agents going off into these other systems doing crazy things. It's very much a constrained problem where you have an output: you're rendering a verdict, rendering a judgment, building a detection, threat hunting. So we haven't had anything crazy like that.
Patrick Gray
Yeah, that's a shame. I expect to be the first to know.
Josh Camdu
Should that happen, I will let you know for sure.
Patrick Gray
All right, Josh Camdu, thank you so much for joining us for that discussion. Very interesting stuff. It's just cool, this stuff. I'm a believer, right? As much as it annoys me to say it, I'm a believer.
Josh Camdu
I never thought, you know... I'm a security guy to my core, and I would have been a skeptic a couple of years ago, but I am a believer now.
Patrick Gray
Yeah. I mean, I'm not a believer in all these generic agents. I stand by my initial take, which is that they're an efficiency tool, and I think the generic agents are only ever going to get so good. But I caught up with a friend in Melbourne a few weeks ago, one of those people who loves to poop on new tech, and I hadn't seen him in years. I'm like, hey, what have you been up to? He said, man, I've been playing around heaps with AI and it's amazing. And seeing this guy, Captain Cynical, like that? It's really sick.
Josh Camdu
It really is.
Patrick Gray
Yeah, he's like the grumpy cat of Australian infosec, and there he was, sitting there just raving about all the cool stuff he's been doing with AI. Anyway, we'll wrap it up there. Josh Camdu, great to see you as always, my friend, and I'll look forward to chatting to you again soon.
Josh Camdu
Thanks, Pat. Me too, man.
Podcast Summary: Risky Business – "Soap Box: Why AI Can't Fix Bad Security Products"
In this episode of Risky Business, host Patrick Gray engages in an in-depth discussion with Josh Camdu, co-founder of Sublime Security, about the role of Artificial Intelligence (AI) in the field of information security. The conversation centers around the effectiveness of AI in enhancing security products and the challenges associated with integrating AI into existing security infrastructures.
Patrick Gray initiates the conversation by addressing the proliferation of AI claims among security vendors, particularly the buzz around "agentic AI" and large language models (LLMs). He expresses skepticism about many vendors' AI implementations, suggesting that much of it may be more about marketing than actual functional advancements.
Notable Quote:
"[...] it just seems like every single vendor right now, they're doing some sort of agentic AI LLM based thing. You look at a lot of it and you're like, okay, you've done this engineering work for the press release. This isn't a real thing."
— Patrick Gray [00:50]
Josh Camdu provides a balanced perspective, acknowledging that while some AI integrations may be superficial, many applications of LLMs in the security industry are genuinely useful and enhance workflows for security professionals.
Notable Quote:
"There are problems that you have as an industry [...] and AI agents, LLMs are genuinely good at augmenting your workflows or automating a lot of that work."
— Josh Camdu [02:19]
The discussion delves into how AI, particularly LLMs, is changing the way security teams operate. Patrick Gray highlights how LLMs have lessened the reliance on scripting languages, making detection engineering more accessible and reducing the learning curve for new security tools.
Notable Quote:
"[...] LLMs have done mercifully is put a bullet into the head of the idea that people need to use scripting languages."
— Patrick Gray [02:50]
Josh Camdu agrees, emphasizing that LLMs can effectively handle the generation of detections and streamline complex tasks without requiring users to engage directly with scripting languages.
Notable Quote:
"If you give an LLM enough context and documentation and tooling and knowledge [...] it can be extremely good at doing that work."
— Josh Camdu [03:21]
Patrick Gray introduces Sublime Security as a modern email security platform that stands out due to its adaptability and efficiency, allowing security teams to delve deeply into their email infrastructure. He raises the question of integrating AI agents into such a platform, prompting Josh Camdu to elaborate on Sublime's innovative solutions.
Notable Quote:
"We have an autonomous security analyst or ASA, and an autonomous detection engineer or ADE. They monitor environments and autonomously improve efficacy."
— Josh Camdu [08:58]
Josh Camdu describes ASA as a Tier 1 and Tier 2 security analyst that can investigate, triage, and take action on threats autonomously. ASA leverages a domain-specific language (DSL) tailored to describe complex attacker behaviors and uses extensive context from the customer’s environment to enhance accuracy.
Notable Quote:
"Our autonomous security analyst acts as a Tier 1, Tier 2 analyst to investigate triage attacks in depth and then take actions."
— Josh Camdu [10:00]
ASA can perform tasks such as quarantining malicious messages and communicating with end-users, significantly reducing the manual workload for security teams.
Patrick Gray and Josh Camdu explore the concept of the "Security AI Agent Trilemma," a term coined by Camdu to describe the trade-offs between speed, cost, and efficacy in AI-driven security solutions.
Notable Quote:
"If you want something that is really fast and really cheap, then it's not going to be effective basically."
— Josh Camdu [12:33]
This trilemma underscores the challenges in deploying AI agents that are both cost-effective and highly accurate, especially in high-volume, real-time detection systems.
Building on ASA, Josh Camdu introduces ADE, an AI agent designed to autonomously generate and validate new security detections. ADE iterates on misclassifications by creating new rules, backtesting them against historical data, and refining their accuracy before implementation.
Notable Quote:
"ADE will basically be able to take any sort of misclassification and autonomously build a fix for that misclassification within the context of our customer's environment."
— Josh Camdu [16:36]
ADE represents a step towards fully autonomous security operations, allowing detection engineers to focus on more sophisticated threat hunting and incident response tasks.
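The backtest-then-deploy loop attributed to ADE, taking a misclassification, generating a candidate rule, and validating it against labeled historical mail before it goes live, follows a standard detection-evaluation pattern. The sketch below is a hypothetical illustration of that pattern; the function names, the toy rule, and the metrics chosen are assumptions, not Sublime's API.

```python
from typing import Callable

def backtest(rule: Callable[[str], bool],
             history: list[tuple[str, bool]]) -> dict[str, float]:
    """Run a candidate rule over (message, is_malicious) labeled history
    and report precision, recall, and raw error counts."""
    tp = fp = fn = tn = 0
    for msg, is_malicious in history:
        hit = rule(msg)
        if hit and is_malicious:
            tp += 1
        elif hit and not is_malicious:
            fp += 1          # false positive: would annoy a real user
        elif not hit and is_malicious:
            fn += 1          # false negative: a miss the rule doesn't fix
        else:
            tn += 1
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"precision": precision, "recall": recall, "fp": fp, "fn": fn}

# Toy candidate rule: flag any subject mentioning "overdue".
rule = lambda msg: "overdue" in msg.lower()
history = [
    ("Invoice overdue - pay now", True),
    ("Your invoice is OVERDUE", True),
    ("Team lunch on Friday", False),
    ("Credentials expired - reset here", True),
]
report = backtest(rule, history)
```

The decision logic the summary describes would then gate deployment on these numbers, for example refusing to ship a rule whose backtested false-positive count is non-zero, and iterating on the rule until the metrics clear a threshold.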
The conversation shifts to the future of AI in security, emphasizing the importance of multi-agent systems. Josh Camdu envisions a landscape where different security agents from various companies can communicate and collaborate, enhancing overall threat detection and response capabilities.
Notable Quote:
"We're going to have agents of other companies talking to other agents. [...] your agent ring up another company's agent for additional data."
— Josh Camdu [25:14]
Patrick Gray highlights the need for standardized communication protocols among agents to ensure efficiency and interoperability.
Notable Quote:
"At what point do we develop some sort of standardized method for these agents to exchange information?"
— Patrick Gray [26:01]
While the AI-driven agents offer substantial automation, both Patrick and Josh acknowledge the necessity of human oversight to handle nuanced and context-dependent scenarios. They emphasize that current AI agents can handle routine tasks efficiently but require human intervention for more complex decisions.
Notable Quote:
"They will lie to you if they think they're going to tell you something you want to hear."
— Patrick Gray [16:29]
Josh Camdu reiterates the importance of transparency and human review in maintaining the efficacy and reliability of AI-driven security solutions.
Josh Camdu shares insights into how Sublime Security's AI agents are currently benefiting customers, ranging from large organizations with minimal human intervention to sophisticated companies engaging in active threat hunting. This versatility demonstrates the adaptability and effectiveness of their AI-driven approach.
Notable Quote:
"We're working with one university who is 100,000 mailboxes, one person IT and security team [...] it's just full autopilot."
— Josh Camdu [23:57]
Despite the successes, the conversation acknowledges ongoing challenges, such as handling diverse and complex threat campaigns and enhancing agent-to-agent communication. The future direction involves refining AI agents to take on more intricate tasks without compromising on accuracy or efficiency.
Notable Quote:
"We have a fuzzy grouping technology [...] as we see more and more diverse campaigns, that fuzzy grouping problem may get harder."
— Josh Camdu [32:07]
As the episode wraps up, both Patrick Gray and Josh Camdu express a strong belief in the potential of AI to revolutionize information security. They acknowledge the current limitations but remain optimistic about future advancements and the continued integration of AI agents into security operations.
Notable Quote:
"I never thought [...] but I am a believer now."
— Josh Camdu [36:20]
This episode of Risky Business provides a comprehensive look into the practical applications and challenges of integrating AI into information security. Josh Camdu’s insights on Sublime Security’s innovative use of autonomous agents highlight both the current successes and the future potential of AI-driven security solutions. The discussion underscores the importance of balancing automation with human oversight to achieve optimal security outcomes.
End of Summary