Risky Business Podcast Summary
Title: Soap Box: AI has entered the SOC, and it ain't going anywhere
Host: Patrick Gray
Guest: Ed Wu, Founder of Dropzone
Release Date: June 16, 2025
Introduction and Guest Background
In this special Soap Box edition of the Risky Business podcast, host Patrick Gray welcomes Ed Wu, the founder of Dropzone, an AI platform designed to function as a Tier One Security Operations Center (SOC) analyst. Patrick discloses that he is an advisor to Dropzone, noting both his vested interest and his longstanding collaboration with Ed Wu. Ed's expertise stems from his tenure at ExtraHop Networks, where he played a pivotal role in transforming the platform from a network-oriented product into a security-focused solution. Patrick underscores Ed's credibility by noting the endorsement of one of ExtraHop's founders, who is now an investor in Dropzone.
Patrick Gray [00:00]: "Ed, thank you for joining me. I thought today what we could really talk about is not just about Dropzone and what it does in the SOC. Obviously, we'll touch on that. But I wanted to talk about the use of AI in cybersecurity more generally."
The Evolution of AI in the SOC
Patrick initiates the conversation by addressing the integration of AI within SOCs, noting that AI tools like Large Language Models (LLMs) are increasingly being adopted for tasks such as log processing, alert analysis, and triaging.
Patrick Gray [00:00 - 02:09]: "Do you think that's a fair statement?"
Ed Wu agrees, drawing parallels between the evolving acceptance of AI in software development and its growing role in cybersecurity operations.
Ed Wu [02:09]: "With any new technologies, there's always hesitation and skepticism. But over time as the early adopters start to see return, words get spread out and then the rest of the community start to pick up."
He references Microsoft’s Security Copilot as an early example and observes significant maturation in AI technology over the past two years, leading to widespread production use in SOCs.
Challenges and Concerns with AI SOC Agents
Patrick raises a critical concern regarding the delegation of alert handling to AI agents: the risk of AI misclassifying true positives as false positives.
Patrick Gray [07:56]: "Now I bet already some people are listening to this and saying, well hold on buddy...how can you assuage those fears?"
Ed acknowledges the inherent limitations of LLMs, particularly the issue of hallucinations, where AI might inaccurately dismiss genuine threats.
Ed Wu [08:41]: "There will always be a degree of hallucination. There is no way to completely remove all hallucinations from the large language models."
Mitigating Risks: Accuracy and False Positives
Ed emphasizes Dropzone's commitment to minimizing false negatives, ensuring that genuine threats are not mistakenly dismissed as benign, while acknowledging the impossibility of completely eliminating errors.
Ed Wu [09:34]: "Our goal is to build a system...at or above the accuracy compared to a typical human tier one security analyst."
Patrick highlights the importance of benchmarking AI performance against human analysts, to which Ed confirms Dropzone’s participation in comparative evaluations, often demonstrating parity or superiority in accuracy.
Ed Wu [11:49]: "When you run through exercises like these, the accuracy of an AI like our AI SOC analyst is definitely on par, if not sometimes meaningfully better than the human team members."
Multi-Agent Systems and Task Decomposition
Patrick explores the concept of multi-agent AI systems, where supervisory models oversee investigative models to enhance accuracy and reliability.
Patrick Gray [12:51]: "...where you almost have an AI that can play that role of being a supervisor to the core LLM that's doing most of the work."
Ed elaborates on Dropzone’s use of multiple specialized personas within their AI system, enabling complex workflows and self-reflection to improve task accuracy.
Ed Wu [13:49]: "Using multimodal, like different prompts, different temperatures, different models to generate the same output and then compare and contrast...like voting."
He likens this approach to dividing responsibilities among human specialists to manage complex investigations effectively.
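The "voting" idea Ed describes can be illustrated with a minimal sketch: run the same classification through several independent variants (different prompts, temperatures, or models) and take the majority verdict. The analyst functions below are illustrative stand-ins for real LLM invocations, not Dropzone's actual system.

```python
from collections import Counter

# Each "analyst" stands in for one LLM configuration (a distinct
# prompt, temperature, or base model). The rules here are toy logic
# purely for illustration.

def analyst_a(alert):  # e.g. low-temperature, terse prompt
    return "benign" if alert["failed_logins"] < 5 else "suspicious"

def analyst_b(alert):  # e.g. higher-temperature, verbose prompt
    return "suspicious" if alert["src_country"] not in alert["usual_countries"] else "benign"

def analyst_c(alert):  # e.g. a different base model entirely
    return "suspicious" if alert["failed_logins"] >= 3 else "benign"

def vote(alert, analysts):
    """Majority vote across independent verdicts ('like voting')."""
    verdicts = Counter(a(alert) for a in analysts)
    return verdicts.most_common(1)[0][0]

alert = {"failed_logins": 4, "src_country": "NL", "usual_countries": {"AU"}}
print(vote(alert, [analyst_a, analyst_b, analyst_c]))  # → suspicious
```

Disagreement between the variants is itself a useful signal: a split vote can be escalated to a human analyst rather than silently resolved.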
Extending AI Applications Beyond SOC
Shifting focus beyond SOCs, Patrick inquires about other cybersecurity domains ripe for AI disruption. Ed identifies penetration testing and code reviews as prime areas where AI can automate laborious and high-quantity tasks.
Ed Wu [18:32]: "Pen testing...manual fuzzer...code reviews...a lot of code commits...benefit from security reviews."
Patrick adds insights on AI’s potential to revolutionize Static Application Security Testing (SAST), highlighting the diminishing moats as commodity models become adept at code analysis.
Patrick Gray [20:17]: "...a friend...turn into a startup because it's all done with commodity models."
Ed acknowledges the strengths of LLMs in understanding code but points out challenges with large, complex codebases requiring deep contextual understanding.
Ed Wu [20:58]: "Large language models are very good at understanding code...complex interactions with internal libraries or proprietary libraries or APIs."
Limitations and Expectations
Patrick underscores a common misconception: AI cannot compensate for inadequate detection stacks or poor data quality. Effective AI SOC agents require robust, comprehensive logging and context.
Patrick Gray [22:16]: "...an AI SOC analyst is only going to work well when you've got a detection stack that's pulling in the right information to begin with."
Ed concurs, providing examples where missing logs render AI investigation impossible and highlighting Dropzone’s capability to recommend detection rule adjustments to reduce noise.
Ed Wu [22:56]: "It's technically impossible to investigate those alerts if there are no logs at all...our technology will propose recommendations like tweaks on the detection rules."
The Future of AI in Cybersecurity
Patrick references a recent paper by Apple critiquing large reasoning models, prompting Ed to discuss the importance of task decomposition in leveraging AI effectively.
Ed Wu [25:12]: "The art of using large language models is task decomposition...our AI SOC agent is looking at an alert and trying to make sense of this alert and investigate it, on average, our system makes close to 100 distinct large language model invocations."
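The task decomposition Ed describes can be sketched as breaking one investigation into many small, focused model calls rather than a single monolithic prompt. The `invoke_llm` stub and the step names below are hypothetical, chosen only to illustrate the pattern; they do not reflect Dropzone's actual pipeline.

```python
# Count how many focused model invocations one investigation produces.
INVOCATIONS = 0

def invoke_llm(task, context):
    """Stub for a single, narrowly scoped LLM call."""
    global INVOCATIONS
    INVOCATIONS += 1
    return f"result of {task}"

def investigate(alert):
    # Each sub-question gets its own invocation with its own context,
    # rather than one giant prompt that must answer everything at once.
    steps = ["summarize alert", "extract indicators", "check user history",
             "check asset criticality", "correlate related logs"]
    findings = [invoke_llm(step, alert) for step in steps]
    # A final, separate call synthesizes the sub-findings into a verdict.
    return invoke_llm("synthesize verdict", findings)

verdict = investigate({"rule": "impossible travel", "user": "alice"})
print(INVOCATIONS)  # → 6
```

A real investigation would branch (each finding can spawn further queries), which is how a single alert can plausibly accumulate on the order of 100 invocations.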
He envisions AI as force multipliers for human security professionals, enhancing their capabilities rather than replacing them.
Ed Wu [28:25]: "We want to uplevel human security engineers and human security analysts to be like the generals and special forces where they have an army of AI middle schoolers or AI foot soldiers."
Patrick concurs, emphasizing the evolving understanding of AI’s practical applications and limitations in the cybersecurity landscape.
Patrick Gray [30:18]: "We're getting a better idea of how to use it, what it's good at, what it's not so good at yet."
Conclusion
Patrick and Ed conclude by acknowledging the transformative potential of AI in SOCs and broader cybersecurity domains. They emphasize the importance of realistic expectations, robust data infrastructure, and collaboration between human expertise and AI capabilities.
Patrick Gray [30:44]: "Ed, we're going to wrap it up there. Always a pleasure to chat to you...we learn a lot."
Ed Wu [30:44]: "Thank you for having me."
This episode of Risky Business delves deep into the integration of AI within SOCs, exploring both the advancements and challenges posed by AI-driven security operations. Ed Wu provides a nuanced perspective on leveraging AI as a powerful tool to augment human analysts, while also candidly addressing the limitations inherent in current AI technologies.
