Podcast Summary: CISO Series Podcast – "Not Enough Hallucinations? Let’s Outfit Your LLM with Another LLM"
Hosts: David Spark, Eddie Contreras (Frost Bank), Andy Ellis
Guest: Anthony Kandeas (CISO at Weight Watchers)
Date: July 8, 2025
Episode Overview
This episode delves deep into the evolving chess match of cybersecurity in the age of AI and LLMs (Large Language Models). The conversation centers on the practical and philosophical challenges of deploying AI agents in cybersecurity, how to treat and supervise automation, pitfalls in current AppSec training, and the increasing complexity (and sometimes circular logic) of using AI to manage other AI systems.
Key Discussion Points and Insights
1. Cybersecurity as an Evolving Chess Match
- Opening thoughts: Cybersecurity is a never-ending game of adaptation as both threats and defenses continuously evolve.
“Threats are evolving, which requires defenses to evolve. This is a continuous cycle with no end in sight and it makes cybersecurity so much fun to work in.” – Anthony Kandeas [00:04]
2. Food at Cybersecurity Conferences – A Lighthearted Intro
- Bad conference food pushes attendees out of vendor-sponsored areas, hurting vendor visibility and engagement.
- Insight: The experience and environment around conference food matter—plain trays and lackluster options send people elsewhere, impacting event ROI for sponsors.
- Quote:
"If you don't have an experience built around the food, they're going to go and eat and you're not going to be able to get them back in." – Eddie Contreras [03:18]
3. Treating LLMs Like (Junior) Team Members
- Main question: Should AI agents/LLMs be treated as junior team members, given that some practitioners recommend onboarding and educating them the same way?
- Eddie draws parallels to the familiar "90-day onboarding" for new hires but notes it's more nuanced with AI.
- You can't just drop AI into an environment; context matters for both people and machines.
- Context adaptation is essential.
- Anthony’s take: AI shouldn’t be credited as "junior team members" but rather as "tireless interns," stressing a practical, low-expectations approach.
- Supervising AI is about scalable quality assurance – trust but verify.
- Quote:
"Treating them as junior team members is giving them too much credit...they are tireless interns." – Anthony Kandeas [08:05]
- Output QA is critical; creating audit logs and artifacts is essential for trust.
- On measurement:
“How do you understand that these outputs that these AI models are coming to... are accurate?” – Anthony Kandeas [09:09]
4. Hiring for Potential, Not Just Credentials
- Prompt: How do you uncover if a candidate truly has a willingness to learn and problem-solve?
- Anthony values ingenuity and evidence of active learning (e.g., following Brian Krebs) above rote knowledge.
- How a candidate approaches unfamiliar problems, sets up safety nets, and shows readiness to fail and pivot matters more than reciting technical definitions.
- Eddie: Looks for comfort with feedback and the ability to read the room.
- Hiring shouldn't just be about the technical resume; dialogue, feedback, and problem-solving instincts matter just as much.
- Quote:
"Don't always fall in love with the smartest resume...it's the conversation and the dialogue in the interview around constructive feedback." – Eddie Contreras [13:15]
- Anthony expands:
"Did they create any safety nets in their solution?...That shows a level of sophistication higher than other candidates." [14:45]
5. What's Worse? Security Skill vs. Trust
(Game Segment: "What's Worse?")
- Scenario 1: Highly skilled, effective security professional no one trusts or wants to hear from.
- Scenario 2: Popular, well-liked, but technically incompetent, can't improve company security posture.
- Eddie and Anthony agree: the first scenario's isolation and lack of influence are worse—for morale and for the company's ultimate security posture.
- The technically skilled person can at least respond in a crisis, but being powerless is deeply unpleasant and organizationally dangerous.
- Memorable exchange:
"In the latter, ignorance is bliss...until the day the S hits the fan." – Anthony Kandeas [21:03]
"If they really love me that much and something really did hit the fan, I'm gonna get funding to bring in a third party..." – Eddie Contreras [22:28]
- “I told you so” culture is unhelpful. Real change requires top management buy-in and awareness, which isn’t always there—even after a breach.
- Quote:
"If you look at every breach that's been out there, there's always artifacts where this was being discussed." – Eddie Contreras [23:26]
6. AppSec Training: Compliance Box-Checker or Valuable?
- Reddit’s critique: Is AppSec training just checking a compliance box?
- Eddie: It's what you train on that matters. Make it relevant and engaging; tie it to business outcomes and actual code practices.
- Cookie-cutter, detached content falls flat.
- Quote:
"If you're just pulling stuff off of NIST and OWASP and saying these are the things that we want to talk about, it can be dry. But if you're making it relevant to your organization...that's when you start to talk about, okay, this is energizing, it's engaging." – Eddie Contreras [25:49]
- Anthony: Threat modeling and immediate feedback are better for awareness.
- Questions the real ROI of mandating hours of generic AppSec training for all developers – significant opportunity cost.
- Envisions a "vibe security code coach"—an IDE agent that provides real-time feedback, fitting naturally into developers' workflows and interests.
- Quote:
"Why isn't there this instant feedback in the IDE as these developers make mistakes, learn how to get feedback immediately, and get reinforced on secure coding practices?" – Anthony Kandeas [27:17]
7. AI to Solve AI Problems? (Stacking LLMs)
- Discusses Google DeepMind’s new tool that increases transparency between APIs and LLMs.
- Should we use one AI to check or moderate another, or is this just compounding complexity?
- Anthony: Disagrees with “don’t use AI for your AI problems.”
- Argues the solution is domain-specific LLMs—not just stacking general ones.
- Observability/logging is critical. Specialized, focused models can validate outputs from broader ones.
- Quote:
"We should be using AI to solve our AI problems 100%. But...it shouldn’t just be these broad, large language models...they need to be intentional." – Anthony Kandeas [31:17]
- Eddie: Compares it to "Inception"—stacking layers on layers can be valuable if each has a clear, bounded purpose.
- Understand what AI is for—detection, quality control, repetitive work, etc.—and integrate accordingly.
- Quote:
"AI is a part of the ecosystem...it's really understanding how to make the most out of it, as opposed to just putting it back on the shelf and pretending like that the spinner stopped..." – Eddie Contreras [32:34]
Notable Quotes & Memorable Moments
- On AI supervision:
"AI has the ability to have context, it has the ability to understand environments...so you can't just launch an army of agentic AI agents and say it's going to perform just like my team." – Eddie Contreras [07:11]
- On problem solving in interviews:
“Did they create any safety nets in their solution?...That shows, I think, a level of sophistication higher than other candidates.” – Anthony Kandeas [14:45]
- On training effectiveness:
"Everyone's vibe coding...where is the vibe security code coach in this stratosphere?" – Anthony Kandeas [27:17]
Timestamps for Key Segments
- Opening Cybersecurity Chess Match & Conference Food: [00:00–04:59]
- LLMs as Team Members/Interns, Supervision, QA: [05:05–10:30]
- Hiring for Learning and Problem-Solving, Interview Approach: [10:30–15:38]
- Game: “What’s Worse?” Skill vs. Likability: [17:22–22:49]
- AppSec Training: Value or Checkbox: [24:47–29:51]
- AI to Solve AI Problems, LLM on LLM Debate: [29:51–34:20]
Conclusion
The episode weaves together practical strategies and high-level thinking on managing both human and machine actors in cybersecurity. The hosts and guest emphasize contextual understanding for both people and AI, the importance of tailored training and real-time feedback, and a pragmatic embrace (rather than rejection) of AI complexity—so long as it’s supervised and intentional.
Final Takeaway:
You can’t just plug in new talent—whether human or AI—without attention to context, culture, and continuous supervision. Security effectiveness hinges on aligning technical capability, learning agility, and clear-eyed management of both tech and team dynamics.
For more episodes and show notes, visit cisoseries.com
