CISO Series Podcast: Imagine Scaling Mistakes 5x Faster. Thank You, Automation! (LIVE in NY)
Hosts: David Spark, Matt Southworth (CISO, Priceline), Leslie Nielsen (CISO, Mimecast)
Date: January 6, 2026
Location: Mimecast Elevate Conference, New York City
Episode Overview
This special episode of the CISO Series Podcast, recorded live in New York, explores the risks and opportunities that automation and AI bring to security. The panel of David Spark, Matt Southworth, and Leslie Nielsen digs into real-life mistakes, how automation scales dysfunction, the challenges posed by AI-written code, balancing leadership with collaboration, handling "brilliant jerks," overlooked security controls, and evolving security awareness training for deepfakes. Blending peer advice, interactive games, and audience Q&A, the episode offers insight into both the human and technical sides of cybersecurity.
Key Discussion Points & Insights
1. Opening: Security Mistakes and Toxic Talent
- Biggest Security Mistake
- Leslie Nielsen shares her regret of tolerating a "brilliant jerk" on her team, which led to organizational toxicity.
- Quote:
"The biggest mistake I made was letting him hang around, be toxic to the other people...when I left the company, he became somebody else's problem."
— Leslie Nielsen [00:04]
2. Security & Enjoyment—The Sora AI Video App
- Can Security Pros Enjoy Sora?
- Immediate reactions are fear, given the risks of deepfakes.
- Matt Southworth:
"I can try to enjoy it. No, it terrifies me." [02:57]
- Leslie Nielsen: Advocates "leaning into it"—use new AI creatively for awareness, then educate about the risks.
- Quote:
"Deep fakes are there and they're only going to get worse or literally better, but worse."
— Leslie Nielsen [03:15]
3. Automation: Scaling Bad Processes Faster
- The Tool Amplifies the Process
- Citing Anton Chuvakin's observation that tools automate, and thus amplify, dysfunctional processes.
- Key Question: How to assess if your processes are ready for automation or AI?
- Matt Southworth: Tools don’t fix prioritization issues; broken processes indicate misaligned priorities.
- Leslie Nielsen: Understand current ("as is") vs. desired ("to be") processes; document, measure, and improve before automating.
- Quote:
"People who automated a bad process ended up with a bad automated process rather than an improvement."
— David Spark paraphrasing Anton Chuvakin [03:53]
- Exposing Hidden Brokenness
- Tools can reveal problems, e.g., "Splunk getting too much money due to excessive alerts" [06:28].
- Matt: Teaching others (and AI) your process is the ultimate way to understand it.
4. Examples: AI Improving or Highlighting Security Gaps
- Matt: Using LLMs to review every pull request for secrets and bad code, catching things humans overlook (see the sketch after this list).
- Leslie: AI enrichment accelerates alert triage, helping respond to incidents faster [08:09–08:26].
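A minimal sketch of the pull-request review Matt describes, assuming an OpenAI-compatible API; the model name, prompt, and diff range are illustrative placeholders, not Priceline's actual pipeline:

```python
# Hypothetical sketch: pipe a PR diff through an LLM to flag secrets and
# risky code before a human reviews it. All names here are placeholders.
import subprocess

from openai import OpenAI  # assumes the openai v1 Python client

REVIEW_PROMPT = (
    "You are a security reviewer. Examine this diff and list any "
    "hardcoded secrets, credentials, or obviously dangerous code. "
    "Reply with exactly 'PASS' if nothing is found."
)

def review_pull_request(base: str = "origin/main") -> str:
    # Collect the diff between the PR branch and its merge base.
    diff = subprocess.run(
        ["git", "diff", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": diff[:100_000]},  # crude size cap
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_pull_request())
    # A CI job could fail the build on anything other than "PASS".
```

In practice a deterministic secret scanner (e.g., gitleaks) would run alongside this, since LLM review is non-deterministic and can miss findings.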
5. Leadership: Navigating "Brilliant Jerks" and Collaboration
- Avoiding "Tech A-hole" Leadership
- Referencing Will Klosofsky and Andy Ellis: Start by listening to teams, not pushing pre-made solutions.
- Leslie: "It's a way, not the way"—collaborative leadership trumps command-and-control [10:19].
- Matt: Seek out collaborators, let others lead, and observe where people excel [10:49, 11:34].
- Both agree: role reversals in incident tabletop exercises and "no Matt/Leslie" scenarios are critical for resilience [12:09–13:40].
- Can "Brilliant Jerks" Be Saved?
- Leslie: Sometimes; requires collaborative culture and feedback, but not always possible.
- "80, 90% of the time, you should be able to bring them out of it." — Leslie Nielsen [13:57]
- Matt: Let junior staff speak first; avoid jargon and ensure communication lands with the audience [14:27–14:48].
6. Game Time! "What's Worse?" and "What Are These Security Pros Talking About?"
- What's Worse? No Data Governance vs. Rogue AI
- Matt: No data governance is worse—can’t assess damage without governance [17:47–18:13].
- Leslie: Disagrees—rogue AI is riskier due to rapid proliferation and lack of control [18:33].
- Audience sides with Leslie. [19:38]
- Security Pro Guessing Game (selected highlights):
- Common misconceptions: “You can be 100% secure.” [20:38]
- If you could get users to do one thing: “Patch their stuff, not click everything in email.” [21:12]
- Pet peeves: “Compliance does not equal security.” [21:54]
- AI in security: Streamline detection, proactive risk identification [22:35]
7. Boring Controls That Dramatically Improve Security
- Overlooked but Effective
- Leslie: Introducing friction, especially around provisioning and access [24:24].
- Matt: Aggressive retention policies shrink the attack surface, e.g., emails deleted after 30 days (see the retention sketch after this list) [25:04].
- Deleting old exceptions is invaluable: "Turn them off and see what breaks... Just don't do it at year-end." — Matt [26:54–27:05]
- Leslie describes orphaned systems cleanup via a "system eviction notice" [27:06].
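Matt's 30-day figure was about email, but the same "aggressively expire old data" control translates directly to storage. A minimal boto3 sketch, with the bucket name as a placeholder, that expires every object after 30 days:

```python
# Hedged illustration: retention as code. Expires all objects in a bucket
# after 30 days, mirroring the "emails deleted after 30 days" idea.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-logs-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix = every object
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```

Data that no longer exists can't be exfiltrated, so shortening retention directly shrinks the blast radius of a compromise.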
8. AI-Generated Code—A New Risk Frontier
- Fundamental Differences?
- Matt: What matters is the code’s use and context, not its source; AI often writes longer, not necessarily better, code [28:15–29:04].
- Leslie: Secure SDLC prevails—good architecture, threat modeling, pen testing are essential, regardless of code origin [29:04].
- Vibe Coding:
- For non-coders creating apps through natural language, security awareness is even more critical.
- Use tools to reverse-engineer the generated app, then rigorously pen test the output [32:03–32:37].
- Matt: Educate vibe coders by asking the "five whys" and focusing on guardrails (see the guardrail sketch below).
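One concrete guardrail for vibe-coded output, sketched under the assumption that Bandit (`pip install bandit`) is available; the target path and severity threshold are illustrative, not tools the panel named:

```python
# Illustrative guardrail: refuse generated Python that a static analyzer
# flags with high-severity findings. Assumes `pip install bandit`.
import json
import subprocess
import sys

def gate_generated_code(path: str) -> bool:
    """Return True if the code under `path` passes the static-analysis gate."""
    result = subprocess.run(
        ["bandit", "-r", path, "-f", "json"],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout)
    high = [
        issue for issue in report.get("results", [])
        if issue["issue_severity"] == "HIGH"
    ]
    for issue in high:
        print(f"{issue['filename']}:{issue['line_number']} {issue['issue_text']}")
    return not high

if __name__ == "__main__":
    sys.exit(0 if gate_generated_code("generated_app/") else 1)
```

Wired into CI, a gate like this pushes generated code through the same secure-SDLC checks Leslie describes before it ships.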
9. Audience Speed-Round Q&A Highlights
- #1 AI Fear
- Matt: Uncontrolled development and deployment; loss of RBAC and secrets management disciplines [34:41].
- Leslie: Leveraged for more convincing social engineering (deep fakes, phishing) [35:06].
- Best AI Governance Policy
- Matt: Promote transparency—Slack channel for tool users to discuss openly [36:00].
- Leslie: Simply having an AI governance committee is a big initial step [36:16].
- Employees Failing Phishing Tests—Fire Them?
- Leslie: Yes, after progressive discipline and training; chronic failure makes them a risk [36:55–37:15].
- Matt: Not a security team decision alone; must involve HR and consider business context [37:15].
- Special case: If a mechanic fails but doesn’t need email, re-engineer access rather than fire [38:21–38:52].
- Human Tasks OK for AI to Supplant
- Leslie: Routine SOC enrichment, enhanced triage, and code suggestions—with secure process oversight [39:21].
- Matt: Boring/error-prone tasks. Pet peeve: AI-written communication lacks human voice [39:43–40:24].
- Leslie's tip: “Delves” is a ChatGPT giveaway word [40:31].
- Security Awareness Training & Deepfakes
- Leslie: Bite-sized, timely, nudge-based content; keep it relevant and actionable [40:57].
- Matt: Tailor deepfake scenarios to high-risk groups (AP, helpdesk); use real CEO mimics to boost realism [41:18].
- Leslie: "Our CEO will never ask you for Amazon gift cards—never going to happen over WhatsApp." [41:43]
Notable Quotes & Memorable Moments
- On AI & Automation Risks:
"If you buy that fancy SOAR or SIEM, you'll automate your corporate dysfunction at machine speed." — David Spark, paraphrasing Anton Chuvakin [03:53]
- On Leading Technical Security Teams:
"It's a way, not the way... Take a collaborative approach to solve the problem altogether." — Leslie Nielsen [10:19]
- On Fixing Broken Processes:
"You can throw an LLM at it... but it's not going to change the organization's priorities." — Matt Southworth [05:03]
- On Deepfakes & Social Engineering:
"Deep fakes are there and they're only going to get worse or literally better, but worse." — Leslie Nielsen [03:15]
- On Overlooked Security Controls:
"Turn them off and see what breaks. Don't do it at the end of the year." — Matt Southworth [26:56]
Important Timestamps
- 00:04 – Security horror stories & brilliant jerks
- 02:49 – AI video app Sora: Enjoyment vs. existential dread
- 03:53–07:28 – Automating broken processes & process gravity
- 08:09 – Real-life AI process improvement examples
- 10:19–11:49 – Brilliant jerks, collaborative leadership, and team empowerment
- 13:40–14:48 – Dealing with brilliant jerks constructively
- 17:43–19:38 – “What’s Worse?” Game: No data governance vs. rogue AI
- 20:38–23:14 – Security pro misconceptions and pet peeves
- 24:24–27:29 – Effective “boring” controls (provisioning friction, retention)
- 28:15–34:02 – Securing AI-written code, vibe coders, and automated SDLC tools
- 34:12–41:56 – Audience speed-round: AI fears, policies, phishing discipline, deepfake awareness
Takeaways for Security Practitioners
- Automation accelerates dysfunction if fundamentals aren’t fixed
- Process understanding (as-is/to-be) is essential before AI-enablement
- Cultural change—collaboration and empowering teams—is as crucial as technical controls
- Aggressive elimination of legacy risk (old exceptions, orphaned systems) offers outsized benefit
- AI-generated code needs at least as much, if not more, scrutiny than human code
- Security awareness must evolve for deepfakes, targeting specific high-risk roles and using timely, relevant content
- Governance, transparency, and open dialogue are cornerstones for responsible AI adoption in security
For Further Info
- Contact:
- David Spark: david@cisoseries.com
- Mentioned Companies:
- Priceline (Matt Southworth), Mimecast (Leslie Nielsen)
- Job Openings:
- Priceline (booking.com, Amsterdam), Mimecast (see LinkedIn)
- Sponsor:
- Mimecast—Integrated AI-powered collaboration and email security
[End of Summary]
