Podcast Summary:
GRC & Me
Episode: CISO to CISO—Let's Get Real About AI
Date: November 13, 2025
Host: Jane Totaro (LogicGate)
Guests: Jake Bernardis (Anecdotes, CISO), Nick Kathman (LogicGate, CISO)
Episode Overview
This episode dives deep into the complex world of AI through a CISO-to-CISO lens, exploring the current landscape, challenges, and future of AI governance, risk, and compliance (GRC). Host Jane Totaro facilitates a refreshingly candid conversation between two CISOs—Jake Bernardis and Nick Kathman—who discuss regulatory turbulence, agentic and shadow AI, the critical role of AI literacy, and practical approaches for risk management in an era of rapid innovation and uncertainty.
Key Discussion Points & Insights
1. CISO Personalities: Breaking the Ice
- Timestamps: [01:15]–[02:40]
- Jake revealed the claustrophobia he discovered in Jerusalem's Hezekiah's Tunnel: "That's one of the rare things that's not on my LinkedIn."
- Nick shared his lifelong tech obsession, with a basement full of Raspberry Pi clusters and other microcontrollers, always finding a way to blend hobbies with cybersecurity.
2. GRC Mythbusters: "All AI Is the Same"
- Timestamps: [02:57]–[06:42]
- Nick debunks the myth, detailing the huge variety across AI models, behaviors, and capabilities.
  "If you look just across all of the different models, whether they be proprietary or open source, the differences in them is so extreme." ([03:16])
- He provides hands-on examples: models like Gemini reject certain prompts while ChatGPT provides recommendations; others excel at code, chat, or data parsing.
- Jake adds depth, warning that the chasm between AI models, especially around ethics and regulation, will only grow. He points to OpenAI's move from non-profit to for-profit and the varying degrees of what models are willing to reveal.
  "It's quite easy to socially engineer a model and start to push it in direction it's not supposed to go into." ([06:10])
3. AI Regulation & Strategy—EU AI Act and U.S. Differences
- Timestamps: [06:42]–[14:55]
- Jake compares the EU AI Act to GDPR, calling out how the regulations are "super vague" and challenging to operationalize or monetize:
  "They're really hard to interpret to logical controls to actually apply and maintain the act." ([07:50])
- He predicts the US will never match the EU's regulatory rigor due to different privacy values and fierce lobbying; compliance remains risk-based and "subjective."
- Nick warns of the risk of fragmentation, likening emerging AI regulations to the patchwork of US state privacy laws:
  "I'm really, really hoping…we're not chasing down 50 different AI regulations for different states." ([10:09])
- Both CISOs highlight that regulation can't keep pace with AI's speed of change, making meaningful, enforceable standards elusive.
4. AI Literacy for GRC & Empowering Teams
- Timestamps: [15:24]–[18:25]
- Jake: CISOs must proactively "own AI" and be "risk aggressive"; if security lags, users will circumvent controls:
  "You have to be doing this stuff. Like you have no right to talk about AI regulation in your company if you're not building your own suites." ([16:38])
- Nick: Describes real-world examples of employees sidestepping policies, e.g., banking executives using ChatGPT for sensitive communications via creative workarounds, even before regulators provide clarity.
5. Agentic AI: Definitions, Opportunities, and Risks
- Timestamps: [18:45]–[25:37]
- Nick uses self-driving cars as an analogy to explain levels of agency; agentic AI can fully automate a process or keep a "human in the loop" depending on context:
  "There's going to be some level inside of there and some risk tolerance within there of the different processes you have within GRC." ([19:10])
- Experimental insight: agentic AIs can behave differently even when given identical prompts, making their behavior operationally unpredictable.
- Jake describes how agentic AI and MCP (Model Context Protocol) can make risk and compliance dynamic, automating auditing, risk assessment, and compliance monitoring, and moving GRC away from spreadsheets and "security theater" toward living, data-driven workflows.
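Nick's point about identical prompts yielding different behavior comes down to sampling: with any temperature above zero, an agent's next action is drawn from a distribution rather than chosen deterministically. A minimal toy sketch (the action names and scores below are invented for illustration, not from any real model):

```python
import math
import random

# Toy illustration of "identical prompt, different behavior": with a
# sampling temperature above zero, the same prompt can produce a
# different action on every run. Self-contained sketch; the actions and
# their scores are hypothetical, not any specific agentic system.

def sample_action(prompt: str, temperature: float, rng: random.Random) -> str:
    # Hypothetical fixed scores for three possible next actions.
    actions = {"approve": 2.0, "escalate": 1.5, "request_more_data": 1.0}
    if temperature == 0:
        # Greedy decoding: always pick the top-scoring action.
        return max(actions, key=actions.get)
    # Softmax-style sampling: higher temperature flattens the distribution.
    weights = [math.exp(score / temperature) for score in actions.values()]
    return rng.choices(list(actions), weights=weights, k=1)[0]

rng = random.Random(7)
greedy = {sample_action("review vendor risk", 0, rng) for _ in range(5)}
sampled = {sample_action("review vendor risk", 0.8, rng) for _ in range(50)}
print(greedy)   # a single action every time
print(sampled)  # usually several distinct actions from the identical prompt
```

This is why the episode stresses matching agent autonomy to risk tolerance: a process that cannot tolerate run-to-run variation needs greedy or human-in-the-loop handling.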
6. Preparing for Agentic AI
- Timestamps: [24:27]–[26:56]
- Jake: Success starts with understanding and cleaning data—agentic AI is only as effective as the data behind it.
- Nick: Data context and granularity are king; without clean, relevant data, output is inconsistent or erroneous.
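The "clean data first" point can be made concrete with a trivial pre-flight gate: validate records for completeness and consistency before handing them to an agentic workflow. A hypothetical sketch, where the field names and allowed values are illustrative assumptions, not part of any product discussed in the episode:

```python
# Minimal pre-flight data-quality gate before feeding records to an
# agentic workflow. Field names and allowed values are illustrative
# assumptions only.

REQUIRED_FIELDS = ("asset_id", "owner", "criticality", "last_reviewed")

def quality_issues(record: dict) -> list[str]:
    # Flag empty or missing required fields, plus inconsistent values.
    issues = [f"missing: {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    if record.get("criticality") not in (None, "low", "medium", "high"):
        issues.append(f"bad criticality: {record['criticality']!r}")
    return issues

records = [
    {"asset_id": "A-1", "owner": "it-ops", "criticality": "high",
     "last_reviewed": "2025-10-01"},
    {"asset_id": "A-2", "owner": "", "criticality": "urgent",
     "last_reviewed": None},
]
clean = [r for r in records if not quality_issues(r)]
dirty = [(r["asset_id"], quality_issues(r)) for r in records if quality_issues(r)]
print(len(clean), dirty)
```

Only records that pass the gate reach the agent; everything else is routed back to a human owner, which is the "data context and granularity" discipline Nick describes.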
7. Shadow AI: The New Tsunami
- Timestamps: [26:56]–[31:02]
- Nick: The proliferation of unsanctioned AI in the enterprise, both through users and through SaaS vendors "turning on" AI without warning, creates huge blind spots.
  "You're constantly chasing your tail trying to figure out what new…vendors have turned on AI…You're always behind the curve." ([28:07])
- Detection and control are nearly impossible in some cases, especially when data is processed server-side by vendors.
- Jake: Shadow AI is a tsunami compared to the "surfable wave" of shadow IT; it is too late to stop, so the focus must shift to building resilient post-tsunami structures.
8. The CISO Perspective: What Keeps Them Up At Night?
- Timestamps: [31:29]–[36:28]
- Nick: The lack of AI security expertise and rapidly evolving attack surfaces, like steganographic prompt injection in multimodal AI, make this "a very, very hard" problem.
  "There's just not a lot of experts…and very, very hard to build meaningful security around AI for a good while." ([34:41])
- Jake: Optimistic about AI's potential, but worries about the long-term erosion of traditional technical career paths for future generations of security leaders.
9. Proactive AI for Dynamic Risk Detection & Mitigation
- Timestamps: [37:38]–[41:18]
- Jake: AI can turn risk management from static, checklist-based processes into dynamic, real-time, evidence-driven systems.
  "Let's make risk a living, breathing thing…AI allows those workflows and it becomes real and tangible." ([37:53])
- Nick: Envisions risk management evolving into a data science discipline, aligning actual controls and incidents with true risk tolerance rather than industry guesses.
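The "living, breathing" risk idea can be sketched as a score recomputed from current evidence rather than assigned once on a checklist. The inputs, weights, and thresholds below are invented for illustration; this is not a LogicGate or Anecdotes API:

```python
from dataclasses import dataclass

# Hypothetical sketch of a dynamic, evidence-driven risk score: instead
# of a static checklist rating, the score is recomputed from current
# control results and recent incidents. All weights are assumptions.

@dataclass
class Evidence:
    controls_tested: int
    controls_passed: int
    incidents_last_30d: int

def dynamic_risk_score(e: Evidence) -> float:
    """Score in [0, 10]; higher means more risk right now."""
    failure_rate = 1 - (e.controls_passed / e.controls_tested)
    incident_factor = min(e.incidents_last_30d / 5, 1.0)  # cap at 5 incidents
    return round(10 * (0.6 * failure_rate + 0.4 * incident_factor), 1)

print(dynamic_risk_score(Evidence(40, 38, 0)))  # healthy posture: low score
print(dynamic_risk_score(Evidence(40, 30, 4)))  # degrading posture: score rises
```

Re-running the calculation as new evidence lands is what turns a quarterly spreadsheet rating into the real-time, data-driven view both CISOs describe.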
10. Final Strategies for Success—The Human Element
- Timestamps: [41:18]–[43:54]
- Nick: Organizations must evolve their team mindset, classifying staff as AI deniers, acceptors, or builders. Builders will be the most valuable, but everyone must move beyond denial.
  "Everybody should be at least an AI acceptor." ([43:28])
- Jake: Calls for "risk maturity and risk aggression": understanding that AI is inevitable and must be embraced, not avoided.
Notable Quotes & Memorable Moments
- Nick: “AI is changing like on a monthly basis. You can't rewrite your policy and rewrite your standards every month and re-educate your users every month and expect them to constantly be ahead of the game on this. It’s just impossible.” ([11:06])
- Jake: “We all thought shadow IT was the worst problem in the world. I think shadow IT was a surfable wave compared to the tsunami that is shadow AI.” ([30:31])
- Nick: “I would say the AI builders are going to be the most valuable in companies going forward, but everybody should be at least an AI acceptor.” ([43:28])
- Jake: “We have to find a way to adopt, embrace, and enable AI. Risk maturity and risk aggression are key.” ([43:44])
Timestamps for Important Segments
- [03:16] Mythbusters: All AI is the same?
- [06:42] EU AI Act—CISO challenges, US vs. EU, regulation vagueness
- [10:09] Implications of AI at Scale / Regulatory patchwork
- [15:49] AI Literacy: Training teams, shadow risk
- [18:45] Agentic AI: Definition, levels, unpredictability
- [25:37] The importance of data quality in AI
- [27:17] Shadow AI: Why it's a top CISO blind spot
- [31:29] Mind of the CISO: What keeps you up at night?
- [37:53] Using AI for dynamic risk detection and mitigation
- [41:49] Strategies for success—the most critical human element for AI success
Flow & Tone
The conversation is expert, frank, and practical, blending deep technical insight with real-world governance and risk examples. Both guests offer actionable advice and cautionary optimism, consistently urging CISOs to become hands-on AI practitioners, foster risk-aggressive postures, and recognize that the wave of AI—and its attendant risks and opportunities—is unstoppable.
Useful for:
- CISOs and risk leaders navigating AI in GRC
- Security practitioners seeking pragmatic AI governance insights
- Organizations prepping for agentic and shadow AI realities
- Anyone seeking expert perspectives on the future of AI in enterprise risk
