CyberWire Daily – "AI and Cyber Practicum" [CISOP]
Date: December 9, 2025
Host: Kim Jones, CyberWire/N2K Networks
Guest Expert: Tony Goda, VP of Cybersecurity Architecture at Intuit
Episode Overview
In this unlocked episode of "CISO Perspectives," host Kim Jones and guest Tony Goda explore the profound impacts of rapid AI innovation on enterprise cybersecurity. They discuss both the benefits and operational risks of integrating AI, challenge preconceptions about automation vs. accountability, and debate what it takes—culturally and organizationally—to operationalize transformative AI in a security context. The episode delivers practical insights for CISOs facing pressure to "do AI" while managing risk, liability, and organizational culture.
Key Discussion Points & Insights
1. AI's Promised Benefits and Underlying Risks
(01:30 – 06:39)
- AI's Benefits (as seen by optimists):
- Productivity: Automation of mundane tasks, freeing employees for creative work.
- Customer Experience: Personalization at scale via generative AI.
- Analytics & Insights: Faster, deeper pattern recognition and anomaly detection.
“AI can automate mundane tasks, allowing employees to focus more time and energy on more complex and creative tasks, thus improving the overall productivity of an organization.” — Kim Jones (01:44)
- Risks & Challenges (protector's perspective):
- Data integrity and normalization are persistent obstacles.
- Data governance lapses could lead to data poisoning or leaks.
- Difficulty recognizing bad or “BS” AI-generated results; AI can confidently generate wrong answers.
- Accountability quandaries: Who is responsible for AI errors?
- Infrastructure and budgeting strains.
- Lack of realistic cost/benefit analysis as organizations rush to be seen as “AI enabled.”
"Whereas my friend is an innovator, my nature is that of a protector... I focused on the challenges with operationalizing AI within any environment." — Kim Jones (03:05)
- Warning Against Hype-Driven Adoption:
Many enterprises are "leaping into implementing something, anything actually, just to say that they are AI enabled," mirroring the past Agile adoption rush.
"I'm almost reminded of the 'I need Agile' phenomenon a decade ago." — Kim Jones (06:07)
2. Introduction to Tony Goda and Perspectives on Innovation
(07:01 – 14:02)
- Tony's Background:
- Engineer-turned-innovator with deep roots in cybersecurity (fraud detection, cryptography, insider threat, startups, Intuit).
- Experienced at bridging business needs with technical innovation.
- Emphasizes people management and the challenge of motivating teams for transformative outcomes.
- Innovation vs. Invention:
- Invention is novelty, but innovation must be useful and operationalizable without harming culture or protection.
"The difference between innovation and invention is both are new, but innovation is actually useful." — Kim Jones, quoting Frank Kim (12:46)
3. AI is a Paradigm Shift, Not a Mere Incremental Upgrade
(14:02 – 15:49)
- Traditional Model vs. AI-Centric Model:
- Existing systems = humans at center, tech as helper.
- Future vision: AI at center for repeatable, automatable tasks; humans oversee, fact-check, govern.
- Challenge: Rethink problems and solutions entirely, not just “bolt on” AI for incremental gains.
"What I think we should be doing is fundamentally rethinking the approach... put humans into a position in which they are in some cases fact checking what the AI is doing." — Tony Goda (14:30)
4. Accountability vs. Automation – The CISO's Dilemma
(15:49 – 21:16)
- CISO's Concerns:
- Accountability doesn’t shift; CISOs remain liable (reference to SolarWinds case).
- The conundrum: Handing over control to agentic AI raises stakes for liability if errors cause harm.
"You're still gonna attempt to throw my large butt in jail... How do I balance that as a CISO?" — Kim Jones (17:06)
- Tony's Rebuttal:
- AI is not a magical infallible force; it needs checks, especially for high-stakes decisions.
- Lessons from other fields (e.g., self-driving cars/Waymo): Only deploy full trust when technology is mature; introduce staged, controlled use as safety grows.
5. How to Operationalize AI Safely in Security Operations
(21:21 – 27:08)
- Tiered, Automated SOC:
- Automate “Tier 1” (routine) security operations—not by dictating details but empowering teams to define and execute.
- Give room to experiment, even to fail; psychological safety is key for learning and progress.
- Human "double checks" remain in play as guardrails.
"What you want is for them to have the psychological safety to actually go out to find the tools... and to just double check what [AI's] doing." — Tony Goda (21:16)
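The tiered-SOC idea above can be sketched in a few lines: only routine ("Tier 1") alert types are candidates for automation at all, and any low-confidence AI verdict falls back to a human double check. The alert types, verdict labels, and confidence threshold below are hypothetical illustrations, not anything named in the episode.

```python
from dataclasses import dataclass

# Hypothetical Tier 1 (routine, automatable) alert categories.
ROUTINE_TYPES = {"failed_login", "port_scan", "spam_phish"}

@dataclass
class Alert:
    alert_type: str
    ai_verdict: str       # e.g. "benign" or "malicious" (illustrative labels)
    ai_confidence: float  # model confidence, 0.0 to 1.0

def triage(alert: Alert, confidence_floor: float = 0.9) -> str:
    """Return 'auto_close', 'auto_contain', or 'escalate_to_human'."""
    # Anything outside the routine Tier 1 set stays with a human analyst.
    if alert.alert_type not in ROUTINE_TYPES:
        return "escalate_to_human"
    # Low-confidence AI verdicts get the human "double check" guardrail.
    if alert.ai_confidence < confidence_floor:
        return "escalate_to_human"
    return "auto_close" if alert.ai_verdict == "benign" else "auto_contain"
```

The point of the sketch is the shape, not the thresholds: the team defines which categories are routine and where the confidence floor sits, which is exactly the tactical detail Goda says leadership should delegate.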
- Cultural Impediments:
- Most orgs don’t genuinely support experimentation (fear of failure/career impact).
- Business leadership expects near-perfection from cybersecurity, an unrealistic standard compared to expectations in physical security.
"If the expectation for perfection in security in the IT space existed in the physical space, we would expect a murder and kidnapping and theft rate to go down to absolutely zero... which is unrealistic." — Kim Jones (26:24)
6. Who (or What) Is Accountable? Risk Management in AI-Driven Defense
(27:08 – 34:06)
- Humans Are Fallible, Too:
- Three SOC analysts might classify the same alert differently.
- AI isn't perfect, but neither are people; both need oversight.
"Humans themselves are some of the most indeterminate, non deterministic things that exist on the planet." — Tony Goda (27:27)
- Checks & Balances, Not Blind Trust:
- Accountability = “checks and balances” via layered oversight—either human or AI.
- Example: Limits on what agentic AI can do, agentic “peer review” before high-stakes action, human in-the-loop for exceptions.
- No unrestricted destructive actions for AI (e.g., code or database deletion).
"You should not give it the ability to delete databases or to... commit code to your repository that is unreviewed." — Tony Goda (33:09)
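The layered guardrails described above can be sketched as a simple authorization policy: a hard deny-list for destructive actions, unattended execution only for routine reversible work, agentic "peer review" before high-stakes actions, and a human in the loop for anything outside the playbook. The action names and tiers here are invented for illustration, not from any real product or from Intuit's environment.

```python
# Hypothetical guardrail policy for an agentic AI in security operations.
DENIED = {"delete_database", "push_unreviewed_code"}   # never allowed, per the episode
LOW_RISK = {"enrich_alert", "open_ticket"}             # routine, reversible: agent acts alone
HIGH_RISK = {"isolate_host", "revoke_credentials"}     # needs a second check first

def authorize(action: str, peer_agent_approved: bool = False,
              human_approved: bool = False) -> bool:
    """Layered checks and balances instead of blind trust in one agent."""
    if action in DENIED:
        return False                   # destructive actions are off-limits entirely
    if action in LOW_RISK:
        return True                    # routine work runs unattended
    if action in HIGH_RISK:
        return peer_agent_approved     # agentic "peer review" before high-stakes action
    return human_approved              # exceptions default to a human in the loop
```

Note the default-deny posture: an unrecognized action is treated as an exception and requires explicit human approval rather than silently succeeding.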
7. Moonshots, Not Marginal Gains: The Mindset and Organizational Challenge
(34:13 – 39:44)
- Setting Transformative Goals:
- Don’t get stuck on incremental improvement targets.
- Frame the AI journey as a “moonshot”—set ambitious, outcome-focused objectives and delegate tactical details to empowered teams.
- Provide psychological safety and resources for experimentation—even at the risk of (some) failure.
"Set the goal, don't necessarily set the tactics... empower them to make the types of decisions that need to be made." — Tony Goda (36:40)
- Empowering the Team:
- Trust expertise at the front lines; they know where improvements are possible.
- Expect transformative, not just incremental, change.
"Trust your team. Empower them. It's literally all about... not 10% improvements, but 10x improvements, 100x improvements." — Tony Goda (38:57)
Notable Quotes & Memorable Moments
- On AI hype and risk:
“Most organizations are whitewashing risks and costs and leaping into implementing something, anything actually, just to say that they are AI enabled." — Kim Jones (05:58)
- On the folly of AI infallibility:
AI is not a magical, infallible force; it needs checks, especially for high-stakes decisions. — Tony Goda (21:16–22:38, paraphrased)
- On accountability refusing to disappear:
"So what we're talking about is a future that is inevitable... I don't think the accountability shifts at all." — Tony Goda (17:57)
"Right now CISOs are being held liable... And if it goes sideways, you're still gonna... attempt to throw my large butt in jail." — Kim Jones (16:06)
- On fast AI, faster adversaries, and the need for speed:
“The adversaries are using AI to traverse through our systems to find vulnerabilities at faster than human speeds.” — Tony Goda (29:57)
- On cultural change:
"A culture that allows for experimentation is a culture that allows for the possibility of failure. Because if you experiment, not everything is going to work." — Kim Jones (24:46)
Important Timestamps
- AI Benefits vs. Risks Breakdown (01:30 – 06:39)
- Tony Goda’s Background and Intro to Innovation (07:01 – 14:02)
- Strategic AI Adoption: Paradigm Shift Required (14:02 – 15:49)
- Operationalizing AI – The Accountability Conundrum (15:49 – 21:16)
- Cultural Shifts and Experimentation (24:27 – 27:08)
- Checks/Balances & Guardrails for Agentic AI (27:08 – 34:06)
- Moonshots and Empowerment (34:13 – 38:57)
- The One Thing CISOs Should Do Now (38:57)
Episode Takeaways
- AI is both a massive opportunity and a massive challenge—approach it with eyes wide open.
- Operationalization is not about bolting on AI to current processes, but about reimagining workflows with AI at the center.
- Checks, balances, and layered guardrails—not blind trust—are essential for safe, scalable agentic AI.
- Transformative outcomes require moonshot thinking and a deep cultural shift toward experimentation and psychological safety.
- CISOs must empower expert teams to define and deliver ambitious AI-enabled solutions, balancing efficiency with accountability.
Final Action for CISOs (per Tony Goda at 38:57):
“Trust your team. Empower them... set the expectation that [they] blow my mind. Like, tell me where we can get... 10x improvements, 100x improvements, and you’d be surprised at the answers that you get.”
For further analysis, visit the CISO Perspectives blog (link in show notes).
