Podcast Summary: CISO Series Podcast
Episode: Remember, Every Underappreciated Risk Is Just a Crisis Waiting to Be Discovered
Date: April 7, 2026
Host(s): David Spark, Andy Ellis
Guest: Kilik Kutler (SVP CISO & IT, Expedia Group)
Overview
This episode dives deep into the evolving challenges CISOs face in a fast-changing cybersecurity landscape, focusing on risk management, vendor selection, AI’s impact on security ethics and governance, and the perennial need for alignment between cybersecurity and business priorities. The discussion is packed with practical advice, critical debate, and introspective commentary from seasoned security leaders.
Key Discussion Points & Insights
1. The Nature and Value of Quantitative Risk Management (QRM)
- Prompt: Dr. Sam Lyles criticized QRM as a “self-serving con,” arguing it makes boards feel safe without improving security. Are quant models valuable, or just “pushing paper”?
- Andy Ellis’ take:
- QRM has value for comparative analysis but is often based on data that “is made up.”
- The real danger: assuming numbers will make decisions for you, rather than facilitating necessary subjective judgment.
- “Anybody who wants to grab a copy of the free ebook on howtocso.com about risk, I talk about QRM there.” (06:14)
- Kilik Kutler’s take:
- QRM isn’t about replacing technical depth, but aligning technical understanding with enterprise priorities.
- Its true strength: creating a shared language about risk tolerance between security and business leaders.
- “Without that clarity, security defaults might be at maximum caution, and that can create friction and reality will hurt the business.” (08:43)
Notable quote:
“Security has always struggled because it speaks in vulnerabilities, not in impact.”
— Kilik Kutler [08:02]
Takeaway:
QRM, when applied thoughtfully, facilitates essential risk appetite conversations, but is no substitute for sound engineering or executive alignment.
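Andy's point that QRM is useful for comparative analysis even when the absolute numbers are "made up" can be illustrated with a toy annualized-loss-expectancy comparison. This is a minimal sketch: the scenarios, frequencies, and dollar figures below are invented for illustration and are not from the episode.

```python
# Toy quantitative-risk comparison. Even if the absolute dollar figures
# are rough guesses, ranking scenarios by annualized loss expectancy
# (ALE = annual event frequency x expected loss per event) can still
# support comparative prioritization. All numbers here are invented.

scenarios = {
    "phishing-led account takeover": {"freq_per_year": 4.0, "loss_per_event": 50_000},
    "ransomware on core systems":    {"freq_per_year": 0.2, "loss_per_event": 2_000_000},
    "third-party data exposure":     {"freq_per_year": 0.5, "loss_per_event": 300_000},
}

def ale(s):
    """Annualized loss expectancy for one scenario."""
    return s["freq_per_year"] * s["loss_per_event"]

# Rank risks by ALE, highest first. The *ranking* is often more robust
# to imperfect inputs than any single absolute estimate.
ranked = sorted(scenarios.items(), key=lambda kv: ale(kv[1]), reverse=True)
for name, s in ranked:
    print(f"{name}: ALE = ${ale(s):,.0f}")
```

The point of the sketch matches both takes: the numbers don't make the decision, but they give security and business leaders a shared basis for the risk-appetite conversation.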
2. How CISOs Can Spot Vendors with True “Unfair Advantages”
- Prompt: With sameness rampant among security vendors, how do startups truly differentiate, and how do CISOs discern real value?
- Kilik Kutler’s Five-Point Test:
- Deep Operational Understanding: Vendors demonstrate experience with real-world problems, not just slick pitches.
- Tailored Value: They adapt to enterprise context, not one-size-fits-all.
- Friction Reduction: True partners integrate seamlessly and reduce operational burden.
- Unique Data/Network Effect: Sustainable solutions leverage proprietary data or customer feedback loops.
- Business Savvy: Understanding budget cycles, change management, and partnership over mere sales.
- Quote:
“If your solution sounds identical in every room, it's not customer centric, it's product centric.”
— Kilik Kutler [11:54]
- Andy Ellis:
- Reframes “unfair” as “unjust” advantage—solutions arise from deep understanding of why naive answers fail.
- “If you understand that and you can build the solution that nobody else is going to think to build because you truly understand the problem… that gives you an unfair advantage.” (14:44)
3. What (and Who) Makes for the Worst CEO Mindset for CISOs? (Game: What’s Worse?)
- Scenario:
- CEO wants zero risk.
- CEO believes cyber insurance replaces security.
- Andy Ellis:
- CEO who thinks insurance replaces security is worse: eventually “you are just going to get breached. Good luck getting your insurance paid out the second time.” (20:15)
- Kilik Kutler:
- Chooses the CEO who wants zero risk as worse.
- “The first one means a CEO that thinks that we want no risk in security, probably will want no risk anywhere else. And risks, on a strategic level… you have them everywhere. So probably they will not be as successful as maybe they could.” (23:13)
Quote:
“No risks whatsoever across the board… it's how you make money at the end of the day.”
— Kilik Kutler [23:32]
4. Embedding Ethics in AI: Governance, Not Just Oversight
- Prompt: Can we scale human values into AI? Karen Pfeiffer advocates “Humanity in the Loop”: embedding ethics at every stage, not just oversight.
- Kilik Kutler:
- AI mirrors incentives, data, and guardrails set by humans—real risk is poor governance, not AI autonomously going astray.
- Outlines three practical steps:
- Governance: Risk appetite and executive accountability before deployment.
- Monitoring: Telemetry to track bias, explainability, and drift.
- Escalation: Breakpoints for human intervention on high-stakes decisions.
- “You don’t scale ethics by adding more humans. You scale ethics by encoding clear principles into system design...” (27:32)
- Andy Ellis:
- Skeptical about the feasibility of pre-coding “ethical red lines”—disagreement is almost guaranteed.
- Warns that fully "debiasing" a system is an illusion: some bias will always exist, so the goal must be clarity about the right outcomes.
- Shares a "Moral Machine" anecdote showing how misplaced assumptions can invalidate “ethical” tests.
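Kilik's three steps (governance, monitoring, escalation) suggest a pattern like the escalation breakpoint sketched below, where high-stakes or low-confidence AI decisions are routed to a human. The thresholds, impact tiers, and decision structure are hypothetical illustrations, not anything described in the episode.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float   # model's self-reported confidence, 0..1
    impact: str         # "low", "medium", "high" -- governance-defined tier

# Governance step: risk appetite encoded up front as explicit thresholds
# (hypothetical values for illustration).
MIN_CONFIDENCE = 0.90
AUTO_APPROVE_IMPACT = {"low", "medium"}

def route(decision: Decision) -> str:
    """Escalation breakpoint: auto-approve only within the declared
    risk appetite; everything else goes to a human reviewer."""
    if decision.impact in AUTO_APPROVE_IMPACT and decision.confidence >= MIN_CONFIDENCE:
        return "auto-approved"
    return "escalate-to-human"

# The monitoring step would log every routing outcome for bias/drift review.
print(route(Decision("block suspicious login", 0.97, "low")))     # auto-approved
print(route(Decision("suspend customer account", 0.97, "high")))  # escalate-to-human
```

This is the "encoding clear principles into system design" idea in miniature: the ethics live in the explicit thresholds and tiers, not in adding more humans to every decision.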
5. Do We Need Human Alignment Before AI Alignment?
- Prompt: Joshua Copeland argues that since organizations can’t align internally, training AI only formalizes office politics. Should alignment come before AI deployment?
- Andy Ellis:
- Suggests embracing “Byzantine AIs” that represent different perspectives—speed, safety, compliance—and simulate internal debate before final decisions.
- Kilik Kutler:
- Misalignment is natural due to healthy differences in incentives (velocity vs. control, etc.).
- The key is not perfect harmony but explicit prioritization of enterprise objectives, so AI doesn't just optimize for the "loudest signal."
- “AI doesn’t create politics. AI just makes incentives visible at scale.” (37:25)
- Extra Insight:
- Andy notes external threats can exploit organizational misalignments with AI-powered attacks (e.g., complaint spamming).
- Kilik: “Security and product teams must be aligned… If security is still acting as a late-stage gate... that's not a product problem, that's a strategy problem.” (38:07)
- AI will amplify, not fix, existing incentive structures.
Notable quote:
“If security operates as a design partner with embedded guardrails, AI will scale that discipline. So… we do need evolved governance. Security must adapt to the speed of the business.”
— Kilik Kutler [39:13]
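Andy's "Byzantine AIs" idea and Kilik's call for explicit prioritization can be combined in a toy sketch: several perspective "agents" score an option, and an explicitly weighted aggregate, rather than the loudest single signal, makes the call. The agents, weights, and scores below are invented for illustration.

```python
# Each "agent" scores a proposed change from one perspective (0..1).
# All values here are invented for illustration.
perspectives = {
    "velocity":   0.9,  # product: ship fast
    "safety":     0.4,  # security: risky change
    "compliance": 0.7,  # legal: mostly fine
}

# Governance makes the trade-off explicit instead of leaving it to
# implicit office politics: enterprise-level weights, declared up front.
weights = {"velocity": 0.3, "safety": 0.5, "compliance": 0.2}

score = sum(perspectives[p] * weights[p] for p in perspectives)
decision = "proceed" if score >= 0.6 else "revise"
print(f"weighted score = {score:.2f} -> {decision}")
```

The sketch reflects the episode's point that AI makes incentives visible at scale: whatever weights an organization actually operates by become explicit, inspectable inputs rather than unstated politics.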
Memorable Moments & Quotes by Timestamp
- 00:03 – Kilik: “Cybersecurity doesn't reward comfort. It rewards curiosity.”
- 06:35 – Andy: “All the data is made up… the average cost of a breach, but that's not your breach.”
- 08:02 – Kilik: “Security has always struggled because it speaks in vulnerabilities, not in impact.”
- 11:54 – Kilik: “If your solution sounds identical in every room, it's not customer centric, it's product centric.”
- 20:15 – Andy: “If you replace cybersecurity entirely, you are just going to get breached.”
- 23:32 – Kilik: "No risks whatsoever across the board… it's how you make money at the end of the day."
- 27:32 – Kilik: “You don't scale ethics by adding more humans. You scale ethics by encoding clear principles into system design and continuously validating outcomes against them.”
- 33:45 – Andy: “Product teams optimize for velocity while security optimizes for control.”
- 37:25 – Kilik: “AI doesn’t create politics. AI just makes incentives visible at scale.”
Closing Notes
Kilik Kutler’s debut on the podcast is marked by thoughtful, well-prepared insights—earning high praise from both hosts. He stresses embedding security upstream, treating vendors as partners, and the necessity of evolving governance to keep up with AI and business realities. Listeners are encouraged to revisit his vendor evaluation framework and to reflect on the importance of strategic alignment.
Hiring Note:
Expedia Group is recruiting for security roles. Interested listeners are encouraged to consult the Expedia Group job board or reach out to Kilik via LinkedIn.
Catch the full episode for more lively debate, expert advice, and real-world cybersecurity perspectives.
