Podcast Summary: Digital Disruption with Geoff Nielson
Episode: Why AI is Failing: Ex-Google Chief Cassie Kozyrkov Debunks "AI-first"
Host: Info-Tech Research Group
Guest: Cassie Kozyrkov (Former Chief Decision Scientist at Google)
Date: November 10, 2025
Overview
This episode dives deep into the so-called "generative AI value gap," exploring why most enterprises are failing to realize meaningful ROI from AI initiatives despite the technology’s transformative potential. Cassie Kozyrkov, a pioneering AI advisor and former Chief Decision Scientist at Google, unpacks the flawed mindset behind the "AI-first" mantra, exposes the complexity that AI introduces (especially in measurement and management), and offers a pragmatic framework for leaders eager to harness AI’s power while sidestepping hype-driven pitfalls. The discussion moves from the theory of innovation waste and the measurement conundrum to actionable insights for building AI-ready organizations, always foregrounding the human element in digital transformation.
Key Discussion Points & Insights
1. The Generative AI Value Gap
[01:05–12:04]
- 95% of Organizations Seeing No Measurable ROI: Surveys reveal most companies deploying generative AI fail to see measurable returns. This is attributed to unclear goals, “sprinkling AI” with no meaningful purpose, and insufficient organizational tolerance for innovation-driven “waste.”
- "Some part of what's going on is that companies are really getting no ROI and there are fantastically foolish ways to just try to keep up with the Joneses in AI... and you join the no ROI bucket."
— Cassie Kozyrkov [01:25]
- Innovation Demands Waste: Companies want innovation but give employees no bandwidth for experimentation or tolerance for ‘failure,’ leading to accidental innovation at best.
- Solow’s Paradox Revisited: Productivity gains from AI are often invisible in traditional metrics, echoing the computer era, when the technology was visible everywhere except in the productivity statistics.
- Shadow AI: Many employees use AI tools informally, leading to disconnects between perceived and measured productivity.
2. The Problem with Measuring AI’s Value
[06:40–12:04]
- From “Right Answers” to “Endless Right Answers”: Classic AI/ML measured improvement against single correct outcomes (e.g., weather forecasts). Generative AI, however, outputs from vast solution spaces where there are infinite “acceptable” answers (e.g., writing emails).
- "We've just seen endless right answers — a nightmare challenge for management."
— Cassie Kozyrkov [24:01]
- Measurement Nightmares: Quantifying value, productivity impact, acceptance, and risk becomes far murkier with generative models, especially if humans are taken out of the loop.
3. Leadership Failures and “Sprinkling AI”
[12:04–20:17]
- Abdication of Leadership: Leaders often abdicate responsibility, preferring to “sprinkle AI on everything” rather than clarifying business needs or KPIs.
- “When you hear leaders say, ‘We need more AI in the business, everybody should be using AI’, what's your reaction?”
— Geoff Nielson [12:54]
- Cassie’s Prescription: Start With “Why”: Leaders must ask why they want AI, not just deploy it for its own sake. Sometimes “AI” projects are motivated by politics (e.g., a board member’s mandate) rather than business need.
- AI Infrastructure Debt: Cassie highlights Cisco’s “AI infrastructure debt”: the growing risk companies incur when launching AI initiatives without foundational readiness (data, tools, processes, skills, governance).
- Only 13% of surveyed companies are able to do AI at scale, though 83% plan to deploy AI agents.
— [16:28]
4. Rethinking AI Infrastructure
[20:17–28:53]
- Human & Cultural Infrastructure: Organizations often narrowly define AI infrastructure as technological (cloud, GPUs, data), but Cassie widens the lens to include leadership readiness, workforce training, and user expectations.
- “One of the things that leaders...could spend more time thinking about is how whatever AI system...will actually be taken up by the people it touches. You really have to think about that.”
— Cassie Kozyrkov [21:29]
- Managing Organizational Change: Success depends on managing expectations, training employees, setting up comprehensive guardrails, and anticipating “long tail” edge cases that could result in high-impact failures.
5. “AI-First” and Its Multiple Meanings
[28:53–42:39]
- Different Models for “AI First”:
- Individual Worker AI: Promoting personal productivity through AI assistants (with human always in the loop). This delivers quick wins with low risk.
- Enterprise Automation: Removing human oversight (hands-off automation at scale). This requires entirely different (and more mature) risk management, measurement, and cultural adaptation.
- “When you see how easy that is, you think that kind of easiness...you want to now take this thing and scale it up somehow and let it go. It is not the same game.”
— Cassie Kozyrkov [36:49]
- Judgment Is Key: Encouraging employees to use AI for advice elevates the value of good judgment—knowing what to ask, how to ask, and how to evaluate answers.
6. Avoiding Expensive Mistakes and Finding the “Goldilocks Zone”
[42:39–48:35]
- Personal Productivity is Essential, Not Optional: Cassie likens not adopting AI personal productivity tools in 2026 to ignoring the internet in 2015.
- Strategic Leadership = Goldilocks Zone: The best approach is not top-down, all-encompassing AI “transformation” nor total inaction, but strategic leadership—identifying high-value opportunities, piloting them thoughtfully, and scaling what works.
- “Doing the work is probably faster than setting everything up for a ‘maybe I need it’.”
— Cassie Kozyrkov [47:15]
Actionable Frameworks & Success Factors
Cassie’s Abstract Recipe for Enterprise AI Success
[49:19–57:36]
- Start with Low-risk Internal Automation: Target back office, repetitive, translation-like tasks before client or public-facing processes.
- “Automating the salesperson is an insult to humankind...What do we want to automate? Repetitive, drudgery things.”
— Cassie Kozyrkov [49:48]
- Strategic Foresight: Freeing up staff hours with AI shouldn’t only be a cost-cutting measure—use it to retrain, upskill, and tap new business lines.
- Culture of “Here’s What it Would Take”: Foster a culture where teams articulate what’s needed to realize new opportunities, not just say “no.”
- Guardrails & Escalation Paths: Design systems to accept mistakes, with clear escalation, rollback, and retraining paths—especially in customer-facing or critical functions.
- Probabilistic Thinking: Leaders must train staff (and themselves) for a world where outputs are not deterministic. Trust, judgment, and a tolerance for manageable failure become vital.
Notable Quotes & Memorable Moments
- On Innovation & Waste:
“Innovation demands waste. If you are doing something that you’ve done before, you know exactly how it’s going to go... If you don’t have that tolerance for no ROI when you’re trying to innovate...just wait for everybody else to show how it’s done and follow them.”
— Cassie Kozyrkov [02:54]
- On Measurement Nightmares:
“When we think about metrics, it's about targeting a right answer and how wrong are we? This is a different paradigm... it's snuck into our workplaces without us even realizing how much of a different paradigm it is.”
— Cassie Kozyrkov [11:32]
- On AI Infrastructure Debt:
“That has a really, really high interest. That’s a really, really high interest rate credit card. So what I would say instead is...invest in capability. Set yourself up to hit the ground running and scale quickly.”
— Cassie Kozyrkov [15:38]
- On Organizational Architecture:
“If you’re not thinking of humans as now part of this infrastructure because you’re still used to wires and bits being what infrastructure is, then...at some point within the next decade or so, [you’ll realize] just how human and ambiguity-filled and unpredictable technology becomes.”
— Cassie Kozyrkov [27:21]
- On “AI-First” Misunderstandings:
“That version of AI first...as a leader, you have a strict mandate for everybody to, when it doesn’t involve confidential information, get a second opinion for crying out loud from a large language model... But every lesson there does not translate to automating with AI at scale.”
— Cassie Kozyrkov [34:59]
- On Complexity & Control:
“When it’s generative, it’s literally probabilistic. These complicated systems...are going to be put in first. Once you’ve got a bunch of these, plus a bunch of augmented employees, workers who are all augmented in their own way, that is a lot of complexity moving around.”
— Cassie Kozyrkov [73:10]
- On The Future of Work:
“My one controversial prediction that I feel pretty okay with is that we’re all going to have more to do, not less, just because of that sheer complexity and all the different ways that everything can fit together.”
— Cassie Kozyrkov [78:11]
Key Timestamps
- [01:05] – Cassie introduces the 95% “no-ROI” problem and value gap
- [11:32] – Measurement paradigm shift: from error minimization to evaluating endless “right” outputs
- [15:38] – AI infrastructure debt and strategic organizational preparation
- [21:29] – The importance of user and employee expectations (“cultural infrastructure”)
- [34:59] – Two models for “AI first” and the judgment premium
- [49:48] – Practical recipe: where and how to start pragmatic AI deployments
- [57:36] – The necessity of probabilistic thinking and cultural change
- [73:10] – The rising complexity of organizations and distributed “gardening”
- [78:11] – Predicting the future of work: more complexity, more work
Takeaways for Leaders
- Don’t chase AI for AI’s sake: clarify your goals and start with “why.”
- Recognize and plan for complexity—AI success requires major shifts in measurement, governance, and culture.
- Invest in infrastructure and talent, not just tech—your “AI readiness” includes human training, expectation management, and cultural adaptation.
- Embrace a mindset of experimentation and innovation “waste”—tolerance for “failure” is the seed of transformation.
- Value (and train) judgment in your workforce—AI advice is cheap, good judgment is rare.
- Automate where human error is acceptable first; scale with caution into public-facing or critical business domains.
Episode in a Sentence
In a landscape where 95% of organizations aren’t seeing measurable ROI from AI, Cassie Kozyrkov urges leaders to ditch “AI-first” hype, get real about the required organizational, cultural, and governance transformation, and double down on empowered, judgment-driven adoption that starts with “why.”
