Podcast Summary: Building Trust in Clinical AI
Podcast: Becker’s Healthcare Podcast
Episode: Building Trust in Clinical AI with Yaw Fellin of Wolters Kluwer Health
Date: October 20, 2025
Host: Erica Spicer Mason
Guest: Yaw Fellin, Senior Vice President and General Manager of Clinical Decision Support and Provider Solutions, Wolters Kluwer Health
Episode Overview
This episode explores how clinical intelligence and artificial intelligence (AI) are shaping the future of healthcare, with a focus on building trust and driving adoption. Yaw Fellin discusses the evolving role of AI and clinical decision support tools in tackling major challenges facing hospitals and health systems, such as clinician burnout, rising costs, and growing care complexity. The conversation emphasizes practical approaches to integrating trustworthy AI: the necessity of transparency, expert involvement, and strong governance, along with strategies for workflow adaptation and results-focused experimentation.
Key Discussion Points & Insights
1. Yaw Fellin’s Role & Wolters Kluwer Overview ([00:33]–[02:07])
- Yaw’s Background: Over 20 years in healthcare, with leadership roles at organizations like Optum and The Advisory Board Company.
- Current Focus: Now leads all aspects of clinical decision support at Wolters Kluwer, especially the UpToDate product.
- Company’s Mission: Delivering evidence-based clinical intelligence for providers, patients, and nurses, integrated directly into clinical workflows.
2. The Value of Clinical Intelligence in Addressing Industry Challenges ([02:07]–[04:40])
- Ongoing Challenges: Cost pressures, workforce strain, and care complexity remain acute.
- Value Proposition of Clinical Intelligence:
- "I think of this as having a clinical thought partner...an expert peer clinician built into systems and workflows." — Yaw Fellin ([03:23])
- Saves time and helps clinicians collaborate for better care decisions.
- Potential for improved outcomes through more informed, reference-based decision-making.
3. Building Trust in Clinical AI: Guiding Principles ([04:40]–[10:06])
- Skepticism is Declining:
- Growing patient comfort with AI (e.g., using ChatGPT to research health issues).
- Survey findings: 40% of providers are ready to use AI-supported clinical decision support; two-thirds recently changed their views towards embracing AI solutions.
- Three Key Principles for Trustworthy AI:
- Transparency:
- Not just showing sources, but also revealing the system’s assumptions and reasoning.
- "In the work that we’re doing, one of the key guiding principles is...we actually want to show you the assumptions or the clinical reasoning that the AI system took to arrive at the answer." — Yaw Fellin ([06:45])
- Expert in the Loop:
- Not just “a human,” but specifically trained clinical leaders and specialists validating the outputs.
- "I would actually be a horrible human to be in the loop...for me to be in the loop...doesn't improve the safety." — Yaw Fellin ([07:57])
- Governance:
- Addressing risks from “ungoverned” AI use at the end-user level.
- Strong, thoughtful governance frameworks needed for responsible, safe adoption at scale.
4. Case Study: St. Luke's and Wolters Kluwer’s Expert AI ([10:54]–[12:57])
- Real-World Example:
- St. Luke’s University Health Network implemented Wolters Kluwer’s new generative AI-powered clinical decision support.
- St. Luke’s highlighted as a model for transparency and robust governance in AI adoption.
- Rolled out incrementally, aligned with their care delivery goals.
- "...the focus that they've put on governance...the evaluation and the phasing and rollout..." — Yaw Fellin ([12:27])
5. Workflow Integration Strategies ([12:57]–[15:56])
- Integration is Essential:
- EMRs (Electronic Medical Records) are central but often slow to innovate.
- New technologies, particularly “ambient” systems, are creating fresh possibilities for integrating decision support into workflows in more seamless, less disruptive ways.
- Anticipation that both EMR and ambient solutions will continue to evolve and unlock better patient outcomes.
6. Priorities for Innovation: Experimentation & Outcome Alignment ([15:56]–[18:12])
- Top Immediate Priority:
- Organizations must prioritize "intentional experimentation"—testing, learning, and iterating with new technologies.
- "The ability to experiment, to pivot when those experiments aren't working, that would be number one." — Yaw Fellin ([16:31])
- Results Must Align with Outcomes:
- Experiments should be directly tied to improving organizational goals and patient outcomes.
- "I think the most important result that we can all be aiming towards is improving outcomes..." — Yaw Fellin ([17:43])
7. Final Thoughts: Need for Better AI Validation ([18:33]–[20:07])
- Validation Gaps:
- Current public benchmarks (e.g., USMLE-based testing) are insufficient; they don’t reflect the complexity of real-world, point-of-care decisions.
- Many private benchmarks show only 50–60% AI accuracy, which would be unacceptable for clinical practice.
- "If I told you your doctor was going to be 50–60% [accurate]...good question as to if that would be an acceptable threshold." — Yaw Fellin ([19:37])
- Call to Action:
- More rigorous validation standards are urgently needed for clinical AI. Wolters Kluwer is publishing research on this soon.
Notable Quotes & Memorable Moments
- Clinical Intelligence as a "Thought Partner": "I think of this as having a clinical thought partner...an expert peer clinician that is built into hospitals’ workflows." — Yaw Fellin ([03:23])
- On the Need for True Expertise in AI Loops: "You don’t just want any human in the loop...you absolutely need the right experts—really the clinical leaders and the specialists—to validate these systems." — Yaw Fellin ([07:31])
- On Governance: "We just see a lot of ungoverned use of AI systems...and on the flip side, tremendous hospital peers that are really being thoughtful about the governance process." — Yaw Fellin ([09:15])
- On Experimentation: "The ability to experiment, to pivot when those experiments aren't working, that would be number one." — Yaw Fellin ([16:31])
- On the Importance of Improved Validation: "Most public benchmarks...aren't very representative of what happens at the point of care...validation is probably a topic that's not getting quite enough attention." — Yaw Fellin ([19:05])
Important Timestamps
- Yaw’s Introduction & Background: [00:33]–[02:07]
- Defining Clinical Intelligence & Its Impact: [02:46]–[04:40]
- Guiding Principles for Trustworthy AI: [05:25]–[10:06]
- Case Study — St. Luke’s University Health Network: [10:54]–[12:57]
- Workflow Integration Strategies: [13:26]–[15:56]
- Prioritizing Experimentation & Alignment with Outcomes: [15:56]–[18:12]
- Call for Improved Validation in Clinical AI: [18:33]–[20:07]
Conclusion
Yaw Fellin underscores that trust in clinical AI requires transparency, deep clinical expertise, and rigorous governance—especially as skepticism gives way to cautious optimism. Successful organizations are those that experiment intentionally, prioritize seamless workflow integration, and relentlessly focus on patient outcomes, all while demanding high standards of validation from their AI systems. As AI’s influence grows, the industry must keep its standards high to harness the promise of clinical intelligence without sacrificing safety or efficacy.
