Podcast Summary
Becker’s Healthcare Podcast
Episode: From Burden to Trust: How AI Can Humanize Healthcare
Date: October 30, 2025
Host: Brian Zimmerman
Guest: Dr. Josh Tamayo-Sarver, Vice President of Innovation at Vituity
Episode Overview
This episode explores the evolving role of artificial intelligence (AI) in healthcare, with a special focus on how agentic AI (AI capable of taking actions and orchestrating multi-step processes) can shift healthcare from administrative burden back toward meaningful, trust-based patient care. Dr. Josh Tamayo-Sarver brings a multidisciplinary perspective—clinical, technological, managerial, and statistical—to discuss both the promise and pitfalls of AI, strategies for restoring patient-clinician trust, and pragmatic advice for leaders hoping to adopt AI in health systems.
Key Discussion Points and Insights
Dr. Sarver’s Unique Perspective ("Suit, Scrub, Geek, and Quant")
- Dr. Sarver describes his background as “multi-dimensional mediocrity,” meaning he is merely competent in business (Suit), clinical medicine (Scrub), software engineering (Geek), and statistics (Quant), but rare in his ability to bridge all four domains.
- Quote: “It’s not very often that you find someone who can go across all four of those areas.” [01:00]
What Makes Agentic AI Critical in Healthcare
- Generative AI excels at summarizing information and knowledge retrieval, but lacks true conceptual understanding.
- Agentic AI can “decide” on tasks, sequence appropriate actions (sometimes using non-AI tools), and reliably execute, which generative AI alone cannot do.
- Quote: “Generative AI does great at... general knowledge retrieval, but when it comes to specific action taking, it really falls short.” [04:47]
- Memorable Analogy: generative AI and chess. It can explain the rules or evaluate moves, but if asked to actually play a game, it might start playing backgammon instead. [03:40–04:30]
- In practice, agentic AI orchestrates various tools—some AI, some not—to handle tasks like billing/coding by dividing diagnosis (AI) from coding (deterministic lookup).
Applications in Healthcare
- Billing and Coding Example:
  - Generative AI summarizes the diagnosis; agentic AI then triggers other tools to convert the diagnosis into a billable code, reducing manual errors and administrative effort.
- Clinical Decision Support:
  - Agentic AI can sequence existing clinical scoring or prediction models for better reliability (e.g., predicting sepsis with purpose-built machine-learning models rather than LLMs).
  - Quote: “You have to put all these pieces together and then come out with an action. Nothing happens until you have an action step coming out of it.” [08:41]
Efficiency vs. the Human Element: The Trust Factor
- Administrative Burden and Trust:
  - A major reason for declining trust in clinicians isn’t clinical competence but administrative overload—doctors prioritizing screens and checklists over direct human engagement with patients.
  - Quote: “We can do more than we could ever have done before... and yet we’re trusted less than we’ve ever been before.” [11:30]
- Potential for AI:
  - AI (agentic AI in particular) can take on “taking care of healthcare” so caregivers can focus on “taking care of patients.”
  - Reducing administrative tasks frees up time—but what clinicians do with that time will determine the real impact (more patients, or better care per patient?).
  - Quote: “If I spent the little amount of time I did before with those patients, I could now see five high acuity patients per hour... but that probably doesn’t actually increase that trust quotient or quality of care.” [13:55]
Organizational Adoption & Real-World Guidance
- Adoption Pitfalls:
  - Many implementations fail because solutions are driven by organizational goals rather than actual user (clinician/employee) frustrations.
  - Shadow AI use among frontline staff is a telling signal: it reveals unaddressed frustrations, and it shows that adoption happens organically when real pain points are solved.
  - Quote: “If you’ve identified what their problem is and that doesn’t align with something that frustrates them, then your solution to a problem they’re not frustrated by is another frustration.” [17:18]
- Practical Advice:
  - Leaders should start by finding out what frustrates their staff, not just what serves organizational priorities.
  - Dr. Sarver recommends direct, empathetic inquiry (“What’s frustrated you today?”) and mentions “frustration stations,” open forums where staff can vent problems, as a source of valuable insights.
  - Quote: “If I ask someone what frustrates them... people are usually pretty transparent and genuine and open about what their frustration is.” [19:22]
Re-centering on Patient Goals
- Dr. Sarver urges a return to focusing on what patients actually seek from healthcare engagements (their true, lived needs—not just clinical goals or metrics).
- Quote: “How do we get back to focusing on what the patients are really trying to accomplish and align what we’re doing around the patients?” [22:25]
Notable Quotes and Memorable Moments
| Timestamp | Speaker | Quote |
|-----------|---------|-------|
| 01:00 | Dr. Sarver | “It’s not very often that you find someone who can go across all four of those areas.” |
| 03:40–04:30 | Dr. Sarver | AI chess analogy: “If we ask it to play chess, it would suddenly start playing backgammon.” |
| 04:47 | Dr. Sarver | “Generative AI does great at... general knowledge retrieval, but when it comes to specific action taking, it really falls short.” |
| 11:30 | Dr. Sarver | “We can do more than we could ever have done before... and yet we’re trusted less than we’ve ever been before.” |
| 13:55 | Dr. Sarver | “If I spent the little amount of time I did before with those patients, I could now see five high acuity patients per hour... but that probably doesn’t actually increase that trust quotient.” |
| 17:18 | Dr. Sarver | “If you’ve identified what their problem is and that doesn’t align with something that frustrates them, then your solution to a problem they’re not frustrated by is another frustration.” |
| 19:22 | Dr. Sarver | “If I ask someone what frustrates them... people are usually pretty transparent and genuine and open about what their frustration is.” |
| 22:25 | Dr. Sarver | “How do we get back to focusing on what the patients are really trying to accomplish and align what we’re doing around the patients?” |
Timestamps for Key Segments
- Introduction & Dr. Sarver’s Background – [00:00–02:20]
- Generative vs. Agentic AI – Chess Analogy – [02:44–04:47]
- Agentic AI in Billing & Coding – [04:47–07:05]
- Agentic AI Technical Foundations – [07:05–08:24]
- Potential in Clinical Decision Support – [08:24–09:56]
- Administrative Burden and Trust – [10:50–13:35]
- Efficiency vs. Human Connection – [13:35–15:36]
- Advice for AI Adoption – [16:15–20:38]
- Approaching Staff Frustrations – [19:06–20:38]
- Final Thoughts: Refocusing on Patient Needs – [21:07–22:52]
Tone & Language
The conversation blends dry humor (“multi-dimensional mediocrity,” “suffering edge” technology) with deep technical and human insights. Dr. Sarver is candid and accessible, demystifying technical topics for a broad audience, while stressing empathy, human connection, and the importance of trust in healthcare.
Key Takeaways
- Agentic AI adds meaningful action and orchestration capabilities absent from generative AI—making it far more applicable to complex, real-world healthcare settings.
- Trust—not just efficiency or throughput—must be restored to healthcare, and AI’s greatest promise is in freeing clinicians from paperwork to reconnect with patients.
- Organizational AI adoption should start with solving real frustrations faced by clinicians and staff, not simply pursuing shiny technology or management-driven problem sets.
- The ultimate goal is a renewed focus on understanding and serving the fundamental reasons patients seek care, not just satisfying metrics or process improvement.
