Podcast Summary: Paging Dr. ChatBot
Today, Explained (Vox) | October 26, 2025
Host: Jonathan Hill
Guests: Dr. Dhruv Kullar (Weill Cornell Medicine), Dr. Eric Topol (Scripps Research), Emergency Department Physicians
Episode Overview
This episode explores the rapidly evolving role of AI, especially chatbots like ChatGPT, in modern healthcare. Host Jonathan Hill and expert guests discuss the promise and peril of AI-assisted self-diagnosis, the integration of AI into physician workflows, the risk of overreliance, and whether AI might paradoxically restore humanity to the doctor-patient relationship.
Key Discussion Points & Insights
1. Patients and Self-Diagnosis with AI
- Jonathan Hill's Personal Experience:
  Jonathan recounts his own doctor's use of an AI-powered dashboard to predict outcomes, raising his trust—and skepticism—regarding AI in medicine. He observes that patients seem increasingly comfortable letting AI weigh in on their health.
- The Temptation & Risks of Chatbot Diagnosis:
- Dr. Dhruv Kullar argues it’s “natural” for people to consult AI given healthcare’s barriers, but warns of substantial risks.
“AI is so fluent and so persuasive that it makes a lot of sense… But there's also real risks if you over rely on AI… They can give you misleading or incorrect medical information.” — Dr. Dhruv Kullar [03:28]
- Dr. Eric Topol concurs, pointing to AI’s “convincing” but sometimes entirely inaccurate output.
“The GPT's job is to convince you that it's right. You should be careful.” — Dr. Eric Topol [04:00]
- 1 in 5 Americans report getting incorrect advice from a chatbot [04:50].
- Advice for Patients:
  Dr. Kullar suggests patients can harness AI wisely:
  - Ask the AI to grade urgency and list possible conditions.
- Inquire about “red flag” symptoms.
- Use AI to interpret lab results or prep questions for doctors.
- But: Avoid using AI as a replacement for diagnosis.
“I think it can be potentially revolutionary… if they use it in the right way.” — Dr. Dhruv Kullar [05:43]
- Pitfalls with Chatbots:
  - "Hallucination" (fabricating information) and mixing up different patients' records are ongoing issues.
- Fluent AI can make mistakes hard to spot.
“Because these chatbots are so fluent and so persuasive in a way, it can make it challenging to figure out when they're actually being inaccurate.” — Dr. Dhruv Kullar [07:56]
2. The Physicians' Perspective: Chatbots in Clinical Use
- Doctors' Experiences:
  Some doctors use chatbots as diagnostic aids in complex or unusual cases, finding them useful as "second opinions."
  > "There's been a couple really helpful times where I've typed in a patient's symptoms… and then [AI] helping me be more confident in what I think the diagnosis is." — Emergency Department Doctor [14:04]
- Critical Thinking vs. Deskilling:
  Dr. Kullar warns that while AI can supplement clinical reasoning, it can also erode ("deskill") doctors' diagnostic abilities.
  > "There's evidence already that doctors can get deskilled pretty quickly." — Dr. Dhruv Kullar [15:49]
  Hill underscores that "not only does AI make it so that you're not learning those skills, new research suggests that it's also making you unlearn those skills you previously knew." [15:39]
  > "I want to be very careful that we really use this more as the second opinion rather than generating the initial kind of set of thinking using AI." — Dr. Dhruv Kullar [16:19]
- Is Using AI 'Cheating'?
Hill draws analogies with TV doctor shows, asking if using AI is “cheating.” Dr. Kullar disagrees.
> “We shouldn’t be thinking necessarily about AI as solving the medical diagnosis… [but] assisting doctors and patients along the diagnostic journey.” — Dr. Dhruv Kullar [17:42]
3. AI’s Broader Impact Beyond Direct Care
- Administrative Tasks:
  AI is already streamlining back-office tasks: entering records, drafting orders, coding diagnoses, and managing appointments. These are "low-hanging fruit." [18:45]
- Personalization and Prediction:
  AI can help tailor treatment, predict medication effectiveness, and clarify medical guidelines at the individual level.
- Drug Discovery:
  AI holds promise in developing new therapies, especially for rare or hard-to-treat diseases.
4. Can AI Make Medicine More Human?
- Dr. Eric Topol's Vision:
  Dr. Topol laments the erosion of the doctor-patient relationship but sees AI as a means to "give the gift of time back," allowing physicians to focus on care rather than data entry or administration.
  > "The gift of time will be given to us through technology… We can capture the conversation with AI… and now we're seeing some really good products that do that… so that when the two get together, they really are getting together." — Dr. Eric Topol [23:53]
- The Threat of Business Pressures:
  Hill raises a concern: efficiency gains may be swallowed by administrators who demand doctors see even more patients, rather than spend more time per patient.
  > "That's exactly what could happen… AI could be making [us] more efficient… So no, we have to stand up for patients and for this relationship." — Dr. Eric Topol [25:15]
- Bias, Equity, and Access:
  - Topol outlines that AI can either perpetuate inequality or be deliberately used to improve access for underserved populations, e.g., in Kenya or UK minority communities.
  "Step number one is to acknowledge that there's deep seated bias… But you can use AI if you deliberately want to help reduce inequities…" — Dr. Eric Topol [25:58]
  - He expresses hope that "this is going to be the time we finally wake up and say it's much better to give everyone these capabilities…" but admits the US isn't yet set up for equity. [27:27]
- A Realist's Optimism:
  Despite challenges, Topol is bullish that embedding AI into medicine will help address persistent errors and inefficiencies.
  > "Remember, we have 12 million [serious] diagnostic errors a year… We need to fix that… I have tremendous optimism… someday we'll all be appreciative of it." — Dr. Eric Topol [28:33]
Notable Quotes & Memorable Moments
- "I have used ChatGPT to diagnose myself." — Emergency Department Doctor [02:05]
- "ChatGPT has actually helped me navigate the disease better than most of the doctors." — ER Doctor [02:12]
- "Part of me feels like this is a natural thing that's going to happen, particularly in a system as difficult to access as ours is…" — Dr. Dhruv Kullar [03:28]
- "The GPT's job is to convince you that it's right. You should be careful. My worst fears are that we cause significant harm." — Dr. Eric Topol [04:00]
- "You know, the challenge with using these things is that they don't come with a lot of context." — Dr. Dhruv Kullar [09:10]
- "The reason we went into medicine was to care for patients. And you can't care for patients if you can't even have enough time with them, listen to them, you know, really be present." — Dr. Eric Topol [22:02]
- "Who would have the audacity to say technology could make us more human? Well, that was me and I think we are seeing it now." — Dr. Eric Topol [23:53]
Key Timestamps
- 01:00 — Jonathan describes his AI-powered doctor’s appointment
- 03:17-06:45 — Dr. Dhruv Kullar discusses AI self-diagnosis: use cases and risks
- 08:33 — ER doctor on anxious, self-diagnosing patients
- 14:04 — Emergency physician on using AI as a clinical aide
- 15:39-17:01 — The risk of “deskilling” physicians
- 18:45-20:08 — How AI is transforming healthcare admin, prediction, and drug discovery
- 21:36-24:58 — Dr. Eric Topol on restoring human relationships in medicine with AI
- 25:15-29:36 — Potential for AI to both widen and close equity gaps; reasons for cautious optimism
Takeaways
- AI is becoming omnipresent in medicine—for both patients seeking answers and providers seeking second opinions.
- Chatbots can empower and educate, but also risk spreading misinformation and fueling health anxiety.
- While AI can streamline admin and potentially enhance doctor-patient relationships, there are dangers: overdependence (“deskilling”), exacerbation of inequity, and structural pressures that could undermine gains in care quality.
- Experts are hopeful that, with thoughtful deployment and an equity focus, AI might actually help medicine become more human, not less.
For listeners interested in self-diagnosis, the guidance is clear: use AI as a tool, not a replacement for medical judgment. For providers, AI is a partner for better care—but maintaining the art of medicine is more important than ever.
