
Patients and doctors both are turning to AI for help with diagnosing ailments and managing chronic issues. Should we trust it?
Jonathan Hill
Support for Explain It to Me comes from Anthropic, the team behind Claude. Ever have a question where you just can't find the answer? Claude is an AI that's designed for exactly that: those mysteries that need real exploration. It can help you dig into the layers, piece together scattered information, and work through complexity until things start making sense. It's the thinking companion for anyone who refuses to accept, "I guess we'll never know." You can try Claude for free at claude.ai/explainittome. We all have moments where we could have done better. Like cutting your own hair. Yikes. Or forgetting sunscreen, so now you look like a tomato. Ouch. Could have done better. Same goes for where you invest. Level up and invest smarter with Schwab. Get market insights, education, and human help.
Schwab/Wondery Sponsor Announcer
When you need it. Learn more at schwab.com.
Jonathan Hill
I didn't know what to do, so I turned to ChatGPT.
Dr. Dhruv Kullar
How do you integrate this in a way that retains what is best about medicine?
Dr. Eric Topol
They all say that healthcare is a sweet spot for AI.
Jonathan Hill
This is Explain It to Me from Vox. I'm Jonathan Hill. A couple weeks ago I went to the doctor, and there was a moment during the appointment that really surprised me. She turned her computer monitor towards me, and there on the screen was this colorful dashboard with all kinds of numbers and percentages. She explained that she'd entered my information into a database with millions of other patients, and that database used AI to predict my most likely outcome. There it was, a snapshot of my future. Or at least maybe my future. Usually I'm skeptical when it comes to AI, but I do trust my doctors. So if I trust them, I should trust this technology too, right? It turns out a lot of you already do.
Emergency Department Doctor
I have used ChatGPT to diagnose myself.
Listener Voice
ChatGPT cured my acne.
ER Doctor
ChatGPT has actually helped me navigate the disease better than most of the doctors. I found out the gender of my second baby by using ChatGPT. ChatGPT is honestly the most calming, reassuring voice of, like, "Hey, great question."
Jonathan Hill
Today on Explain It to Me: Paging Dr. Chatbot. Paging Dr. Chatbot. How AI is shaping the way we get medical care. We'll cover the do's and don'ts of self-diagnosis, how medical professionals are using these tools, and hear a doctor make the case for why AI is the key to a more human experience at the doctor's office. Full disclosure: Vox Media has a partnership with OpenAI. To start, I had to make an appointment with a doctor for an interview.
Dr. Dhruv Kullar
My name is Dhruv Kullar. I'm a physician at Weill Cornell Medicine in New York. I'm also a health services researcher here, as well as a writer at the New Yorker magazine.
Jonathan Hill
One of the things he's written about for the New Yorker: medical care and AI. Okay, so as a doctor, what do you think about folks who are self-diagnosing with AI chatbots?
Dr. Dhruv Kullar
Part of me feels like this is a natural thing that's going to happen, particularly in a system as difficult to access as ours is and as difficult to navigate as ours is. And AI is so fluent and so persuasive that it makes a lot of sense that people are starting to enter their symptoms into these chatbots and try to get diagnoses. But there's also real risks if you over-rely on AI. I mean, these things are not infallible. They can give you misleading or incorrect medical information.
Dr. Eric Topol
You can give it a prompt and it will give you something back that's extremely convincing, and it's completely wrong. The GPT's job is to convince you that it's right. You should be careful. My worst fears are that we cause significant...
Dr. Dhruv Kullar
We, the field, the technology, the industry, cause significant harm. You know, one of the things that's so interesting about these chatbots is that they're not like a COVID test or an MRI, where you get the answer that you get. I mean, how accurate these chatbots are really depends on how you're prompting them. And so in the piece I talked about this particular chatbot called Cabot, which was developed at Harvard. It's not in clinical use yet; it's more of a research tool. But it can perform exceptionally well, almost in a superhuman way, on these specific, very challenging, complex clinical cases that are curated in a perfect way. But the way that these chatbots perform depends on how the information you give them is organized. So if you give certain broad strokes, or you don't emphasize the right details, you could get a very different and possibly incorrect diagnosis. You know, I will mention there was a recent survey that found that something like 1 in 5, around 20 percent, of Americans said that they had turned to a chatbot for advice that later turned out to be incorrect. And so certainly there's a lot of incorrect information coming out of them.
Jonathan Hill
I'm going to be honest: if I'm sick or worried about how I'm feeling, I have gone to Dr. Google. You know, I have been in those WebMD trenches. For those that are using AI, are there things that they should do or shouldn't do to get the most accurate information?
Dr. Dhruv Kullar
Sure. I mean, first of all, you're not alone. A lot of people for years have been using Google, and now AI is kind of the latest iteration. And I think it can be potentially revolutionary and transformative for people if they use it in the right way. I don't think the right way is just to put in your symptoms and ask for a diagnosis; at least I don't think that's the right thing to do right now. But there are really important ways that people can use it for benefit. You know, if you have symptoms, you can ask the AI to rate the urgency of those symptoms, to list possible conditions that could explain them, and to give some sense of which conditions might be most likely. I think it might be helpful for people to ask about red-flag symptoms: those are warning signs that suggest you might have a more serious condition. If you've gone to the doctor and you have lab results or clinic visits, an AI might be able to walk you through those lab results in greater detail, and it might be able to help you prepare questions for your next visit. And so in all those ways, I think it can be a really helpful adjunct to the way that people are currently receiving care.
Jonathan Hill
Okay, so you're saying it may not be a 100% bad idea to use ChatGPT to interpret what your doctor's telling you?
Dr. Dhruv Kullar
No, I don't think so at all. And you know, part of the challenge here is that healthcare is so resource-constrained in a lot of ways. There's such enormous time pressure. Doctors and nurses don't always have the time and attention to explain every diagnosis and treatment in the level of detail that we might want. And AI does have unlimited time and attention, in a way; it can explain things at whatever level of sophistication you need. It can help people navigate the medical system. It can help, you know, patients with limited access to care. Some people are already starting to use it in an interesting way. I've spoken to patients who are now asking if they can record their conversations with physicians, and then uploading those transcripts into ChatGPT to have it explain what happened in that visit in greater detail, and then continually asking questions, probing for more information. And that has been really helpful for a number of patients that I've spoken with. But there are also challenges. I mean, these AIs still hallucinate; they still make things up. They may mix up one patient for another. I spoke to a woman whose own medical conditions were being confused with her mother's, and it became a really confusing situation when she was speaking with a chatbot. And so because these chatbots are so fluent and they're so persuasive, in a way, it can be challenging to figure out when they're actually being inaccurate. And so that's the note of caution that I want to sound as well.
Jonathan Hill
We got a call about that, not from a patient, but from a doctor. They're worried about how everyday people are using chatbots to diagnose themselves.
ER Doctor
I work as an ER doctor, and I have noticed a lot more patients coming in after having talked to ChatGPT, trying to figure out what's going on with them. On the one hand, I find people are asking really great questions; they're often self-educating tremendously. On the other hand, I sometimes feel like the things they're bringing up are kind of random, and it's hard to reassure people that they're going to be okay, or to convince them that I think that's really unlikely. There's a lot of anxiety in the air, and I think ChatGPT sometimes makes that worse.
Jonathan Hill
So is this a problem you've dealt with?
Dr. Dhruv Kullar
You know, it's a real challenge. In a way, it's a more sophisticated version of what Dr. Google has put out there for the past few decades. When you look up your symptoms online, there's often a range of potential diagnoses that are listed. And it's only natural for the human mind to gravitate towards the most concerning or the most dangerous ones; those are the ones that represent the greatest threat. And the challenge with using these things is that they don't come with a lot of context. They don't have the context that you might have if you came to those medical diagnoses in a clinical setting with a physician or another clinician. And so there's this challenge of helping people actually understand the context around the words and the diagnoses that they're learning about online. But there's also this challenge of, you know, is AI going to steer people away from medical attention? So in the piece I note this poison control center in Arizona that reported a drop in the overall call volume they were getting, but a rise in severely poisoned patients. And the suggestion here was that AI tools could have steered people away from needed medical attention. And so this is another part of the challenge that people are starting to encounter.
Jonathan Hill
What can't Dr. Chatbot tell us, you know, like, what is it that doctors can do for patients that chatbots can't?
Dr. Dhruv Kullar
Right now, there's a lot that doctors do that chatbots can't. I mean, they're not reasoning clinically in the way that a doctor is reasoning. They're not able to come to the same judgments and integrate patients' values and preferences and circumstances in the way that a physician is. You know, managing pain, talking to families, helping people understand their options, guiding them through the trade-offs that occur in any medical setting. So as helpful as these AI technologies can be, they're only going to be part of the solution, at least for the foreseeable future.
Jonathan Hill
When we get back, Dhruv is going to stick around, and we'll ask him how AI is changing the way doctors are practicing medicine. Support for Explain It to Me comes from Anthropic, the team behind Claude. Some questions need more than a quick search, the kind where you want to really understand what's happening, not just get a basic overview. That's where Claude comes in. Claude is an AI thinking partner designed for people who enjoy digging deeper. It lets you upload documents, explore multiple perspectives, and piece together the context that might make complex topics finally make sense. Claude can analyze documents up to 200 pages, search current sources with proper citations, and work through problems step by step. What makes it different is how it explores complexity with you. Rather than rushing to simple answers, it helps you connect scattered information and understand the deeper patterns. Whether you're researching for work, trying to understand current events, or working through personal decisions that matter to you, Claude matches your curiosity and commitment to getting the full picture. You can try Claude for free at claude.ai/explainittome and see why the world's best problem solvers choose Claude as their thinking partner.
Schwab/Wondery Sponsor Announcer
Support for Today, Explained comes from Wondery and their new podcast, Lawless Planet. It unfolds almost like a true crime podcast, I've been asked to tell you, but it is about the global climate crisis: complex, wide-ranging stories happening in every corner of the planet. On Lawless Planet, the new podcast from Wondery, you will hear stories from the depths of the Amazon to small-town America. Host Zach Goldbaum takes you around the world as he investigates stories of conflict, corruption, and resistance, and highlights activists risking their lives for their beliefs, corporations shaping the planet's future, and the everyday people affected along the way. Each episode takes you inside the global struggle for our planet's future: mysterious crimes, high-stakes operations, and billion-dollar controversies, to reveal what's truly at stake. You can follow Lawless Planet on the Wondery app or wherever you get your podcasts. You can listen to new episodes early and ad-free right now by joining Wondery Plus in the Wondery app, Apple Podcasts, or Spotify.
Jonathan Hill
We're back. This is Explain It to Me. I'm Jonathan Hill. Before the break, we heard from Dr. Dhruv Kullar about how folks are using AI to help them understand their symptoms, come up with treatments, and even talk to their doctors. And those doctors are consulting AI too. They've got their own chatbots, which are trained on medical research and patient data and even suggest their own diagnoses. And some physicians are listening.
Emergency Department Doctor
I work in a hospital in an emergency department, and one of the cool things about being an ED doc is that you never know what you're going to see. Patients come in with a question for you, and you've got to be the person that gives them an answer and gives them next steps in terms of a solution. And so there have been a couple really helpful times where I've typed in a patient's symptoms, for example, a patient coming in with abnormal lab values, and a little bit of their history, and it's helped me be more confident in what I think the diagnosis is.
Jonathan Hill
Okay, Dhruv, how common is that? Does that sound like something you hear a lot?
Dr. Dhruv Kullar
You know, I think this is one of the fastest uptakes of any technology that I've seen in medicine, certainly since I've started practicing. So many of my colleagues now turn to generative AI models, and other forms of predictive analytics, to make decisions about the patients they're caring for. And I think these things are going to be incredibly powerful, best used as a really good second opinion, a way to get a consultant's advice in basically any specialty at any time. You know, you can put in a patient's symptoms, and it might remind you of certain diagnoses, raise rare diagnoses that you haven't seen in months or years, and give you expertise and support that wouldn't otherwise be possible. Now, I think this really needs to be balanced with something else that we're starting to see, which is this idea of cognitive deskilling.
Jonathan Hill
Not only does AI make it so that you're not learning those skills, new research suggests that it's also making you unlearn skills that you previously knew.
Dr. Dhruv Kullar
You know, if you're not doing the critical thinking of going through a patient's case, understanding their problems, using your own judgment to arrive at a diagnosis, what happens to the skills that doctors have? You know, there's evidence already that doctors can get deskilled pretty quickly.
Jonathan Hill
The doctors' baseline performance got worse after they got used to using AI, which creates a risk if the AI fails, is unavailable, or just misses something.
Dr. Dhruv Kullar
And so the question then becomes, you know, in a future in which AI basically pervades medicine, and it's extremely effective and useful, is it a big deal that we've lost some of the skills that we used to have? You know, in the past, doctors were probably better at listening to heart murmurs and doing certain physical exams, and now we have technology like echocardiograms or CT scans that can replace that. I don't think people feel like we've had a huge loss there. But I do think there's something distinct about the critical thinking that goes into diagnostic work. So I want to be very careful that, you know, we really use this more as the second opinion, rather than generating the initial set of thinking using AI.
Jonathan Hill
Yeah, I wonder how common this use is, because, you know, we have shows like House, differential-diagnosis people, or ER.
TV Clip
It could be hyperaldosteronism or Barger Syndrome.
TV Clip
Carter, put that damn book down.
Jonathan Hill
Or The Pitt.
Supporting Voice/Producer
So he's not.
Jonathan Hill
He's not coming back?
TV Clip
No. What happens now? He's hooked up to all those machines.
TV Clip
Take some time, try to process this news.
Jonathan Hill
Personally, I'm a Pitt-head. I love that show. And, you know, a quote-unquote good doctor flexes their brain and, you know, maybe uses some books. But, I don't know, is using AI, I guess, for lack of a better word, is it cheating?
Dr. Dhruv Kullar
No, I don't think it's cheating. You know, I think the challenge, again, is how do you maintain critical thinking skills while offloading cognitive work that can be done by machines? And one way I've started to think about this, and this is a concept that came from a physician I interviewed for the piece, Dr. Gurpreet Dhaliwal at UCSF: he told me, you know, we shouldn't necessarily be thinking about AI as solving the medical diagnosis. It's better thought of as a partner in what he called wayfinding, assisting doctors and patients along the diagnostic journey. And that might involve alerting doctors to a recent study proposing a helpful blood test that could be used to aid in diagnosis, or looking up a lab result that happened to be in the medical record from decades ago. You know, there's a real difference between getting the right answer and actually competently caring for people along their medical journey.
Jonathan Hill
Okay, we've talked a lot about the doctor patient relationship, but healthcare is a lot more than that. Where else is AI showing up?
Dr. Dhruv Kullar
There's a lot of ways that people are trying to use AI in medicine. I think the first area that's going to have a big impact is the administration of healthcare: entering things in the medical record, capturing diagnoses of patients who are coming in, writing orders, helping people navigate the medical system. All these administrative tasks are in some ways the low-hanging fruit of medicine, and they rack up a lot of costs. Another area is prediction, or personalization. What does this new guideline mean? How likely is this medication to be effective? Should you use this treatment or not? This procedure, does that make sense for you? So I think AI can do a lot in terms of personalization and prediction of both risk and benefit from particular medications. And then there's this whole area that we haven't talked about yet, which is drug discovery and development. I think there's a tremendous amount of potential for AI to supercharge drug discovery, so that in a handful of years we have a lot more options, and potentially options for conditions that thus far are incurable or very difficult to treat. And so, at least right now, I think the way AI can be most helpful is in helping people prepare for their interactions with the medical system, and hopefully making those interactions more seamless.
Jonathan Hill
So the machines are here and there's an argument that they could actually make our relationships with our doctors more human. We'll hear that next.
Supporting Voice/Producer
Support for the show comes from Charles Schwab. At Schwab, how you invest is your choice, not theirs. That's why, when it comes to managing your wealth, Schwab gives you more choices. You can invest and trade on your own, plus get advice and more comprehensive wealth solutions to help meet your unique needs. With award-winning service, low costs, and transparent advice, you can manage your wealth your way at Schwab. Visit schwab.com to learn more. That sound is the leaky showerhead that came with your rental, the one that has you starting every morning with a low-pressure nightmare that's nearly your age. Stop settling for someone else's shower. Consider this your wake-up call to swap it for the relaxing feel of a Moen showerhead, and see how one easy change changes everything. Water designs our life. Who designs for water?
Jonathan Hill
Moen. You're listening to Explain It to Me.
Dr. Eric Topol
Well, I love seeing patients, you know, I really like to listen and help them as much as I can. And that's what medicine's all about. That's what drew me in 40 years ago.
Jonathan Hill
Dr. Eric Topol is a physician scientist at Scripps Research. He also founded the Scripps Research Translational Institute, which means he thinks a lot about the ways technology can advance medicine and he's worried that the personal aspect of medicine is slipping away.
Dr. Eric Topol
I think most people are familiar with the tremendous erosion of the patient-doctor relationship, because we're talking about seven minutes for a routine follow-up visit, or 12 minutes for a new patient. Very limited time. And that time is often lost, as far as face-to-face contact, to typing into a keyboard and looking at screens, rather than being face to face, eye to eye, with patients. And then, of course, there's the data-clerk function of doing all the records and ordering of tests and prescriptions and pre-authorizations that each doctor is saddled with after the visit. So it's a horrible situation, because the reason we went into medicine was to care for patients. And you can't care for patients if you can't even have enough time with them, listen to them, you know, really be present, have trust, and basically have what used to be, back in the 70s and 80s, a precious, intimate relationship. So we don't have that now, by and large, and we've got to get that back.
Jonathan Hill
Yeah. What caused that change? Why did that shift happen in the relationship between patient and doctor?
Dr. Eric Topol
If I were to simplify it into three words, it would be the business of medicine. And basically the squeeze was on to see more patients in less time to make the medical practice money.
Jonathan Hill
You've literally written a book about how AI can transform healthcare and make healthcare human again. Can you explain that idea? Because my first thought when I hear AI in medicine is not, oh, this will fix it and make it more intimate and personable.
Dr. Eric Topol
Who would have the audacity to say technology could make us more human? Well, that was me, and I think we are seeing it now. So the gift of time will be given to us through technology. Now, I'll walk through a few examples. One is that we can capture the conversation with AI ambient natural language processing, and we can make a better note than has ever been made by doctors from that whole conversation. And now we're seeing some really good products that do that. They don't just capture the note; they include audio links for the patient, in case there was any confusion or something was forgotten during the discussion. But they also do all these things to get rid of data-clerk work, so that when the two get together, they really are getting together. And I think, even with the physician shortage that we have today, we can leverage this technology to make medicine much more efficient, but also much more about human-to-human bonding.
Jonathan Hill
Do you worry at all that, you know, if that time gets freed up, if it's like, okay, we have fewer administrative tasks and more time to spend on patients, what's going to keep administrators from saying, all right, well, then you've got to see more patients in the same amount of time, or you've got to go even faster, you know?
Dr. Eric Topol
Well, yeah, no, I have been worried about that. That's exactly what could happen. AI could make doctors more efficient and productive, so: oh yeah, see more patients, read more scans and slides and whatnot. So no, we have to stand up for patients and for this relationship. And this is our best shot to get us back to where we were, or even exceed that.
Jonathan Hill
Yeah. I also wonder, you know, because there are so many issues that come up in medicine, and I think about bias in healthcare. I wonder how you think of that factoring into AI. Because on one hand I can see, like, okay, it's taking that out, but AI learns from human models, and humans have bias. Like, how do you see that?
Dr. Eric Topol
Yeah. So step number one is to acknowledge that there's deep-seated bias. You know, it's a mirror of our culture and society. However, we've seen so many great examples around the world where AI is being used in the hinterlands, for people with low socioeconomic status and low access to care, to give access and help promote better health outcomes, whether it be in Kenya for Penda Health, or for diabetic retinopathy in people that never had the ability to be screened, or mental health in the UK for underrepresented minorities. And so you can use AI if you deliberately want to help reduce inequities, and try to do everything possible to interrogate a model about potential bias.
Jonathan Hill
You talked about the disparities that exist. And in our country, if you have a high income, you can get some of the best medical care in the world here. And if you do not have that high income, there's a good chance that you're not getting very good health care. Are you worried at all that AI could deepen that divide? You know, people with money will have access to almost this kind of super-doctor, and those without will have to rely on chatbots instead, or, you know, something like that.
Dr. Eric Topol
I am worried about that. And we have a long history of not using technology to help people who need it the most. So many things we could have done with technology, we haven't done. Is this going to be the time when we finally wake up and say it's much better to give everyone these capabilities, to reduce the burden that we have on the medical system, if you call it a system, to help care for patients? Or other countries will get ahead of us on that. I mean, I think that's the issue: that's where we should be, making it level for all people. To me, that's the only way that we should be using AI, making sure that the people who would benefit the most are getting it the most, right? But we're not in a very good structural framework for that. I hope we'll finally see the light.
Jonathan Hill
What makes you so hopeful? I mean, I consider myself an optimistic person, but sometimes it's very hard to be optimistic about healthcare in America.
Dr. Eric Topol
It is. I would be the first to acknowledge that. But remember, we have 12 million diagnostic errors a year that are serious, with 800,000 people dying or getting disabled. That's a real problem. We need to fix that. And we have lots of ways to get to much higher levels of accuracy. So for those who are concerned about AI making mistakes: well, guess what, we've got a lot of mistakes right now that can be improved. I have tremendous optimism. I recognize the challenges. But, you know, if there were a better way to fix medicine, I don't know of it. So it's going to take time. We're still in the early stages of all this, but I am confident we'll get there. Eventually we won't even talk about AI in medicine. It'll be all embedded. It'll just be part of the practice of medicine. And someday we'll all be appreciative of it.
Jonathan Hill
That was Dr. Eric Topol of Scripps Research. Speaking of healthcare: open enrollment is coming up. Insurance can be really confusing, especially right now. Call in with your questions about insurance and FSAs and HRAs and PPOs and vision, and why it all works the way it does. We'll decode it for you. Or if you feel like you can't afford insurance with all these upcoming increases, we want to hear about that, too. 1-800-618-8545. You can also email us at askvox@vox.com. If you like this and other Vox podcasts, you can help make this work happen by becoming a Vox member. When you become a member, you get to listen to the show ad-free, and you also get a ton of other perks. Right now, we're having a sale on membership, which means you can get 30 percent off. Just go to vox.com/members and the deal is all yours. This episode was produced by Hady Mawajdeh. It was edited by Jenny Lawton, and our executive producer is Miranda Kennedy. Fact-checking was by Melissa Hirsch, with engineering by Adrienne Lilly and Brandon McFarland. Special thanks to Lauren Mapp. I'm your host, Jonathan Hill. Thanks so much for listening. Talk to you soon. Bye.
Explain It to Me (Vox) | October 26, 2025
Host: Jonathan Hill
Guests: Dr. Dhruv Kullar (Weill Cornell Medicine), Dr. Eric Topol (Scripps Research), Emergency Department Physicians
This episode explores the rapidly evolving role of AI, especially chatbots like ChatGPT, in modern healthcare. Host Jonathan Hill and expert guests discuss the promise and peril of AI-assisted self-diagnosis, the integration of AI into physician workflows, the risk of overreliance, and whether AI might paradoxically restore humanity to the doctor-patient relationship.
Jonathan Hill’s Personal Experience:
Jonathan recounts his own doctor’s use of an AI-powered dashboard to predict outcomes, raising his trust—and skepticism—regarding AI in medicine. He observes that patients seem increasingly comfortable letting AI weigh in on their health.
The Temptation & Risks of Chatbot Diagnosis:
“AI is so fluent and so persuasive that it makes a lot of sense… But there's also real risks if you over rely on AI… They can give you misleading or incorrect medical information.” — Dr. Dhruv Kullar [03:28]
“The GPT's job is to convince you that it's right. You should be careful.” — Dr. Eric Topol [04:00]
Advice for Patients:
Dr. Kullar suggests patients can harness AI wisely:
“I think it can be potentially revolutionary… if they use it in the right way.” — Dr. Dhruv Kullar [05:43]
Pitfalls with Chatbots:
“Because these chatbots are so fluent and so persuasive in a way, it can make it challenging to figure out when they're actually being inaccurate.” — Dr. Dhruv Kullar [07:56]
Doctors’ Experiences:
Some doctors use chatbots as diagnostic aids in complex or unusual cases, finding them useful as “second opinions.”
> “There's been a couple really helpful times where I've typed in a patient's symptoms… and then [AI] helping me be more confident in what I think the diagnosis is.” — Emergency Department Doctor [14:04]
Critical Thinking vs. Deskilling:
Dr. Kullar warns that while AI can supplement clinical reasoning, it can also erode (“deskilling”) doctors’ diagnostic abilities.
> “There's evidence already that doctors can get deskilled pretty quickly.” — Dr. Dhruv Kullar [15:49]
Hill underscores that “not only does AI make it so that you’re not learning those skills, new research suggests that it’s also making you unlearn those skills you previously knew.” [15:39]
“I want to be very careful that we really use this more as the second opinion rather than generating the initial kind of set of thinking using AI.” — Dr. Dhruv Kullar [16:19]
Is Using AI ‘Cheating?’
Hill draws analogies with TV doctor shows, asking if using AI is “cheating.” Dr. Kullar disagrees.
> “We shouldn’t be thinking necessarily about AI as solving the medical diagnosis… [but] assisting doctors and patients along the diagnostic journey.” — Dr. Dhruv Kullar [17:42]
Administrative Tasks:
AI is already streamlining back-office tasks: entering records, drafting orders, coding diagnoses, and managing appointments. These are “low-hanging fruit.” [18:45]
Personalization and Prediction:
AI can help tailor treatment, predict medication effectiveness, and clarify medical guidelines at the individual level.
Drug Discovery:
AI holds promise in developing new therapies, especially for rare or hard-to-treat diseases.
Dr. Eric Topol’s Vision:
Dr. Topol laments the erosion of the doctor-patient relationship but sees AI as a means to “give the gift of time back,” allowing physicians to focus on care rather than data entry or admin.
> “The gift of time will be given to us through technology… We can capture the conversation with AI… and now we're seeing some really good products that do that… so that when the two get together, they really are getting together.” — Dr. Eric Topol [23:53]
The Threat of Business Pressures:
Hill raises a concern: efficiency gains may be swallowed by administrators who demand doctors see even more patients, not spend more time per patient.
> “That's exactly what could happen… AI could be making [us] more efficient… So no, we have to stand up for patients and for this relationship.” — Dr. Eric Topol [25:15]
Bias, Equity, and Access:
“Step number one is to acknowledge that there's deep seated bias… But you can use AI if you deliberately want to help reduce inequities…” — Dr. Eric Topol [25:58]
A Realist’s Optimism:
Despite challenges, Topol is bullish that embedding AI into medicine will help address persistent errors and inefficiencies.
> “Remember, we have 12 million [serious] diagnostic errors a year… We need to fix that… I have tremendous optimism… someday we'll all be appreciative of it.” — Dr. Eric Topol [28:33]
"I have used ChatGPT to diagnose myself." — Emergency Department Doctor [02:05]
“ChatGPT has actually helped me navigate the disease better than most of the doctors.” — ER Doctor [02:12]
“Part of me feels like this is a natural thing that's going to happen, particularly in a system as difficult to access as ours is…” — Dr. Dhruv Kullar [03:28]
“The GPT's job is to convince you that it's right. You should be careful. My worst fears are that we cause significant harm.” — Dr. Eric Topol [04:00]
“You know, the challenge with using these things is that they don't come with a lot of context.” — Dr. Dhruv Kullar [09:10]
“The reason we went into medicine was to care for patients. And you can't care for patients if you can't even have enough time with them, listen to them, you know, really be present.” — Dr. Eric Topol [22:02]
“Who would have the audacity to say technology could make us more human? Well, that was me and I think we are seeing it now.” — Dr. Eric Topol [23:53]
For listeners interested in self-diagnosis, the guidance is clear:
Use AI as a tool, not a replacement for medical judgment. For providers, AI is a partner for better care—but maintaining the art of medicine is more important than ever.