Transcript
Carl Zimmer (0:00)
Chasing Life is supported by The World As You'll Know It, a podcast about the forces shaping the future. In this season, host and science journalist Carl Zimmer speaks to some of the most respected scientists in the field of aging research about the massive changes in human longevity and what comes next. Is our lifespan set, or will a breakthrough add decades to our lives? Can older brains be rewired to function like younger ones? Which so-called biohacks actually work? The World As You'll Know It is available now.
Sanjay Gupta (0:31)
This episode is brought to you by Polestar. There's only one true way to experience the all-electric luxury SUV Polestar 3, and that's to take a test drive. It can go from 0 to 60 in as little as 4.8 seconds with the dynamic handling of a sports car. But to truly understand how it commands the road, you need to be behind the wheel. Up to 350 miles of range. The 3D surround sound system by Bowers and Wilkins. It's all something you have to experience to believe. So book your test drive for Polestar 3 today at Polestar.com. Welcome to Paging Dr. Gupta.
Sanjay Gupta (1:05)
You know, I really love these episodes in large part because they're all about you. Your questions, your concerns, your curiosity about health and medicine. Topics that are near and dear to my heart. Whether it's something in the headlines or something that's happening in your own life, share it and I'll do my best to help break it down. New week, new questions. Kira is back. Who do we have first?
Kira (1:31)
Okay, we're kicking things off today with a question from Kim. She's a nurse out in Los Angeles, and she's thinking ahead about how tech might change her day-to-day. Here's her question.
Kim (1:41)
Hi, Dr. Gupta. My name is Kim and I'm a nurse. With the incoming era of AI, what's in store for the medical community? Whether that's a medical procedure, a surgical procedure, diagnostics, or even something as simple as making our notes on the patient's medical chart. Looking forward to hearing what you have to say. Thank you so much.
Sanjay Gupta (2:10)
Okay, Kim, thank you very much for this. This is a topic that I think about a lot. In full disclosure, I sit on the National Academy of Medicine, and there is a subcommittee on artificial intelligence that I sit on as well. So I've been pretty immersed in the intersection of AI and healthcare for a while. And I'll tell you two things as top lines. One is that I'm bullish on it. It's here. It's definitely here to stay, and it's already being transformative. And two is that you have probably already been affected by AI in healthcare. If you've had any kind of recent visit to a doctor, to a hospital, to a clinic, your care was probably already impacted by AI in some way. Let me break down a few basics, as I often do. You will hear about two main types of AI in healthcare: predictive AI and generative AI. Okay, so predictive AI is basically analyzing large sets of data, everything from patients' ages to symptoms to test results, and that can help doctors make more informed decisions. It looks at lots and lots of data; maybe it finds lots and lots of people who are just like the person being investigated. And it says, okay, here's the problem this person had, here are the outcomes that we see in thousands, hundreds of thousands of people around the country, around the world. And that helps predict what we should do. During colonoscopies, AI can, for example, flag polyps that might otherwise be deemed inconsequential. With mammograms, the FDA has already cleared two dozen AI tools to help spot early signs of breast cancer. And in stroke care, AI models can now pinpoint the timing of a stroke, sometimes twice as accurately as humans, which is really crucial because that will determine in part if someone can receive certain life-saving or life-altering treatments. Hospitals are using AI to catch signs of sepsis before they become obvious.
There are also tools the companies say can now detect things like bone fractures that may go unnoticed by the patient, and signs of over a thousand diseases that may exist even before symptoms show up. And then there is generative AI. I think that's what people often think of when they think of the ChatGPT-style stuff, but in healthcare it's mostly happening behind the scenes. One big use case for generative AI is documentation. So maybe you've heard of Microsoft's Dragon Copilot. This is a platform that kind of listens in during a visit, then writes up the clinical note that is generated afterwards, and helps draft letters that are sent to insurance companies to get medications or procedures approved. More advanced versions combine AI with real-world medical data. That's called ChatRWD, and they are continuously being tested to reliably answer doctors' clinical questions. There's a platform that I use quite a bit, and I think about a quarter of physicians in the country now use it, called OpenEvidence, which again is looking at these large sets of data and then using that data in real time to answer questions. How long do we wait to start aspirin after a person has had a procedure, had an operation? These are tools that I'm already using now. I will tell you, one thing that's interesting about these platforms is that there are very high expectations of how well they will work. You know, I think a lot of people think of AI platforms like they think of a computer. If you go to your computer and you ask your computer any question you might ask, you get an answer, and you sort of expect that that answer is accurate. You don't then go to another computer and ask another computer to verify what the first computer said. But AI is a little bit different in this regard. In some ways, it's less like a computer and more like a tool that is trying to replicate human reasoning, which can falter. Right? So there's a trust gap.
There was this 2023 survey that found most Americans feel discomfort with doctors using AI to manage their care. So, high expectations, low trust. There aren't many things in society like that. I would think, for example, autonomous vehicles might fall into that category. Even though there are car accidents all the time, and they're one of the leading causes of preventable death in the United States, if an autonomous vehicle gets into an accident, it almost feels existential because the expectations are so much higher. So, high expectations, low trust when it comes to things like AI. AI can make mistakes. It can hallucinate, as it's often referred to, especially if the platform's been trained on incomplete data or biased data. Privacy is still an issue. I mean, HIPAA applies to AI platforms in healthcare, but I think there are concerns about how that information might be stored or shared. So, bottom line, Kim: AI is here. I'm bullish on it. I think it's already making an impact. It's already working in the background. It's improving diagnostics, documentation, access. But as with many things in life, we often adopt a trust-but-verify model, and I think AI in healthcare should be treated the same way. Coming up: there are a lot of pain medications out there, but not all of them are right for every kind of pain. It's sometimes surprising what works best for what. I'll break it down after the break.
