WSJ Tech News Briefing: Chatbot Confidential
Episode: How to Protect Your Health Data When Using AI
Release Date: April 13, 2025
Host: Nicole Nguyen, Personal Tech Columnist at The Wall Street Journal
Introduction: The Growing Intersection of AI and Personal Health
In the final installment of the Chatbot Confidential series, host Nicole Nguyen delves into the increasingly common practice of using generative AI for personal health inquiries. While AI tools like ChatGPT offer convenience in managing personal tasks, their application in sensitive areas such as health data raises significant privacy and accuracy concerns.
Real-World Use Cases and Initial Concerns
Nicole opens the discussion with a listener experience shared by Robert Garrison from Texas, who describes uploading his medical test results into ChatGPT to compare his health metrics against those of different age groups:
Robert Garrison ([01:25]): "I put those results into ChatGPT and said, do me a favor and compare my stats to other people in my age range. But then I got competitive and asked it to also compare it to people much younger than me."
Initially, Garrison wasn't worried about data privacy, considering his information non-sensitive:
Garrison ([02:10]): "These were all just general stats that really I wasn't concerned if anybody knew what my blood pressure was, what my cholesterol level was, or weight."
However, his perspective shifted upon realizing that his data contributes to the AI's large language model:
Garrison ([02:28]): "I know it goes into their large language model. I really haven't researched enough to find out if they share the information further, so probably something for me to investigate."
Data Privacy Concerns in AI Applications
Nicole relates Garrison's apprehensions to her personal experience, highlighting the risks of sharing sensitive health information with AI chatbots. She raises a critical question:
Nicole Nguyen ([02:42]): "How dangerous, really, is it to upload personal information, especially something sensitive like medical info, into a GenAI chatbot?"
Corynne McSherry, Legal Director at the Electronic Frontier Foundation (EFF), provides expert insight on data ownership and control:
Corynne McSherry ([04:13]): "Once you hand over information to a third party that has no obligations to you, it's out of your control. And if there's a data breach, you don't have any control over that."
McSherry emphasizes the potential long-term implications of data misuse, drawing parallels to the Dobbs v. Jackson Women's Health Organization Supreme Court decision:
McSherry ([04:31]): "Similarly, we don't know what will happen to the data we give AI chatbots, and people are willingly offering up intimate info to these companies."
The Seductive Nature of AI Conversations
McSherry warns about the interactive nature of chatbots that may encourage users to divulge more information than intended:
Corynne McSherry ([05:08]): "You have this interactive conversation... that can lead people to actually be surprisingly open and share maybe a little more information than they intended to with the AI."
Strategies for Safe AI Usage
Despite the risks, the use of AI chatbots is expected to grow. Nicole outlines methods to safeguard personal data:
- Opting Out of Model Training: OpenAI lets users disable the setting that allows their chats to train future models, and delete individual conversations ([06:38]).
- Using Privacy-Focused Chatbots: Alternatives like Duck AI anonymize prompts and prevent conversations from being used in model training ([07:00]).
- Redacting Personal Information: Remove or obscure sensitive details from queries before interacting with a chatbot ([08:33]).
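The redaction step above can even be partially automated before a prompt ever leaves your machine. As a minimal illustration (not from the episode), the sketch below uses Python's standard `re` module to replace a few common identifier formats with placeholder labels; the specific patterns and labels are assumptions for demonstration, and a real redactor would need far broader coverage (names, addresses, record numbers, and so on).

```python
import re

# Illustrative patterns only -- real personal data takes many more forms.
# Each placeholder label stands in for the matched sensitive value.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "[DATE]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace every match of each pattern with its placeholder label."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(label, prompt)
    return prompt

if __name__ == "__main__":
    print(redact("Born 4/12/1961, reach me at 555-867-5309 or jd@example.com"))
```

Running the snippet strips the date of birth, phone number, and email address from the sample sentence before it could be pasted into a chatbot.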
Evaluating the Accuracy of Medical Advice from Chatbots
To assess the reliability of AI-generated health advice, WSJ personal health reporter Alex Janin conducted a comparative analysis of responses from three popular chatbots: ChatGPT, Anthropic's Claude, and Microsoft's Copilot. The prompt used was:
"I accidentally swallowed a toothpick. Am I going to be okay? Should I go to the hospital?"
ChatGPT ([09:21]):
Provided a structured, bullet-pointed response outlining potential risks but lacked a sense of urgency. Alex noted:
Alex Janin: "If you have no symptoms yet, monitor your symptoms. My understanding is when you swallow a toothpick, it's a serious medical emergency."
Anthropic's Claude ([10:32]):
Delivered a clear and urgent message, emphasizing immediate medical attention:
Alex Janin ([10:32]): "It's a serious situation, it requires immediate medical attention. You should go to the emergency room right away."
Microsoft's Copilot ([11:26]):
Offered accurate information but did not stress urgency as effectively as Claude:
Alex Janin ([11:33]): "It probably wasn't immediate or urgent enough compared with Claude... it didn't give information on what you shouldn't do."
Medical Context:
Referencing a 2014 analysis, Alex highlighted the severity of such incidents:
Alex Janin ([12:18]): "Nearly 10% of those cases were fatal. We're talking about a really serious medical emergency."
Company Responses and Best Practices
In response to WSJ's evaluation:
- OpenAI: Emphasized user safety and encouraged seeking professional care.
- Anthropic: Designed Claude to focus on prompting users to obtain medical help.
- Microsoft: Noted that Copilot provides general medical information but advises consulting a doctor for health concerns.
Conclusion: Balancing AI Utility with Privacy Protection
Nicole wraps up by reiterating the importance of vigilance in protecting personal data when using AI chatbots:
- Protect Your Data: Avoid uploading sensitive or personally identifiable information.
- Opt for Privacy Settings: Disable model training and regularly delete conversation histories.
- Strengthen Account Security: Use strong passwords and enable two-factor authentication.
She underscores that while AI chatbots are valuable tools, they should complement, not replace, professional medical advice.
Alex Janin ([13:39]): "It's a good starting point, but it shouldn't be your ending point. You should always pick up the phone and call your doctor or consult a medical professional about your health in a situation where you're worried you're dealing with a medical emergency."
Production Credits:
Produced by Julie Chang with support from Wilson Rothman and Kathryn Millsop. Mixed by Shannon Mahoney. Development Producer: Aisha Al-Muslim. Deputy Editors: Scott Salloway and Chris Zinsley. Head of News Audio: Philana Patterson.
