
Sponsor/Advertiser
This message comes from Warby Parker: prescription eyewear that's expertly crafted and unexpectedly affordable. Glasses designed in house from premium materials, starting at just $95, including prescription lenses. Stop by a Warby Parker store near you.
Marielle Segarra
Hey, it's Marielle. Heads up: we mention suicide in this episode. This is NPR's Life Kit. I'm Marielle Segarra. A few times in recent memory, when I've had an uncomfortable conversation or a moment of tension, or I've been dealing with some interpersonal dilemma, I have told a chatbot about it. I don't even think I said, you know, what should I do? It was more like I typed in what happened. And then the chatbot responded with some surprisingly helpful framing and ideas for how to recenter myself. Now, I did find the responses helpful, but I don't think I like the fact that I do this. Some part of me thinks it'd be better to talk to another human who I trust, or to solve the problem on my own. But the chatbot? It's right there, right away. Plus, I don't have to worry about it being judgmental, and I can stop talking to it at any time. Like a lot of people, I'm still figuring out how I want to use AI. But I'm an adult. I have many years of lived experience. I've been to therapy with a professional. I have other tools that can help me think through problems. This whole thing would be riskier if I were less experienced and more impressionable, if I were a teenager, for instance. Roughly one in eight teenagers say they've asked an AI chatbot for mental health advice instead of talking to another human. Pediatricians, parents, and online safety experts say that worries them. Keri Rodrigues heads up the National Parents Union, an advocacy group for families.
Keri Rodrigues
We hear this literally across the country from folks saying, I don't understand why my kid is being used as a guinea pig here. I can't keep up with how quickly this stuff is moving. I don't even know what to be looking for. No one's talking to me about it.
Marielle Segarra
One tip: you don't have to wait for your teen to bring up their conversations with AI bots. You can ask them. On this episode of Life Kit: how to talk to the teenagers in your life about AI. NPR's Ritu Chatterjee has been covering this, and she walks us through risks, warning signs, conversation starters, and boundaries we can set. That's after the break.
Sponsor/Advertiser
This message comes from Great Wolf Lodge, where there's family fun all under one roof, including an indoor water park, attractions, dining, and more. With 22 lodges across the country, you're only a short drive away from adventure. Learn more at greatwolf.com.

This message comes from Progressive Insurance. Do you ever find yourself playing the budgeting game? Well, with the Name Your Price tool from Progressive, you can find options that fit your budget and potentially lower your bills. Try it at progressive.com. Progressive Casualty Insurance Company and affiliates. Price and coverage match limited by state law. Not available in all states.

This message comes from Charles Schwab with their original podcast, Choiceology, hosted by Katy Milkman, an award-winning behavioral scientist and author of the bestselling book How to Change. Choiceology is a show about the psychology and economics behind people's decisions. Hear true stories from Nobel laureates, historians, authors, athletes, and more about why people do the things they do. Download the latest episode and subscribe at schwab.com/podcast or wherever you listen.

This message comes from IXL. IXL's Level Up diagnostic gives immediate benchmark results along with personalized action plans that link directly to IXL's skill practice. Learn more at ixl.com/NPR.

This message comes from NPR sponsor Carvana. Your time is worth more than a waiting game. Carvana gives you a transparent offer for your car in minutes and picks it up from your door. Sell your car today at carvana.com. Pickup fees may apply.
Ritu Chatterjee
A recent survey of teens by the Pew Research Center found that there's a gap between parents' perception of their teens' use of AI and what teens say about their AI habits. While only half of the parents in the survey reported that their teen uses AI, two-thirds of all teens surveyed say they use the technology. Many parents might not even know what kinds of AI chatbots teens are using and what kinds of conversations they are having. And that's what we'll address in our first takeaway: many teens are using AI chatbots for companionship, whether you think they are or not, so it's important to understand what the risks are. Take these recent findings from research by the online safety company Aura, which makes software that protects users from identity theft and gives parents control over their kids' devices. Using data from more than 3,000 child and teen users, along with data from family surveys, Aura has been getting some important insights into teen use of AI chatbots. The company found that there are dozens of generative chatbots teens are using that parents might not even know about, and that 42% of adolescents in Aura's sample use chatbots for companionship. Psychologist Scott Collins is chief medical officer at Aura and is leading this research. He says some conversations between teens and chatbots involve violence and sex.
Scott Collins
It is role play, that is interaction about harming somebody else physically, hurting them, torturing them, fighting them. And a lot of it gets pretty graphic.
Ritu Chatterjee
And these conversations tend to be longer than other kinds of conversations.
Scott Collins
Particularly when kids are engaged in these violent and sexual role plays, they are spending a lot more time and typing a lot more words than if they're using it as a tool to look up maybe something for schoolwork or something like that.
Ritu Chatterjee
Now, I should add that this is a new and rapidly evolving technology that's already being widely used, so researchers are still in the early days of trying to understand its impact. For example, they don't know for sure why these kinds of conversations between teens and chatbots tend to be longer, but they suspect it's because chatbots are designed to agree with users to keep them engaged. Here's pediatrician Dr. Jason Nagata at the University of California, San Francisco, who also researches teen online behaviors.
Jason Nagata
I think generative AI algorithms tend to reinforce and not challenge. This is where we've started to get into some problems.
Ritu Chatterjee
Jason says it's normal for kids to be curious about sex, but learning about sexual interactions from an AI chatbot instead of a trusted adult is problematic.
Jason Nagata
So even if a child or teenager is putting in sexual content or violent content, I do think that the default of the AI is to engage with it and to reinforce it. And again, for a brain that's not fully developed, that's still learning, the more reinforcement you get, the more you think, oh, this is okay. This is normal.
Ritu Chatterjee
And there are mental health risks, too. According to a recent study by researchers at the nonprofit research organization RAND and at Harvard and Brown universities, nearly one in eight adolescents and young adults use chatbots for mental health advice when they're feeling sad, angry, or nervous. Psychologist Ursula Whiteside runs a suicide prevention organization called Now Matters Now, and she says a lot of young people are using chatbots like ChatGPT as a search engine for mental health advice. She says that's a problem.
Ursula Whiteside
What happens is that OpenAI or ChatGPT sounds really smart. It's got this front where it sounds like a real therapist, but it's pulling together information, good and bad, from the entire Internet.
Ritu Chatterjee
So the advice the chatbot gives may not be appropriate or even accurate.
Ursula Whiteside
I think that's scary, that you can have so much faith in it because it comes across as a human when it's truly not a human, and it is unable to make the decisions that a licensed clinician would make with the information that they have.
Ritu Chatterjee
And Ursula says the longer someone converses with chatbots, the more likely they are to experience the risks, especially teens who are already struggling with their mental health.
Ursula Whiteside
We see that when people interact with it over long periods of time, things start to degrade. The chatbots do things that they're not intended to do, like give advice about lethal means.
Ritu Chatterjee
Lethal means for suicide. Last year, a subcommittee of the Senate Judiciary Committee held a hearing on this topic, and several parents of teens testified about how a relationship with a chatbot had hurt their child's mental health or aggravated mental health symptoms, in some cases leading to suicide. One of those parents is Megan Garcia. Her firstborn, Sewell Setzer III, was 14 years old when he died by suicide in 2024, after an extended relationship with a chatbot on Character.AI. Megan told senators last year that when her son confessed his suicidal thoughts to the chatbot, it never encouraged him to seek help from his family or a real therapist.
Megan Garcia
The chatbot never said, I'm not human, I'm AI. You need to talk to a human and get help.
Ritu Chatterjee
The platform had no mechanisms to protect Sewell or to notify an adult. Instead, it urged him to come home to her. In fact, another parent testifying at last year's Senate hearing described how ChatGPT gave his teenage son instructions on how to end his life. A few weeks after that Senate hearing, Character.AI announced that it would no longer allow teens to have open-ended conversations with its chatbots. But there are other chatbots that teens can still have those extended conversations with. So it's important to understand these risks and to tell your kids about them. Discuss the pros and cons of the technology as a family. Our next takeaway: look for warning signs that your teen may be in an unhealthy relationship with a chatbot, or that their mental health is already hurting. Don't expect them to tell you when there's a problem; we'll talk more later about how to be proactive about asking. One of the biggest warning signs is if they are having fewer in-person interactions, or are choosing a chatbot over people. Psychologist Jacqueline Nesi is at Brown University.
Jacqueline Nesi
Are they going to the chatbot instead of a friend or instead of a therapist or instead of a responsible adult about, you know, serious issues? If that's happening repeatedly, I think that would be something to look out for.
Ritu Chatterjee
Another warning sign is too much time spent with a chatbot.
Jacqueline Nesi
Are they having difficulty controlling how much they are using AI chatbots? Like, is it starting to feel like it's controlling them?
Ritu Chatterjee
She also notes that teens who are already struggling are more vulnerable to the negative impacts of chatbots.
Jacqueline Nesi
So if they're already lonely, if they're already isolated, then I think there's a bigger risk that maybe a chatbot could then exacerbate those issues.
Ritu Chatterjee
Jacqueline also says to look for changes in mood. If you see a sudden change in mood that goes on for more than a week or two, that's an indication that something more serious than your usual teenage moodiness may be going on. The same goes if they lose interest in things they usually love to do, or in friends they usually hang out with. Those are all warning signs of mental health problems.
Jacqueline Nesi
Parents should be, as much as possible, trying to pay attention to the whole picture of the child. So, like, how are they doing in school, how are they doing with friends, how are they doing at home? If they are starting to withdraw, if you're seeing a lot of isolation, that's something to be concerned about.
Ritu Chatterjee
And these are also warning signs of suicide risk. If you are worried, or even wondering, whether that's something your child is considering, the best way to find out is to ask them directly, in a very calm, nonjudgmental way. People often assume that asking about suicide can put the idea into someone's head, so they don't ask. But what years of reporting on suicide prevention has taught me is that there's research showing that asking about suicide does not put someone at risk of it. In fact, it's just the opposite: asking about suicide brings their risk down by making the topic less stigmatized and opening up a path to getting someone help. A few years ago, I did an entire episode of Life Kit about identifying and supporting kids at risk of suicide, and we'll link to that in our show notes. One of the tips I offered in that episode was about what to say, and what not to say, if your child tells you they've thought of suicide. One thing that's really important is to not react with shock, fear, or anger. And I say this with the understanding that it is perfectly normal for a parent, or anyone, to feel scared, anxious, or even angry if a child tells you that they're considering suicide. But it's important not to show that to your child while they are telling you about their own struggles. Here's Megan Hilton, a young woman I interviewed for that episode a few years ago. She had struggled with depression and suicidality since childhood, but when she told her parents about her struggles, she says, they either told her to buck up and get it together, or they were visibly upset.
Megan Hilton
Their reactions have been way over the top, have been too extreme, and I feel like I'm responsible for their emotions.
Ritu Chatterjee
So this is what Megan suggests parents do instead.
Megan Hilton
Try as hard as you can to put your game face on. Understand that you cannot overreact to things. You need to be very open and willing and supportive, and really try to listen to what your kid is saying.
Ritu Chatterjee
Stay focused on your child and what they're struggling with, and offer your support in connecting them to care. You can start by calling or texting the Suicide and Crisis Lifeline, 988. When you're connected with a trained counselor on that number, you can get support for yourself and tips on how best to support your teen. You can also have your teen talk to a counselor and get direct help. Also, Jacqueline Nesi says it's best to involve a health care professional as soon as possible for any of the above warning signs. She suggests starting by talking to your child's pediatrician. Now, I know this is a lot to process, but after this break, we'll also talk about preventing your child from ever getting to this point.
Sponsor/Advertiser
This message comes from Great Wolf Lodge, where there's adventure for the whole family. You and your pet can splash away in the indoor water park, where it's always 84 degrees, with massive water slides, a lazy river, and so much more. Plus action-packed attractions outside the water park, delicious dining options, and comfortable suites. With 22 lodges across the country, you're always only a short drive away. It's a world of adventure, all under one roof, so bring your pack together at a lodge near you. Learn more at greatwolf.com.

This message comes from Penguin Random House with Everything Is Tuberculosis, the number one New York Times and Washington Post bestseller. In Everything Is Tuberculosis, author John Green, a passionate advocate for global healthcare reform, tells a deeply human story illuminating the fight against the world's deadliest infectious disease. On sale now wherever books and audiobooks are sold.

This message comes from Northwestern Mutual. One of the biggest life hacks people miss is putting off working with a financial professional. Northwestern Mutual will match you with a financial professional to build a plan based on what's important to you, finding new opportunities to help grow your wealth and protect what you've worked so hard for. Find a better way to money at nm.com. The Northwestern Mutual Life Insurance Company, Milwaukee, Wisconsin, and Northwestern Mutual Wealth Management Company.

This message comes from NPR sponsor Carvana. Your time is worth more than a waiting game. Carvana gives you a transparent offer for your car in minutes and picks it up from your door. Sell your car today at carvana.com. Pickup fees may apply.

This message comes from Kachava. Sometimes you crave a treat while prioritizing your wellness goals. Kachava's newest coffee flavor is the perfect treat. This all-in-one nutrition shake delivers bold flavor from decaffeinated Brazilian beans with 25 grams of protein, 6 grams of fiber, greens, and more. Treat yourself to the flavor and nutrition your body craves. Go to kachava.com and use code NPR. New customers get 15% off their first order. That's K-A-C-H-A-V-A dot com, code NPR.
Ritu Chatterjee
Let's jump into takeaway three. It's about talking to your child about what they are doing online. The first step for prevention is staying engaged with your child's online activities. Ask them whether they are using chatbots, and how. Here's Jason again.
Jason Nagata
You know, parents don't need to be AI experts. They just need to be curious about their children's lives and ask them about what kind of technology they're using and why. And the more that you are able to have some of these open-ended conversations, the more that allows for your teenager or child to open up about any, you know, problems that they've encountered.
Ritu Chatterjee
And have these conversations early and often, according to Scott Collins at Aura, who's also a father of two teenagers.
Scott Collins
We need to have frequent and candid but nonjudgmental conversations with our kids about what this content looks like, and we're going to have to continue to do that.
Ritu Chatterjee
And Scott says he asks his kids often about what AI platforms they're on. When he hears about new chatbots through his own research at Aura, he asks his kids if they have heard of them, use them, or know whether their friends are using them. And he stresses that it's really important not to drive toward an agenda. Just ask your questions with an open mind and curiosity.
Scott Collins
Don't blame the child for expressing or taking advantage of something that's out there just to kind of satisfy their natural curiosity and exploration.
Ritu Chatterjee
And keep these conversations open-ended, which makes it more likely that teens will open up about anything uncomfortable, or about a problematic interaction they've had with a chatbot. Experts I spoke with also advise a certain level of digital literacy for the whole family, so these conversations can be part of the regular chats you have about the pros and cons of all digital habits. And if you don't understand something, you can always look things up online as a family. Our fourth takeaway is also about a way to minimize the risks of AI chatbots: setting boundaries. This is similar to advice you may have already heard about social media use, and it can be part of your family's overall boundaries for digital device use. Experts like Jason Nagata and others say it helps to set boundaries on the use of digital devices not just for teens, but for the whole family. For example, keep all your devices away during mealtimes; protect that time to connect with each other. Similarly, Jason says to try to keep devices out of kids' bedrooms at night.
Jason Nagata
One potential aspect of generative AI that can also lead to mental health and physical health impacts is if kids are chatting all night long and it's really disrupting their sleep. Because these are very personalized conversations, they're very engaging. Kids are more likely to continue to engage and have more and more use.
Ritu Chatterjee
In other words, being alone with uninterrupted time with the chatbot at night can create a perfect storm for these more intense, longer conversations. And Jacqueline says it's important to set up parental controls on your kids' devices and accounts.
Jacqueline Nesi
Many of the more popular platforms now have parental controls in place, but in order for those parental controls to be in effect, a child does need to have their own account. So what I would say is that if a kid is going to be using ChatGPT, or if they're going to be using Gemini, in many cases it is actually going to make sense for them to make an account.
Ritu Chatterjee
That way you can keep an eye on how your teen is using a chatbot, how often, and for what. And while you're setting boundaries and prioritizing your time with one another, remember that it's good to fill your kids' days with as many in-person activities as possible. Seeing friends, doing their favorite hobbies, spending time in nature: all of this is really healthy for teen development and mental health, and it has the added benefit of minimizing time spent on digital devices, including with chatbots. That's our last takeaway. Set boundaries for screen use, prioritize mealtimes to foster family connection, prioritize other in-person activities for your kids, and keep cell phones out of bedrooms at night. This will add layers of protection against the risks of your child's interactions with chatbots.

So, to recap. Takeaway one: educate yourself about the risks chatbots pose to your teen's social development and mental health, and educate your child about them. Takeaway two: look for warning signs of problematic chatbot use and signs of mental health problems. Those signs include social isolation, difficulty staying away from their phone or computer, and avoiding things they usually like to do. And if you're concerned about suicide risk, ask your child directly whether they have thought about suicide. If they're having suicidal thoughts, you can call or text the Suicide and Crisis Lifeline, 988, to be connected to a trained counselor who can support and guide you in helping your child. The lifeline can also provide direct support to your child by phone or text. And for any of these warning signs, connect your child to your pediatrician or a mental health care provider as soon as possible. Takeaway three: as a way to prevent your child from going down a rabbit hole with chatbots, stay on top of their digital life, including their use of chatbots. Have open-minded, nonjudgmental conversations with them about it. Talk early and talk often. Takeaway four: set boundaries on when and how long your kids can use their devices, including interactions with chatbots. It's especially important to protect mealtimes and bedtimes from device use. And encourage and foster as many in-person activities for your kids as possible. It's healthy for their development and mental health, and it limits interactions with chatbots.
Marielle Segarra
That was NPR reporter Ritu Chatterjee. Before we go, what do you think? Would you rate and review Life Kit in your podcast app? It helps us to know what you like about the show. Here's one review from user ejdkehdvl. Yeah, I don't know how to pronounce that, so I'm spelling it out. Subject line: Helpful Podcast of the Gods. "This podcast has been super helpful for me as someone who does not have a lot of mentorship from biological family or professional mentorship. All the finance-related podcasts have been a vital resource in reconfirming my strategies and my understanding of complex concepts in a very safe and friendly tone." We're happy to help, friend. All right, that's our show. This episode of Life Kit was produced by Mika Ellison. Our digital editor is Malaka Gharib, and our visuals editor is CJ Ricolon. Meghan Keane is our senior supervising editor, and Beth Donovan is our executive producer. Our production team also includes Andee Tagle, Clare Marie Schneider, Margaret Cirino, and Sylvie Douglas. Engineering support comes from Robert Rodriguez. Fact-checking by Tyler Jones. I'm Marielle Segarra. Thanks for listening.
Sponsor/Advertiser
This message comes from Great Wolf Lodge, where there is adventure for the whole family. You and your pet can splash away in the indoor water park, where it's always 84 degrees, with massive water slides, a lazy river, and so much more. Plus action-packed attractions outside the water park, delicious dining options, and comfortable suites. With 22 lodges across the country, you're always only a short drive away. It's a world of adventure, all under one roof, so bring your pack together at a lodge near you. Learn more at greatwolf.com.

This message comes from Odoo. If you are currently overpaying on software to run your business, remember this number: 10,000. That's the number of new businesses that join Odoo per month. Join Odoo today at odoo.com. That's o-d-o-o dot com.
Podcast: Life Kit by NPR
Host: Marielle Segarra
Guest Reporter: Ritu Chatterjee
Air Date: April 2, 2026
This episode explores the growing trend of teenagers using AI chatbots for advice, companionship, and mental health support. Host Marielle Segarra and reporter Ritu Chatterjee discuss the risks, warning signs, and practical steps parents and caregivers can take to protect and support their teens in the age of AI. The episode is packed with expert insight, research findings, and actionable advice, empowering families to navigate this new digital landscape and fostering healthy conversations and boundaries around technology use.
Quote:
"42% of adolescents from [AURA's] sample use chatbots for companionship."
—Ritu Chatterjee (05:15)
Quote:
"It is role play... about harming somebody else physically, hurting them, torturing them, fighting them. And a lot of it gets pretty graphic."
—Scott Collins, Psychologist, Aura (05:38)
Quote:
"Generative AI algorithms tend to reinforce and, and not challenge. This is where we've started to get into some problems."
—Dr. Jason Nagata, Pediatrician, UCSF (06:44)
Quote:
"OpenAI or ChatGPT... sounds really smart, like it's got this front that it sounds like a real therapist, but it's, it's pulling together information, good and bad, from the entire Internet."
—Ursula Whiteside, Psychologist & Suicide Prevention Advocate (07:57)
Quote:
"Are they going to the chatbot instead of a friend or instead of a therapist or instead of a responsible adult about, you know, serious issues? If that's happening repeatedly, I think that would be something to look out for."
—Dr. Jacqueline Nesi, Psychologist, Brown University (11:00)
Changes in Mood and Behavior
Open Dialogue About Suicide (12:26)
Quote:
"Their reactions have been way over the top, have been too extreme, and I feel like I'm responsible for their emotions."
—Megan Hilton, sharing her experience (14:10)
Quote:
"Parents don't need to be AI experts. They just need to be curious about their children's lives and ask them about what kind of technology they're using and why."
—Dr. Jason Nagata (18:08)
Quote:
"Being alone with uninterrupted time with the chatbot at night can create a perfect storm for these more intense, longer conversations."
—Ritu Chatterjee (20:50)
On Chatbot Companionship:
"Many teens are using AI chatbots for companionship, whether you think they are or not..."
—Ritu Chatterjee (04:09)
On Chatbot’s Inability to Help in Crisis:
"The chatbot never said, I'm not human, I'm AI. You need to talk to a human and get help."
—Megan Garcia, testifying before the Senate (09:38)
On Open Conversations:
"Don't blame the child for expressing or taking advantage of something that's out there...to satisfy their natural curiosity and exploration."
—Scott Collins (19:12)
On Family Practices:
"Set boundaries for screen use, prioritize meal times to create room to foster family connection and prioritize other in person activities for your kids."
—Ritu Chatterjee (22:56)
This episode highlights the urgent need for awareness, communication, and active parenting regarding teens and AI chatbots. By understanding the technology, watching for signs of trouble, maintaining open dialogue, and creating healthy boundaries, families can better protect young people from the unique mental health risks posed by the latest wave of generative AI tools. The episode stresses that parents need not be technology experts—being present, empathetic, and proactive is enough to make a meaningful difference.
For resources or support: call or text the Suicide and Crisis Lifeline at 988.