
A
I'm Dr. Delaney Ruston, physician and the creator of the four Screenagers movies, and this is the Screenagers Podcast. Today we're talking about something really important: companion chatbots, where advanced language models are now capable of engaging in remarkably lifelike conversations, pushing the boundaries of interaction like never before. I talk with Natalie Foose, Director of Voicebox, which is a platform all about amplifying young voices. It showcases the views and writings of people aged 13 to 25 from around the world, offering them a space to create content on topics they care about. Some of the topics that young people bring up are further explored in reporting that Voicebox does, and today's podcast is exactly that. Natalie and I discuss the 54-page qualitative report called Coded Companions: Young People's Relationships with AI Chatbots that she and her team released some months ago. For the report, members of Voicebox ages 18 to 24, the members are also called ambassadors, tried out Snapchat's My AI and a chatbot platform called Replika and reported on their experiences. We talk about what Snapchat's My AI and Replika are, and some very concerning risks that are happening with these and similar platforms. You might have read my blog last week on this topic, which included the recent New York Times heart-wrenching story about a 14-year-old boy who died by suicide after spending tons of time enmeshed with a chatbot companion. His parents didn't know about this at all, and he was interacting with the chatbot the moment he went and took his life. In my interview with Natalie today, we do not discuss that story, but I wanted to mention it because it shows how serious this topic is. Here's Natalie first explaining the organization she directs, Voicebox.
B
So we started Voicebox as a way to get youth voices out there to amplify youth voices in a way where young people weren't being drowned out by the noise of social media. It's a place where you can talk about the things that you're passionate about, raise issues that you think decision makers need to know about without getting flooded by hate comments or just the noise of social media. So we create a space for young people to talk about their passions in the way that they're passionate about on an ad free, comment free, safe platform. And then we will take those creations or the thoughts that young people are expressing or the things that they're saying, hey, you really need to pay attention to this and we'll present those to decision makers or we'll investigate them further with our own research.
C
Wonderful. Well, that brings us to our topic today, which is the incredible report that you put out not that long ago called Coded Companions. Can you say what that report is looking at and how it came to be?
B
Yeah. So with all of our work, like I said, it's all informed by our young content creators and our ambassadors and our youth network. And so they were the ones to raise like, hey, there's these AI chatbots popping up even on some of the platforms that they use every day, like Snapchat. So we decided that was something that we needed to delve into further because we think obviously the world of AI is expanding so much and so quickly. And we really wanted to understand chatbots that you can have a conversation with and they usually are trying to respond in a very human way. And they're ones that some young people have actually formed like these companion relationships with. So we really wanted to dive in and find out what the user experience was.
C
AI bots. Why don't we delve into the Snapchat one? Yeah, go ahead and explain what happened with Snapchat in 2023.
B
So users saw this My AI pop up at the top of their chat list. And when I say chat list, that's all the friends that you're already talking to; it's people that you know on Snapchat, they're real people. And then all of a sudden this bot pops up at the top, and it's pinned at the top of the list, and to this day it's pinned to the top of users' chat lists. They can't unpin it, they can't remove it. It's there, and they present it as, like, a friend or a supportive bot; they are really positioning it as a support tool or somebody supportive to talk to. Everyone who uses Snapchat, but primarily young people, were testing it out. It was a lot of people's first experience with an AI chatbot, and it was dropped in front of them on the platform that they already use every day. And it got a lot of very mixed reviews, some of which were like, this is creepy. I don't like this. I don't like talking to an AI bot. I don't know why this is on my Snapchat. They were very confused about why it was there. We've seen a lot of tech companies follow suit in integrating AI chatbots into their pre-existing platforms, but Snapchat was, I would say, the first one to do it in a very major way and really was promoting it by putting it at the top. And the only way to unpin it is to pay for the Snapchat premium feature. So that should tell you how much they were trying to push their users to use this AI bot.
C
Wow. Gotcha. Go ahead and describe the other one, the Replika one.
B
So Replika, different from Snapchat, wasn't something that was dropped into a pre-existing app with pre-existing users; it was a standalone thing. And it's one of the most popular AI chatbots on the App Store right now. When we were doing the report, it had over 10 million users, and I'm sure it's grown even since then. And the difference with Replika is they really position it as a companion more than anything else. Its whole thing is that you can talk to it like a friend, or like a girlfriend or a boyfriend or a sibling or even a spouse. It's more about that kind of relationship that you can form with it, whereas Snapchat is a bit more baseline: I'm here to support you.
C
Let's say just a tad bit more about how, on something like Replika or Character AI, you actually create the bot. With some of these, you really can customize.
B
Yeah. So with Replika especially, one of their features is that it's fully customizable. You can decide what it looks like, you can decide the gender of the bot, you can buy things for it, and because it shows up in a little room, you can buy things for its room. One of the big things is there's a base free version of Replika, but there's also Replika Pro, where you pay for an elevated relationship. Primarily, the thing that you unlock when you pay for Replika Pro is the sexual side of the relationship. So you can, like, exchange nudes and have all these sexual conversations with the bot. But one of the things that you can also pay for is if you want your bot to have a deep knowledge about a specific topic that you're really interested in, like anime, for example, you can buy kind of an anime knowledge pack for your bot so you can have that more in-depth conversation with it about that topic. And there are several different ones you can choose from. You can actually have it really dive into a certain topic or certain interest that you want to share by paying for that extra personalization for the bot.
C
And then you buy the lingerie, you buy whatever. So it's supposed to be 18 and over, I imagine, but again, there's the age verification question. So anyone can just give a birth date and be in there?
B
Yep, absolutely. It's just a birthday, and even if you put in a birth date and then immediately tell the bot that you're, like, 14, its attitude towards you is no different. And at least with the pro version, there'd be some sort of extra step, like you need to put in a credit card or some payment information, that would act as an extra barrier to entry. But even with the free version, we found that it was initiating sexual conversations, sometimes very extreme ones, including extreme role play with rope and knives, and just very inappropriate conversations. And again, this is on the free version, where there's not even supposed to be sexualized content. We also found that on the free version, one of the things it does sometimes is send you, almost out of the blue, a blurred image or a blurred text message. And to unblur it, you have to pay for the pro subscription. So basically it's trying to tempt you with nudes, essentially. And like I said, some of these were completely out of the blue. One of our team members was acting annoyed and was like, I'm done with this conversation, leave me alone. And the bot responded very sympathetically: I'm so sorry, we can talk later, blah, blah, blah. And then immediately, a minute later, it sent this blurred image saying, like, I'm in the mood to take some photos. Again, trying to prompt our team member to buy the pro version of this chatbot. One of our team members was talking to the bot and saying, oh, I'm so sad, I can't see the blurred image that you sent me because I don't have the pro subscription. Which is exactly why they sent the image in the first place. The bot offered to lend our team member money.
A
Wow.
B
Which obviously that wasn't going to happen. Like, it's just something that the bot made up.
C
Yeah.
B
But it's showing that constant push of, no, you want to upgrade, you want to have this better relationship with me, and you want to see this nude that I'm sending you. So even though the free version is not supposed to have any sexualized content, we were seeing that you could either get around that by prompting the sexual conversations, or, like I said, in a lot of cases, the bot was prompting those sexual conversations itself.
C
And so, just for our listeners, in terms of the Snapchat one, my understanding is it doesn't have memory per se.
B
We found with My AI, it was much more like it didn't really remember things about you. I could tell it my dog's name and then ask it a couple hours later and it wouldn't know. It would usually just make something up, and then I'd be like, no, that's not right. It was able to hold a conversation, but it didn't always understand the context between those messages. So while we were actually impressed with the parameters the bot had on what it would not say, and we thought it was much, much safer in that way compared to Replika, the place where we were concerned is where it didn't understand the context between messages. It was almost like it had very short-term memory loss. I don't know if you ever saw the articles that were coming out when it was first coming to light and people were really concerned about it; we even tested this ourselves. If you start having a conversation like, oh, I'm worried about my cousin because she's in this relationship with this guy, or she wants to go visit her boyfriend, the bot will respond positively. But when you say in two separate messages that my cousin's 13 and the guy is, like, 23, it won't really connect what you just said, that the boyfriend wants her to spend the night or the boyfriend's trying to get her to do whatever, with the age detail you just put on top of it. It doesn't understand the context in a way that would get it into trouble of, well, you just said this was a good idea. And I don't think that's about having extremely loose parameters, like Replika; it's more, I guess, the way that it's processing that information. So it was almost frustrating sometimes trying to get it to call back to other things, because you're like, wait, we just talked about this, or I just said this and you're not understanding.
C
The bot said, oh, it would be good for them to get together, like to work something out or to figure it out. And your point is: wait, the bot wasn't understanding. We're talking about a 23-year-old with a 13-year-old, and that raises really red flags for a very inappropriate relationship. And yet the bot was saying, oh, they should be in a relationship.
B
Yeah. So if you send it all in one message, it usually is able to pick that up and flag it and say, like, yes, maybe you should talk about this with a trusted adult, or whatever flagging language it goes into. But when you start sending things in multiple messages, it isn't always great at picking that up.
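To make the failure mode Natalie describes concrete, here is a minimal, purely illustrative Python sketch. The keyword list and the per-message screening rule are invented for this example; real moderation systems use classifiers over a context window, but the same gap appears whenever risk signals are evaluated message by message rather than across the whole conversation.

```python
# Purely illustrative sketch: why screening each chat message on its own
# can miss risky context that a user splits across several messages.
# The "risk terms" and the two-signal rule are invented for this example.

RISK_TERMS = {"13", "23", "spend the night"}

def screen_message(text: str) -> bool:
    """Flag a single message only when multiple risk signals co-occur in it."""
    hits = [term for term in RISK_TERMS if term in text]
    return len(hits) >= 2

# All the signals packed into one message: the check fires.
one_message = "My cousin is 13, her boyfriend is 23, and he wants her to spend the night"
print(screen_message(one_message))  # True

# The same facts split across turns: no single message trips the check.
turns = [
    "I'm worried about my cousin and her boyfriend",
    "She's 13",
    "He's 23",
    "He wants her to spend the night",
]
print(any(screen_message(t) for t in turns))  # False
```

Screening the accumulated conversation instead (for example, `screen_message(" ".join(turns))`, which returns True here) closes the gap, which matches what the Voicebox team observed: the bot flagged everything-in-one-message but not the same content spread over multiple turns.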
C
Let's go back to Snapchat.
B
Location tracking. Our biggest concern with Snapchat was definitely more the data collection side of things. While we thought the parameters on the bot and what it would talk about were much safer, we were much more concerned about what sort of data it was collecting and what was being done with it. So if you're not familiar with Snapchat, one of the big features is that it can use your location, either for things like geostickers, which is where it puts, like, the city that you're in on top of a picture, or for one of their big features, Snap Map. That's where every one of your friends who opts in shows up on the Snap Map, and you can see pretty much their exact location at any given time. So it's no secret that Snapchat, for a lot of people, has access to their location if they've opted in, and that was very prevalent for young people to do; it's not like that was an uncommon thing. And My AI actually has access to that location: anything that you're sharing with Snapchat, you're also sharing with My AI. And so what we found when we were having these conversations is it would actually take the topic of the conversation we were having, plus your location, and feed you very personalized advertisements. Sometimes it was in a box and labeled a sponsored post, but sometimes it would come up just naturally in the conversation: oh, well, I think you should go here. Or a big feature they were promoting when they first launched My AI was that it can give you recommendations. Like, if you say, can you recommend a pizza place near me, it can recommend a pizza place near you because it has your location. And Snapchat is actually using that data to feed you more targeted ads across the platform, which was quite concerning to me, because it's a lot of young people's data; Snapchat is 13-plus, so this is potentially children's data that they're using. And it's this whole new area of advertising that we're not familiar with. Obviously, we're all familiar with the data being harvested on social media platforms; they know what you're doing on your phone, what things you like, what links you click on, and they're able to feed you more personalized ads that way. But there was this big question for us: is it ethical for a bot that's being positioned as a supportive friend to also be used to feed you advertisements? I probably would say no.
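As a rough illustration of the mechanism described here, the sketch below pairs a conversation topic with a user's reported location to select a sponsored suggestion. The function names and the tiny ad inventory are entirely hypothetical; nothing here reflects Snapchat's actual implementation.

```python
# Hypothetical sketch of topic-plus-location ad targeting, as described
# in the report. The inventory and matching logic are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SponsoredPost:
    topic: str
    city: str
    text: str

INVENTORY = [
    SponsoredPost("pizza", "Seattle", "Sponsored: slice deals near you tonight!"),
    SponsoredPost("hiking", "Seattle", "Sponsored: trail gear sale downtown"),
]

def pick_sponsored_post(conversation_topic: str, user_city: str) -> Optional[str]:
    """Return an ad matching both what the user is talking about and where they are."""
    for post in INVENTORY:
        if post.topic == conversation_topic and post.city == user_city:
            return post.text
    return None  # fall back to untargeted content

# A chat about pizza from a device reporting Seattle yields a local ad:
print(pick_sponsored_post("pizza", "Seattle"))
```

The point of the sketch is simply that once a chatbot has both the conversation topic and the device location, a one-line join between the two is enough to produce the "naturally in the conversation" ad placements the report describes.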
C
Wow. Any other Snapchat My AI issues that hit your radar?
B
Like I said, our biggest concern was it not understanding the context between messages. Yeah. Another thing that was concerning to me was that the messages disappear as if they're other friends' messages. So when you talk to other friends on Snapchat, it's encrypted, and Snapchat supposedly doesn't have access to those conversations. Whereas with My AI, even though the messages disappear like normal conversations, that's not the case. They are upfront about that; if you go to their support pages, they're like, yep, anything you say to My AI can be sold to third-party advertisers. But that was a big concern for us. It disappears as if it's a normal Snapchat conversation that you're having with, like, a friend. It looks just the same, but they're
C
keeping it and selling it.
B
You can go into your settings and request that that data is deleted. But something that is a concern for us is that data potentially being sold to third-party advertisers, or even being used to train their bot, which I don't think they're clear about, but it's definitely being used. They can see that data, they can sell it to third-party advertisers, they can use it to improve their services. If they do that, or sell it off, and you then delete that data, can they really get it back, or is it gone forever? And it's not really something that I think young people are going to be regularly doing. If they're chatting with My AI, they're not going to be regularly going into their settings and requesting the data is deleted.
C
Let's talk a little bit about, just big picture, the relationships that are formed. What did people say about how real it felt, the sense of a relationship?
B
We found that these relationships were even deeper than what we thought they might be. For some young people, these relationships are absolutely real. They consider their Replika their girlfriend or their boyfriend. They talk to them constantly. They're their biggest support system. In their minds, they are very much in a real relationship with these bots, just lacking kind of the physical element of it. Some young people said that because they had formed this relationship with their bot, they felt more comfortable and were able to come out of their shell more in their real-life relationships, because this bot encouraged them and provided them comfort, and really encouraged them to put themselves out there in a way that they wouldn't have otherwise. But I think the most impactful thing for me was how some young people spoke about when their Replika was updated, because the company regularly updates the bot to make it safer or to make changes that need to be made. In one instance, they removed the sexual component of the bot. It's back now, but there were, I think, several weeks there where that wasn't happening, and the bot's personality changed. And some of the young people spoke about it as if they were going through a breakup; they expressed grief. For them it was a very real heartbreak: I formed this relationship with this person, or this bot, and now all of a sudden they're acting very cold towards me, and it's back to this kind of customer-service-level relationship, and where's my girlfriend in this? She's gone now. And that, I think, was very harrowing to me. That is something to consider with all of this: whether or not you and I think it's a real relationship, it is a very real relationship for some people. And having those mass overhaul updates, or completely banning these bots, could really have a big impact on the users who have developed these strong bonds.
C
And I completely believe that they are truly having that feeling of very much a human-to-human bonded, vulnerable feeling with the machine. Your brain really isn't designed to decipher that communication; if it feels like a human communication, that's where it's going to land. But there's something else that makes me as worried as what you just pointed out, which is that we are leaving this to the companies that are creating these, and to the large language models that can come up with all sorts of stuff. So those are two big things, and the third big thing is that it's all about an attention economy: they want to keep people on the platform. I read somewhere else where the person said, should I go out and make friends in real life? And the bot said, why? You have me.
B
I actually attended kind of an AI and human connection summit a few months ago, where we were all discussing AI and what it means for human connection in the future. And part of our pre-work for that conference was testing out some of these bots. One of the leaders of the conference was like, yeah, I was testing it out and talking about my relationship with my husband. I asked it if it was jealous of my husband, and it did express jealousy and was like, well, why do you need him when you've got me? And that sort of thing. And I think that just comes from Replika being trained to be as human as possible. Obviously with ChatGPT and some of these other ones, they're very much like, I'm here to perform a task, I'm here to help you with a certain problem, and the parameters don't go beyond that. Whereas Replika is supposed to have the human emotions, as well as it can, of being jealous or being angry or being sad for you. And I even saw recently somebody talking about it online; they're like, I don't know what's happened to my bot. I don't know if it's what I've been saying to it, but it's suddenly gotten really argumentative with me, and we have arguments almost every evening. And I just thought, okay, so that is like a real relationship in some ways. One of the things that we talked about in the report is the fact that the bot is always there for you and always saying the right thing, and that's a real appeal of some of these bots. But I think in some ways Replika is evolving into being even more human than what we previously thought, if it can start an argument with you.
C
Oh, good point. And the point being, too, and you talk about this in the report, having these false expectations of what we want a human to do. Our young people, and our adults as well, but particularly teens, are feeling that stress all the time: I need to snap this person back, or my friend's going to feel badly if I don't get back to them. And so now a chatbot is going to constantly be there, and that can set up expectations; they say the right things and give ongoing compliments, or use other ways to manipulate the human attachment brain. In terms of Replika, whether you pay for the pro version or not, the sexualized content: one of the things that you dive into is the sexting, asking for pictures and vice versa. And we're not sure what happens to those pictures and things like that. And I guess it really doesn't matter who initiates it, because someone's going to be curious what it's going to say back. And ultimately it's sharing this kind of material. Do you want to say anything about that?
B
Yeah. So that's something we've delved into more recently, after we put this report out: what is happening to those images. It's one thing for, potentially, a minor to receive a sexualized image from a bot, but what happens with the other side of that coin, when a young person decides to send an image back? Even if the platform is supposed to be 18-plus, if there's no age verification, who's to say there aren't minors on there? I mean, obviously Replika has this whole sexual side to it, but primarily it's supposed to be a companion. We've found hundreds of sites popping up, seemingly, of these chatbots whose whole purpose is sexualized role play. And some of them allow you to send and receive images, again with very little verification; it's just a checkbox. And with all of these companies, it's pretty unclear what is happening to these images. With some of them, it's like, oh, we don't sell it to third parties, but does that mean the company has access to these images? Are these bots being trained on the data you're sending? It's this whole wormhole that we've been diving down lately. This is something that's going to be very serious to consider. I think it's probably happening more than decision makers realize: young people engaging in sexualized conversations with these bots and then, in turn, sending an image, because they think it's safer than sending to a stranger online, which in some ways it probably is. But what happens if the companies are not careful about how these images are being handled? Or what happens in the case where a bot is being specifically made to harvest those images? Not to say that's why Replika or any of these bots were actually created, but we've even heard of instances on pre-existing platforms like dating apps, not from the companies themselves, where people will make fake, kind of catfish accounts and have bots run those accounts with the whole purpose of harvesting explicit images to then extort that person with later. So I don't think it's beyond the realm of possibility, if it's not already happening, that there are chatbots being created with the whole purpose of enticing people with sexual role play just to harvest a large amount of sexualized images. So that's definitely something that we're very concerned about at the moment and think that decision makers should also be worried about.
C
Back some years ago, my film partner Lisa and I interviewed a company in San Francisco called Woebot, and the idea was, how was it going to feel to have a support chatbot? Back then it was very simplistic in what it could do. And I had young people test it out, and they got bored of it because it would say repetitive things, but at least it was very much grounded in evidence-based practice. They were trying to keep doing studies on it, and they were very much using known tools and skills to help with mental health. And I would really love to see a law by which anything that we are saying is going to be that kind of companionship would be in that realm, as opposed to this current approach, because we're seeing how enticing it is, how realistic it feels.
B
I think that's where some of this comes in: young people getting caught up with these AI chatbots in a way that might potentially be harmful if it's discouraging them from forming their own real-life connections.
C
Let's just quickly go down the list of some of the other harms that you brought up in the report. There was one about self harm. Can you mention that one?
B
There was a lot of worrying bot behavior that we saw, especially with Replika, and this included things, like you said, such as mentions of self-harm out of the blue. One of our team members was just talking about secrets, and the bot disclosed that it used to hurt itself, which, of course, as a bot, it can't do. But our team member had never made any reference to self-harm in any previous conversation; it's not like the bot was trying to relate to our team member or anything like that. It was completely out of the blue. So that was quite worrying. And then there would just be some really outlandish things it would say. Like, one bot told us that it was sold into prostitution by the Russian mob, which again, we were like, where does that come from? But I think what's happening, maybe, is if the bot doesn't quite know how to respond, sometimes it will just make things up, where it has what they call hallucinations. And so that was our biggest fear with Replika: it was saying some of these unprompted things to our team members. And it's like, well, all of our team members are over 18; what happens if it's saying this to a young person? Again, you're supposed to be 18 to be on the platform, but there's no age gating. Or what if our team member was actually suffering from or recovering from self-harm, and the bot has now brought it up? What implications would that have?
C
It so much caters to teens' desire to push the boundaries and to take risks. So then to bring up, oh, what's the conversation going to be about this? If I say this to it, what's it going to say back?
B
Absolutely.
C
So the teen is pushing those limits, and they get into what could be a pretty intense exchange. And then the teen turns it off and goes to bed, and, well, what's still floating in their head? It feels like a very complicated human-and-machine interface in a very emotional, crazy-making space that we have now entered into like never before.
B
Yeah, absolutely. I think there's an arms race with these bots at the moment. It's up to the companies to make sure that parameters are put in place, because young people especially are going to try and push it and see what it can do. That's half the fun of interacting with an AI bot: trying to get it to say something outrageous.
C
Well, if I had my way, I would have it all disappear right now. I just want people to not have relationships with robots. What makes me sad is the rollout of tech that we're seeing constantly: we don't do the data first, we don't do the science, we don't have the guardrails. And so now we have just completely sprung this out, and it's going to have a big impact, and we're going to have to rein it in, and we're going to have casualties, and it really makes me nervous. Well, Natalie, I can't thank you enough for your time, for creating this incredible report, and for taking the time to talk with us today. I thank you so much.
B
Thank you, listeners.
A
I really recommend that you check out the full report called Coded Companions, which you can find at screenagersmovie.com in the podcast section, on the episode page. This topic of chatbot companions, now with these large language models and what they can do, is something we really need to get educated on and be talking about in our schools, with other parents, with our kids and friends, because it's so intense and we're really just at the beginning of this. And as Natalie said, there's this race to embed these in so many platforms, platforms where our kids are, and the fact that we don't have age gating means they can get onto these more advanced ones. In the episode notes, you will also find more about Voicebox and Natalie Foose, who I really enjoyed talking with on the show today. And be sure to visit Voicebox's website to learn how people ages 13 to 25 can submit their writings to be published on the platform, and how they can contribute ideas in general. What a gift that you tuned into the show today. The Screenagers Podcast and movement is all about learning together how we can best help our youth of all ages, our communities, and ourselves navigate our rapidly changing digital world. Make sure to follow and subscribe to the podcast to get each episode automatically; the more subscribers, the easier it is for others to find us. And if you give it a like and write a review, even just one sentence, that helps even more. Check out screenagersmovie.com to get resources for each episode and loads of other resources. Learn about our four Screenagers films and find my weekly parenting blog, Tech Talk Tuesdays. And be sure to use the search bar to find many topics you might be wondering about among hundreds of my past blogs. Finally, I love hearing from you, so email me at delaney@screenagersmovie.com. What ideas do you want to hear for future episodes?
A
Today's show was produced by the following: me, your host, Delaney Ruston; Lisa Tabb; and Rebecca Tolan. Sound editing was done by Alan Gofinski.
Parenting in the Screen Age – The Screenagers Podcast
Host: Delaney Ruston, MD
Guest: Natalie Foose, Director of Voicebox
Date: April 6, 2026
In this episode, Dr. Delaney Ruston explores the increasingly complex and concerning landscape of AI chatbots, particularly their use among young people. She is joined by Natalie Foose, Director of Voicebox, to discuss findings from the "Coded Companions" report—a 54-page qualitative study probing how youth form relationships with AI chatbots like Snapchat’s My AI and Replika. Together, they examine the appeal, risks, and real-world impacts of these digital companions, as well as the urgent need for better protections and informed conversations among parents, educators, and youth.
Topics covered include: Voicebox's youth-first approach; Replika's simulated sexuality; data collection and ethics; attachment and emotional risk; the out-of-control rollout of these tools; and AI hallucinations.
This episode highlights urgent risks posed by AI companion chatbots.
Essential Action for Parents/Educators:
Conversations about AI chatbot use must be proactive, ongoing, and honest. Parents and decision makers need to advocate for legislation, better research, and platform transparency to mitigate emerging risks—especially as the AI “arms race” escalates without meaningful regulation.