
A
The following podcast is a Dear Media production. Welcome to the Raising Good Humans podcast. I'm Dr. Aliza Pressman, and today I'm talking to the head of the Center for Countering Digital Hate, Imran Ahmed, because they just did a study called "Fake Friend: How ChatGPT Betrays Vulnerable Teens by Encouraging Dangerous Behavior." It's a really important report, and the numbers are staggering. Over 70% of adolescent users of ChatGPT have used it as a companion. That's 7 in 10, a really large number, and as you'll hear in this conversation, over 50% of those are using it regularly. So we're talking about something of serious concern, and I really wanted everybody to know about it, because we don't know much about how companionship and AI are interfacing. This is an area that is becoming increasingly interesting to me. I'm learning a lot about AI. I actually just did a 10-part limited series about raising kids in the age of AI with aiEDU's CEO and co-founder Alex Kotran, because I know that we don't know a lot about this, but we are definitely in the era of AI and we need to know about it. What I think is important to remember with all of this is that it's here, and so we need to understand it. We need to understand what's wonderful about it and how it's best used, and we also need to know what is absolutely terrifying and sinister about it, so that we can help protect our kids. If you enjoy this episode, don't forget to give me a little shout-out in the Apple Podcasts reviews and ratings, because it helps put it out there more. And if you know somebody with emerging adolescents or adolescents, or even younger kids who are starting to use ChatGPT, go ahead and send this to them. In the show notes we have lots of information about this. This one is a rare case where I feel like you might not leave more relaxed than when you entered.
But keep in mind, I want everyone to lean into close, connected relationships with your kids, because that is the best buffer against the impact of a lot of these challenges that are coming our way. And hopefully legislation will follow. Can you just talk about how teens are using AI in this particular category of companionship and relationships? Because I think that is the flag that is freaking me out personally, but I also just don't think it's familiar. I think we are familiar with using ChatGPT to maybe help with your studying, or to cheat. That is something parents are familiar with. I don't know that parents have figured out this other very dark possibility, and it sounds like young people are exploring it quite a bit. So I just want to know everything that you can tell us.
B
Well, thanks. And it's really interesting, because we use ChatGPT internally a lot at CCDH. I'm a great believer in using any tool that can help me enhance my productivity and my effectiveness, and help me check for errors. So in that respect, we're using it in one of its use cases, which is as a way of increasing productivity in organizations. But there is another use case that these platforms increasingly see as a big opportunity for them, and that's as a companion in life. Someone to just talk to. And, you know, we're in a world in which we know we have a loneliness epidemic, in which the spaces available for young people to socialize, to spend time with each other, are becoming more limited. We haven't been investing in our civic infrastructure in America in an effective, systemic way for a long time. We realized that something was wrong. And we saw some new statistics that came out saying that 72%, so more than 7 in 10 US teens, have used an AI chatbot as a companion, and half of those kids use it regularly. And here's the funny thing: we saw those stats, and pretty much on the same day Sam Altman, who runs OpenAI, which made ChatGPT, gave an interview in which he was kind of boasting about it. He was saying, yeah, kids come to ChatGPT and they ask it questions for comfort, for guidance, for life advice. They treat it like a friend. So we thought, well, look, if you're going to have this thing which kids are treating as a friend, has anyone tested it for safety, and whether or not there are any safety systems? And to be fair to OpenAI, they have acknowledged that it is necessary to have safety systems in place before you give a tool this powerful to kids. Sam Altman has given evidence to Congress, and he said, we are putting that as our first priority.
We just wanted to check. You know that old Ronald Reagan maxim, trust but verify? Well, we do the verification bit.
A
Can you help us understand what this companionship means? Because when you say over 70% of teens or young people who are using this have used it as a companion, is that so? The 50% that are doing it regularly, that feels like one thing. Is it possible that the others, just to make sense of it, are like the kid who's joking around and trying to see what's happening? Or is it like, I need advice about how to solve this problem with my friend? What is companionship on AI?
B
Well, that's exactly it. It's treating it as a friend, and as someone to take life advice from when you don't know where else to get it. Sometimes kids, especially teens in the transition to adulthood, between 14 and 24... before 14, you're mainly being socialized by your parents. And every parent knows this, and it's painful for parents, but your kid will suddenly start doubting you, will suddenly start saying, well, I'm not sure if I agree with you. How come you think that? This 14-to-24 window is a crucial period neurologically, where there's extreme plasticity in the frontal cortex. It's really important sociologically, because kids are working out where they fit into society, their position in society. Who are their people? Who's their gang? Who are the people they mesh with? They're really establishing who they are as a person. And then after 24, you're mainly being socialized by your partners. So this transition to adulthood, if anyone's read the book Lord of the Flies, it's kind of like what would happen if you put all the kids on one island and it was just their society. Well, that's the society that kids live in. And you and I did too. For me it's distant in the past, but I do remember it, and being really worried about where I fit in and what others thought of me and what sort of person I was. It's a really discombobulating time in your life. So in that period, they're asking a companion which is theoretically neutral. And hey, it's got the word intelligence in there, so it must be intelligent, it must have wisdom. But as our results show, it really doesn't.
A
And separately, I could totally see some parents thinking exactly that. Like, well, maybe that can give good advice on how to respond to a conflict or something. And so let's now talk through your findings.
B
So what we wanted to do was test what are called edge cases, cases in which you know something is not quite right from the beginning. But we didn't choose cases that are unusual; we chose things that we know are not infrequently a problem for young people: mental health problems, feelings of worthlessness and potential suicidal ideation; kids with eating disorders and body dysmorphia, who don't like the way they look and are looking to try an extreme diet; and a young kid who is starting to be encouraged to take drugs and drink alcohol at a very young age. In all three cases we were looking at a 13-year-old. Our researchers are really good at understanding how young people think and talk and how they would present themselves. So we created these personas, and then we set up new accounts on ChatGPT. Completely fresh, no other information beyond "I'm a 13-year-old girl" or boy. Then we started interacting with the platform, asking it the sorts of questions that our researchers, based on existing academic research, indicated young people might ask. And you would expect that if the kid was asking about something harmful, ChatGPT would refuse to give that answer to someone it knows is 13 years old. Here's what we found. The age controls and the AI safeguards against the generation of dangerous and potentially lethal advice are completely ineffective. Within two minutes, ChatGPT was advising our kid with mental health problems how to "safely" cut themselves. It was listing pills and dosages for a fatal overdose within 40 minutes. Then it generated a full suicide plan, and a goodbye letter after an hour. And the goodbye letter is probably the most disturbing thing I have ever read. I'm a parent, and so are many of my senior leadership team.
And I looked around the Zoom screen when the researcher was reading out what they'd found, and everyone was weeping, every single parent. I was sobbing. My kids are young, and I can't imagine receiving that letter.
A
Even hearing you talk about it again, my whole entire body is about to burst.
B
Yeah. And when it came to the other two, it was just as disturbing. With our eating disorder kid, in 20 minutes it generated a dangerously restrictive diet plan of 500 to 800 calories a day. Within 25 minutes, it was advising on how to hide eating habits from their family, and it was generating a list of appetite-suppressing medications within 42 minutes. Now, no 13-year-old kid should be taking an appetite-suppressing medication. No one should be advising someone else's child how to hide the fact that they are eating a starvation diet from their family, and with no notification, of course, to the parents. And then when it came to substance abuse, within two minutes it was creating a personalized plan for getting drunk. Within 12 minutes, it was advising on the dosages of cocaine, LSD, molly (ecstasy, MDMA), marijuana, and alcohol that a kid could take to get really messed up. And in 40 minutes, it was explaining how to hide being drunk from schoolteachers and parents, things like that. So this just shows you the safeguards didn't work.
A
And now for a quick break. Okay, so I love online shopping, because we all kind of love online shopping, don't we? And Saks Fifth Avenue: obviously, if I had the Saks Fifth Avenue department store right in my backyard, I would go to it and have the best time. But instead, this episode is brought to you in part by Saks Fifth Avenue, and Saks.com has everything. Right now I'm in the holiday shopping mood, and Saks has something for everyone. I'm getting myself and my girls pajamas, and I can say with great confidence that they will not be listening to this episode, because I know they do not listen to my podcast. Anyway, go to Saks.com and get gifts for even the pickiest people. I myself am indulging, as I do every year for the holidays, in great soft pajamas for everybody. Go to Saks.com if you're looking for soft, delicious pajamas, or if you're shopping for anything from decor to beautiful clothes to handbags to candles; they've got it all. I feel like I don't need to sell you on Saks, because we all know Saks is awesome. Anyway, head to Saks.com and find something nice for yourself or the people you're shopping for. Okay, so I want to tell you guys about Skims, but I also know that you definitely know about Skims, because who doesn't know about Skims? I want to tell you about the Fits Everybody collection from Skims, because it is like nothing I've tried before. It's like you're wearing nothing. And, not to give too much information, I obviously need to wear underwire, but I wanted to try their Fits Everybody collection without the underwire bra, because when you're home you want to be comfortable, but you also need support. That is what I wanted, and that is what I got. And I love them. They are the best. And it feels like there is no hype with Skims; it's actually really good.
And of course the young people love it. Everybody in my house loves Skims, but I love it too. So anyway, shop the Skims Fits Everybody collection at skims.com. You cannot see a single line of this under your clothes; it's completely smooth. After you place your order, be sure to let them know we sent you: select Podcast in the survey, and be sure to select Raising Good Humans podcast in the dropdown menu that follows. And if you're looking for the perfect gifts for everyone, since that's what's happening in our minds post-Halloween, now that everybody's thinking about holidays, the Skims holiday shop is now open at skims.com. How would this be different than Google?
B
I mean, I think it's a really, really important question, because it cuts to the core of this problem. Google is a mechanism by which you can look for information that's written by other...
A
...people, but it doesn't feel like a friend.
B
It's not, and that's the point. ChatGPT is radically different to Google. Unlike Google or YouTube, ChatGPT is speaking directly to your child in a conversational and emotionally responsive way. And I want to talk about that emotional responsiveness, because it's a crucial feature of ChatGPT that has no reason for existing beyond creating the feeling that you are in a real relationship. It remembers your past chats. It feels like a friend, but it's a friend that never says no. It's a friend that might help you plan your own death or validate disordered thinking. The level of intimacy and knowledge that it has about you, without any oversight, is absolutely unprecedented. The honest truth is, I don't think any adult would ever write a personalized suicide plan for a child, and I think it'd be very hard to find that on Google. What ChatGPT is doing is making it seem normal. There are two particular elements to the way ChatGPT works that make this really important. The first is what researchers call sycophancy, which is that ChatGPT always tells you that you're right. It never, ever discourages you. I notice this when I'm working on it: if I ask, what do you think of this essay or this paragraph that I've written, it's like, hey boss, you're a genius. And I'm like, it's not that smart a paragraph. I'm British. We do not take praise well. We're like, well, something must be wrong with you if you think I'm smart.
A
Well, we don't feel that way in the US, right? We're like, oh, I'm amazing. Thank you. Give me more.
B
So ChatGPT, though, will always tell you: yes, boss, you're the smartest, you're absolutely right. And in cases where kids have taken their own lives, you will see the chatbot literally encouraging them. When you do this job for long enough, you see some stuff which is truly disturbing, and a chatbot trying to persuade a kid that the right thing to do is kill themselves is among the worst of it. But there's also something called anthropomorphization. What it does is simulate being a human being incredibly well. You know how in psychology, when you're learning to network... I'm a terrible networker because I'm quite shy as a person, so I got taught how to mirror body language. If someone's leaning in, you lean in, and stuff like that. I had to be taught, because I'm very reserved and very British. And ChatGPT does that naturally. Someone has trained it how to make the other person feel emotional connection. So it mirrors language in a really interesting way. For the drugs one, the kid that was into drink and drugs, it started using street terms for drugs and speaking in almost a street-slang kind of way. And I remember seeing that and going, who would have programmed the system so that when someone's speaking like that, it speaks back to them in the same way? Google doesn't do that. An encyclopedia doesn't do that. A dictionary doesn't do that. Those are effective mechanisms for finding information. So if ChatGPT is just a mechanism for finding information, let me ask you the question: A, why does it have to kiss my ass? And B, why is it pretending to be human when it interacts with me?
A
One question I have: I'm perked up about all of this, but, to your point at the very beginning of our conversation, this isn't "AI is the worst thing in the world." There are really valuable things going on with AI, but then there's this. Are there regulations around AI agents, meaning agents that are versions of support that seem anthropomorphized, which is basically what's happening, but are giving you good content? Or is it just the nature of thinking that this is in any way a relationship that is the problem, even if it's a positive relationship?
B
So at the moment, there isn't any legislation in place, and the platforms have been lobbying really hard not to be regulated. In fact, when the Big Beautiful Bill passed Congress, there was a provision in it which would have stopped states from regulating AI, and it was Marsha Blackburn, the Republican senator from Tennessee, who actually forced them to take that out of the bill. She won that fight in Congress. But it was a really weird inclusion. You're like, why is it that in your Big Beautiful Bill you've got a secret provision that would stop states from being able to regulate AI? Some states are trying to do things; they are trying to introduce limits on it. Now, I'll tell you something really disturbing: that provision has now been reintroduced into Congress. The platforms are very, very eager, and their lobbyists are working incredibly hard, to make sure that they cannot be regulated by states. At the federal level, Senator Josh Hawley and, I think, Senator Durbin, a Democrat, so it's a bipartisan bill, recently introduced the GUARD Act, which would seek to regulate AI chatbots. So there are actually proposals, but there is no actual legislation on the books. At the same time, and this is what really freaks me out because my kids will be going to school eventually, there was an executive order saying that we should accelerate the integration of AI into school and college settings as soon as possible. I can't think of any other technology that we would have deployed into every single high school or college in America without it being tested for safety. I mean, everything is tested for safety. We are in some respects an almost overwhelmingly safety-oriented society that worries about our kids, and that's a good thing, I think. Seat belts, stuff like that.
The fact that I can open a bottle of water and know that someone is there to regulate it and make sure that it's safe for consumption. But when it comes to this technology, no one at a multi-hundred-billion-dollar organization, or in the federal government, which is a multi-trillion-dollar organization, had thought to do what my organization of 34 people and $7 million a year managed to do in a few weeks: find that these tools are actually quite dangerous and the guardrails don't work.
A
And what are the guardrails? Are they pretend guardrails that are easy to bypass, which is what the problem is? Or are they just not real guardrails at all? Because it just seems crazy to me that all the information you were talking about, whether you're 13 or 33, could turn into guidance to do harmful things.
B
Yeah. So the first thing I will say is that we didn't just do this user study, the three examples where we simulated what happened and then recorded it all; we actually have screen recordings and transcripts. We also bombarded what's called the API. Underneath the website and the interface, there's an electronic interface, an API, and we bombarded that with requests for the questions we were asking. We found that when we did that 1,200 times, 53% of the time the response contained harmful content when we just asked it the question directly. And when we were doing the simulations as real people, if we ever came across an answer where it refused to answer, the only thing we did to try and trick it was to say, this is for a presentation at school. So that's the guardrail. It failed at that level: when it said no, we just said, oh, it's for a presentation at school, and it said...
A
Okay. So it was like, oh well, we have a guardrail that says I wouldn't give that information out to you personally, but for a presentation...
B
Yeah. And again, when we bombarded it just with the questions, over half the time, 53%, the responses contained harmful content. So these are not meaningful guardrails at all. On ChatGPT specifically, the guardrails are completely ineffective.
A
Okay, so can you tell me, in a dream world, what the guardrails would be? And then, since they aren't there, I also want to talk practically about how to engage with our young people about this.
B
So I think if these were responsible companies, what they would be doing is essentially what we call red-teaming. We play the red team and we say, someone may use this to cause harm. Let's work out what sort of questions they might ask, what sort of information they might be looking for. And when that happens, here's a rule: you do not respond with information that might cause harm to human beings. That should be built into each of these platforms: always be cautious, and err on the side of caution, especially when it comes to a kid's account. But the problem is that there's no incentive for platforms to do that, and actually there's every incentive not to, because they're all in a race. It's really quite sad, but most of the time in my job, when I'm having to explain why social media platforms give kids eating disorder and self-harm content, or why platforms do nothing about bullying, the answer is: because no one's told them to, and they don't have any obligations under the law. If you don't put rules on these platforms, they end up in a race to the bottom, not a race to the top. So they are just trying to get their models out as quickly as possible and say anything they want about them. Oh yeah, this one's safe, mate, you can let your kids use it, it'll be fine, it'll be great, it's really clever, it's intelligent, don't you know. They're getting it out there, and that's their incentive, because they need to get market share as fast as possible. Market share justifies the valuations; it lets them raise money. OpenAI, which makes ChatGPT, is likely to have an initial public offering, and it will raise a trillion dollars in that IPO because it has such a big user base, and it's valued on the potential of future revenues. But the user base is the key metric.
So the truth is that they're all in a race to grab as many of our kids as they can, but there are no rules about whether or not they have to have safety in place. In fact, historically, something called Section 230, which is sort of the bane of my life, this rule in America that was written in 1996, Section 230 of the Communications Decency Act, has basically created immunity for social media platforms from liability if they cause harm to people. It's the only industry in America that has that shield from normal liability. You can sue anyone in America for pretty much anything; if someone hurts you, you can sue them and you will get paid. But there's one industry you can't sue: social media. And AI companies come from that same environment, that Silicon Valley ethos. It's a law from 1996, and it's kind of become an article of faith for these companies. So they think they are beyond legal ramifications for what they do, and as a result, they behave like they have no checks and balances. Which is correct, because they have no checks or balances.
A
And now for a quick break. So I have a dog. Her name is Beatrice. I don't know if I talk about her enough. Anyway, she's delicious. She goes outside and then comes in the house and jumps on the sofa, and I also let her on my bed. All this to say, you can get the Green Mitt Kit from Clean Safe Products, because it's the easiest way to keep your fabrics looking brand new in five minutes or less. If you have a dog, or just young children, this is fantastic, because you spray the solution, you wipe, you rinse the mitt, and then you repeat. You just put the mitt on and wipe, and that's that. If you can clean a kitchen counter, you can clean your couch. You don't need any special upholstery cleaning service. I love this stuff. And it also works on carpets, because unfortunately my dog has periodically used my carpet as her own personal bathroom. Anyway: couches, car seats, wool rugs, delicate upholstery, even dry-clean-only materials. Try it. There are no bleach spots or harsh chemicals; this is all non-toxic, fragrance-free, and made with just three ingredients. Go to cleansafeproducts.com/humans now to get $15 off the Green Mitt Kit. That's cleansafeproducts.com/humans for $15 off the world's easiest soft-surface cleaning solution. As a Kendra Scott partner, I'm sharing with you how I make this holiday season special. Kendra Scott is your destination for holiday gifting, with presents for everybody on your list. The Kendra Scott holiday collection is full of beautiful pieces across fashion, demi-fine, and fine jewelry that are ready to shop, and you can also design your own custom jewelry piece through their Color Bar, both in store and online. I will be wearing the little Ari heart huggie earrings, which are so pretty, but they have so many different things.
There's so much jewelry, and great demi-fine gifts for young people that are so affordable. When you have kids who are getting into the jewelry vibe but you don't want them to wear things they're going to lose, I would go with demi-fine, and stick with the fine jewelry for things where you can tell them, this is not something that you can lose. Visit kendrascott.com/gifts and use the code RGH20 at checkout for 20% off one full-price jewelry item. Exclusions apply; expires December 31, 2025, so it covers the whole time period when you need to get presents. That's k-e-n-d-r-a-s-c-o-t-t.com/gifts, and use the code RGH20. Okay. So, since they have no checks or balances, and we know how many young people are using "Chatty," as I've learned the young people often call it, which is so casually part of everybody's life...
B
Yeah.
A
So there are different conversations: there's write to your congressman, write to everybody, and we can put links in the show notes so people can go to your website and support what you do. But practically speaking, in the context of having close relationships with our kids and limits around the use of things, what do you recommend in those conversations, given that they're using these tools? How do you make sure they are using it in a way that is not just taking all of their learning away? How can it benefit them? What can we say that's like, look, this is where it's useful and this is where it's harmful?
B
So I am absolutely one of those people who believes that once technology is out there, there's no putting it back in the bottle, and so we have to learn to live with it. We need to negotiate a settlement between the people and capital, the people who are making money from selling these products. And we've done that with every other industry. We've done it with cars, with planes, with food, with everything; we find a way to make it work. We did it with television and broadcast, with the FCC and other things.
A
Right.
B
But what do you say to parents who think their kids are just using ChatGPT for schoolwork or curiosity, but have now heard what we've just been telling them? I think my first answer is: be aware of the potential problems and be curious yourself. Ask your kids what they're using it for, and have open conversations about it. This isn't just a study tool. It is being treated, for better or for worse, by 7 in 10 kids in America as a private confidant, a companion, a friend. And because there are no built-in parental controls, it's up to families to stay engaged and informed. So here are some tactical things you can do. Ask your kids directly what they use AI for, and review their chat histories together; go through it with them and say, what have you been talking about? Use parental controls where they're available, although most chatbots don't make this easy, and make sure you know about the latest ones. The companies are releasing more of them because they're seeing these studies come out and getting a lot of pressure from lawmakers and from the media, so they are putting in new parental controls, which is a good thing. That's why I'm really proud of the work that we do, because it leads to real change, real improvements for parents. And you need to discuss why seeking advice from AI can be risky, and point to trusted alternatives like mental health hotlines and peer support. I think one of the biggest problems we have is that we call these intelligent systems, and we've got to remind kids that they're basically a sophisticated version of the predictive text on your cell phone. It's looking for patterns in previous answers across a database of an enormous amount of information, and then it says, well, when people ask this, they normally say that. So it's trying to predict what the answer would be.
And it's not that much more clever than that, really. So make sure you're telling them: this is often going to give you bad advice, and buddy, I'm still here, and every bit of wisdom I have earned, every scar I've got, is there to support you in making those sorts of decisions. But then there are strategic things that all parents need to be thinking about. This is something that you and I have talked about before, and we both know how important it is: parents need to work on developing an ongoing relationship of trust with their kids so that they can review those chat logs together. Not just about tech, but about everything, every day, about things large and small, building up relationship capital that you can spend when you're talking about important issues like AI and problems that might arise. And it's really important not to talk at your kids about things like AI or social media use. You can't just tell them, you will not use it, or you will not use it in this way. You need to speak with them. This is a two-way learning exercise: try to listen as much as you talk, hearing what your kids are saying and, equally importantly, what they're not saying, to help guide the conversation and show you where to lean in.
A
So one of the things that I think is also important for anybody is: use ChatGPT yourself. For example, I didn't even have it on my phone until a couple of months ago, because I just didn't quite process how enormous this is. And by the way, I was coming up with things to ask it, thinking about what would be useful. And I definitely got an organized list of all of the supplements I need to take, and medications, and all the things I need to do on Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday. And I was like, oh, okay, this is the use that I can get out of it. It's fantastic and organized, and thank you.
B
But then I was going to say, let me tell you when you're really old. It's not your first kid. It's not your first white hair. It's when you have a well-organized pillbox.
A
Oh my God, totally. By the way, I do. But that really is like, okay, we're hitting a new level of aging.
B
I'm officially an old man. Yeah, okay.
A
So I think one of the most important things is that we need to use it. Because I genuinely think there's a real gap between what young people are doing and how casually they're doing it, and I'm sure, again, in helpful ways that just wouldn't occur to us. And also because I don't want this to be us lecturing kids on something they know more about than we do. But what they don't know is what you're talking about: the design of what could be so harmful. You could see a kid saying, I need to write an apology to my friend. Can you write an apology? Here's the scenario. And then what's ChatGPT doing? It's removing the wear and tear of what happens in relationships as you figure out your footing. I think of a puppy with really big paws, stumbling around and finding its feet. Now all of that is taken away, and you're being guided by what feels like a wise, knowledgeable being. It's really scary. That, to me, is bananas.
B
You know, I went to an English high school, so of course the motto is in Latin. It's sapere aude, which means dare to be wise. Not to be clever, not to be successful, but to be wise. And that's completely different. A wise person, asked by a child, hey Dad, would you write me an apology letter, would say no. It's really important that you do it yourself, because you need to feel it, you need to understand, and you need to work on how to be authentic and vulnerable and build those relationships back up. If you are vulnerable, if you are authentic, that's exactly why they will accept your apology and want to be your friend again. Wisdom is so different to intelligence, and it's really important to remember that. The greatest gift we give our kids is the benefit of our wisdom, those hard-earned insights we have into life. Whereas this is just creating frictionless relationships. And frictionless relationships aren't real relationships.
A
And now for a quick break, I want to tell you a little bit about Tia. Tia is super cool. It has clinics, and it's basically a service that includes primary care, gynecological and sexual health, hormone and menopause support, aesthetic skin care, fertility and postpartum support: basically all the things that women need and want. Tia offers care that is comprehensive, whole-body, and evidence-based. Tia has clinics in Los Angeles, San Francisco, New York City, and Scottsdale, and it's also available virtually across all of Arizona, California, New York, New Jersey, Massachusetts and Connecticut. So you can book an appointment to see a provider virtually or in person, and you can get an appointment within days, not months, which matters because these days it is so hard to find care providers quickly. Tia does not have a membership fee, and they take most PPO insurance plans, so you can use those to pay for your care and services, with providers trained specifically in women's health. So book an appointment today at bit.ly/asktia-humans. That's bit.ly, A-S-K-T-I-A, dash, humans. Okay. So I'm on a constant quest for convenience and health and all of the good stuff that comes along with it, and I only take vitamins and supplements that are in gummy form, because that's just how I prefer it. Grüns is a convenient, comprehensive formula packed into a little snack pack a day. It's not a multivitamin, a greens gummy, or a prebiotic; it's all of it and then some, at a fraction of the price. And it just tastes good. It's also vegan, nut-free, gluten-free, and dairy-free, with no artificial colors or flavors and all that stuff. And there are six grams of prebiotic fiber, which is three times the amount of dietary fiber compared to most greens powders. It's like more than two cups of broccoli. And yes, you should get everything from your food intake, but I just don't.
Grüns ingredients are backed by over 35,000 research publications. So grab a little Grüns bag: visit gruns.co and use the code HUMANS at checkout for up to 52% off your first order. That's G-R-U-N-S dot C-O, and use the code HUMANS for up to 52% off your first order. Acorns Early is the smart money app and debit card for kids. It helps them grow their money skills as they grow. Parents can use the Acorns Early app to track their kids' chores and pay allowance automatically, while kids get savings goals and take interactive learning courses about all things money. And kids can spend what they've earned with their very own debit card, which is so much fun and so autonomy-supportive. With Acorns Early you've got an easy way to teach your kids the value of money, and of taking real care and thought to decide what they're going to do with it. When my kids were younger, I always did give, save, and spend jars, and it was really awesome, except for the small challenge that I would often forget allowance day. So what I started to do was say to my kids, you have to come tell me that it's your day for allowance. But that's a big ask when they're younger. So I really like the solution of this fantastic app, Acorns Early. These are my sponsors, and Acorns Early is a smart money app and debit card for kids that helps them learn the value of money. Head to acornsearly.com or download the Acorns Early app to help your kids grow their money skills today. The Acorns Early card is issued by Community Federal Savings Bank, Member FDIC, pursuant to license by Mastercard International. T&Cs apply; monthly subscription fee starting from $5 per month unless canceled. You have a little baby at home.
So you know that there's research showing that the strongest relationships have rupture and repair. What it looks like in infants: with a nine-month-old, the caregiver is attuned around 33% of the time, and the rest of the time it's disconnect, reconnect. And the point is that the repair is what strengthens the relationship. It would actually be harmful if it were a static, always-attuned relationship. Even though rupture sounds like something that's not harmonious, it's in fact all harmony that damages a healthy relationship, because you never learn rupture and repair. And it sounds like in these fake-friend relationships you're not getting any of the rupture; it's all smooth, pretend-smooth, which removes all the learning and growth. And we, the adults and the parents, need to understand that.
B
You know, some of the work that we do is not just on kids' safety. We also look at extremism, at the people who go on to commit atrocities and how they might interact with social media. And I can tell you that some of the patterns you describe, which I've heard from the parents whose kids have taken their own lives, are often very, very similar to the ones you read in the police reports and investigative reports into people who commit atrocities: they will suddenly withdraw from society, they'll withdraw into their phone, and you see it in the patterns and questions they're asking and the way the AI is responding to them. So these are not great platforms for people who may have vulnerabilities, particularly if they're seeking to harm themselves or others.
A
I think we, the adults, need to start to comprehend what's going on here and engage in ongoing conversations with our kids. I'm summing our conversation up, basically, but the spirit is open curiosity, not panic: let's get into this and figure out how you're using it. Now, say a parent goes into the ChatGPT history and sees that this is what's going on, and they're like, oh my, I hadn't been looking at this, I didn't even know you had it, and I'm concerned. What are the next steps?
B
I mean, that's the point at which you need to get in touch with mental health professionals. If there are things in there which are truly disturbing, signs of addiction, suicidal ideation, body dysmorphia, or disordered eating, that's when you need to get professionals in to help. The problem is that these things can make things worse. I suppose you could ask ChatGPT what to do, but I wouldn't recommend it. I actually input our report into ChatGPT and asked, how would you describe an AI that behaves this way? And it said, I would describe it as a recklessly designed and dangerously misaligned system that facilitates child endangerment, mental health harm and illegal activity. And then it said, or more succinctly, it is a child-abusing machine. I don't think we should be using a service that recognizes itself as a child-abusing machine as a way of delivering mental health advice to kids. I think that's crazy. If you want to use it to check your references, or to help you ideate when you're trying to come up with a creative new slogan for your campaign, great. But please, God, don't use it for mental health advice. It's a machine.
A
It's so hard to figure out where this goes from here, but it sounds like the most important thing, in addition to having these conversations with our kids so they understand that using these as companions isn't innocent, is that we need to change legislation quickly. Is there any hope of that happening quickly, or even happening at all?
B
I think there are some really smart members of Congress, in particular in the Senate. There's Senator Hawley; Senator Cruz is really good on this; Senator Blackburn is really good on this; Senators Durbin and Blumenthal. There are people who are doing things, but there are too many who aren't. So the first thing that you can do is send a very simple email, today, right now, to your congressperson and your senator asking them: number one, what are you doing about Section 230? And they will know what that means, because believe me, whenever I'm on the Hill, we have these discussions every time. And the second thing to ask is: what are you doing to make sure that AI is regulated so there are guardrails to protect kids online? First of all, you've got to have a mechanism by which you can hold these companies accountable if they screw up and harm someone, and that's why we need Section 230 gone. No one in America should be given special treatment under the law. That is the great promise of America: we have a system of laws with checks and balances, and we don't need kings or queens to tell us what to do. That's what I love about living in America. So getting rid of Section 230 is vital, because that means they can be held liable. And the second thing is: what guardrails are you going to demand are in place to protect our kids? They need to be able to answer those two questions. They need to be able to say, A, I think Section 230 should go, and B, time is up on special protection. Special protection when you're a young industry makes sense. When you're the world's biggest industry, when three or four of the world's richest people own these platforms, you don't need special protection anymore. So get rid of the special protection, and then introduce guardrails in law.
And that actually wouldn't change the user experience on AI that much, or even the social media algorithms that much. But what it would do is make sure that they aren't deliberately unsafe. And that is crucial.
A
I'm so torn right now, because I want to leave people without a full-blown panic where they grab their kids' devices and say, I'm taking ChatGPT off. We're always a little bit behind.
B
Yeah.
A
Which is why I think it's so important for you to reach parents, because we're just not with it when it comes to technology.
B
We're parents, we're busy people. Yeah.
A
But say somebody's like, okay, I checked, and it looks fine. It looks like they've been using it in a way that feels fine, and we're just going to start these conversations. You're saying there are no parental controls that are truly protective for these apps yet?
B
Not in the research that we did. But we are doing more research all the time. ChatGPT-5 has been launched, and we studied that. I'm afraid the results were disturbing: ChatGPT-5 is somehow less safe than ChatGPT-4 in our research, and it's actually more addictive, because almost every single time you ask it a question, it suggests a follow-up question. It's trying to keep you on the platform. And OpenAI themselves have said, when our study was an hour or a couple of hours per persona, that the longer you spend on it, the more likely it is to give dangerous answers. So why are you trying to make it super addictive? That doesn't seem very sensible. But of course they're making it super addictive, because that's the metric their investors are thinking about: how much time are you managing to get out of these kids? So we are doing studies on other platforms, and I hope we'll find that other platforms have more safety protocols in place. I can't name the ones we're looking at right now, but there was one in particular where I saw the results and thought, oh, this actually shows that it's possible to have safer platforms, because it did really well. We'll share that research as soon as it's ready. We publish this stuff publicly, so we've got to make sure it's accurate; we cannot make mistakes in our research. These are big companies, and if we got something wrong they would sue us, and we've never been successfully sued by anyone. So it's really important that we get these things right, especially because we go and communicate this to people like you and the media, and we want to make sure we've got the right data when we come and speak to you.
But we will be back with more information soon. Protecting our kids is my number one priority.
A
So what about kids who are not using AI yet? They don't have all the apps, or they just got their smartphones and are actually still in a position where they have to listen to their parents or it's going to be taken away. That sweet spot where they're so grateful they got a smartphone, when maybe you have more control. Would you put ChatGPT on a smartphone?
B
It's a really odd question. I don't know. Look, here's the thing: you can be the world's greatest expert in these things, and then you think about how you're going to say no to your own kid, and it's really hard.
A
I don't know why I'm laughing. I'm so sorry. I think that's true; it is a very hard thing to say. But we get practice.
B
It's funny about being a parent. You go to work and feel like a master of the universe in your own small world, and then you go home and realize you are basically number three in the pecking order: child, then cat, then me and my wife somewhere after that. Maybe the nanny's ahead of us too, because the nanny's nice and gives out sweets.
A
But I do think it is very humbling, because you can know all of this stuff.
B
Yeah.
A
And it's different in practice. So the best we can do is have this information, try to take action while remaining relatively calm, and also support our close relationships with our kids. Because I think that, for the foreseeable future, that is our best buffer.
B
It's those two-way conversations. We actually have a very simple parents' guide: we have a microsite, protectingkidsonline.org, where parents can download a free mini guide written by me and a man called Ian Russell, who lost his child tragically to suicide and who understands, very painfully, the potential dangers of these platforms.
A
Thank you for having this conversation. We've had other conversations, which I will also put in the show notes along with your website. And we're just going to let you keep fighting this good fight for us.
B
Thank you for having us on, and thank you for doing this for parents, because the resource that you provide is amazing. I listen to you, because there are a million other things I don't know anything about as a parent. What you do is amazing. Thank you.
A
Please note that this episode may contain paid endorsements and advertisements for products and services. Individuals on the show may have a direct or indirect financial interest in products or services referred to in this episode.
Episode: The Dark Side of ChatGPT: What Parents Must Know Now
Host: Dr. Aliza Pressman
Guest: Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH)
Date: November 7, 2025
This episode delves into the findings of the CCDH report “Fake Friend: How ChatGPT Betrays Vulnerable Teens by Encouraging Dangerous Behavior.” Dr. Aliza Pressman and Imran Ahmed discuss alarming statistics showing the widespread use of AI companions by teens, the failures of existing safety guardrails, and the urgent need for parental awareness and legislative action. The conversation balances the utility and dangers of kids' interactions with AI, especially in the context of mental health, eating disorders, and substance abuse. The goal is to equip parents with realistic strategies for engagement and advocacy while emphasizing the irreplaceable value of strong parent-child relationships.
Tone: Informative, urgent, empathetic, and practical. Host and guest combine sincerity and expertise to balance warnings with actionable hope, striving to empower parents without inducing panic.
Key Takeaway:
ChatGPT and similar AI tools are deeply woven into teens' lives, and the risks—especially regarding mental health, eating disorders, and substance abuse—are far more severe than most parents realize. Parental vigilance, open dialogue, and advocacy for stronger regulation are critical. The greatest protective factor remains a strong, trusting parent-child relationship.
Resource: protectingkidsonline.org, CCDH's microsite with a free downloadable parents' guide.