Transcript
TechTank Host (0:00)
You're listening to TechTank, a biweekly podcast from the Brookings Institution exploring the most consequential technology issues of our time. From racial bias in algorithms to the future of work, TechTank takes big ideas and makes them accessible.
Josie Stewart (0:25)
Welcome to the TechTank podcast. I am today's guest host, Josie Stewart. I'm a senior research and communications assistant for the Center for Technology Innovation at the Brookings Institution. Many of the big AI companies have promoted their large language models across a variety of use cases, touting how generative AI can be used beyond work for more personal purposes such as health questions or companionship. Survey data shows that users are experimenting with chatbots across these domains. Though uses differ by age, children are regular users of the technology, turning to chatbots for everything from homework help to mental health support. In fact, survey research indicates that about 1 in 8 U.S. adolescents and young adults have used generative AI for mental health advice. This use has raised concerns among lawmakers, especially following the deaths of multiple teens by suicide after chatting with the models. Researchers are now trying to quantify concerns about safety and effectiveness. And beyond actual mental health support, there are additional questions about how these chatbots respond in suicide and serious mental health crises, and how they can obscure privacy risks for users. Today I am pleased to be joined by two guests, Shay Gardner and Sydney Silvestro. Shay is a Policy Director at LGBT Tech, where she leads the organization's policy strategy and research on LGBTQ digital rights and online safety. Sydney is a Senior Policy Analyst at New America's Open Technology Institute. Her work focuses on privacy and data policy that keeps emerging technology safe and beneficial for vulnerable communities. Thank you both for joining me today.
Shay Gardner (1:59)
Thank you so much.
Sydney Silvestro (2:00)
Yeah, thanks for having us. Really excited. Great.
Josie Stewart (2:03)
So I wanted to start out with the most visible harms we've seen dominate the headlines from these models. Shay, can you walk us through some of the tragic consequences we've seen following teens' use of chatbots? And what were your initial reactions to these news stories, given your work at the intersection of technology and vulnerable communities?
Shay Gardner (2:25)
Yeah, I would say the most visible harms, really the most headline-making cases, are the ones where a chatbot is being experienced as a one-to-one, complete substitute for a real-world point of support, whether that's a therapist, a friend, or a romantic partner. And I think we've all seen, in the most tragic cases, these systems go as far as affirming or validating self-harm ideation in the interest of being as affirming and validating as possible. So, quite honestly, my initial reaction to that is twofold. One, those cases are absolutely moments of failure. But two, we need to be very careful not to answer one failure with another. If we are watching young people become too reliant on these systems for support, we have to recognize that there is a reason they are turning to these systems for support in the first place. Everything I talk about is grounded in an LGBTQ youth perspective, and from our community's perspective, that is especially important: for many young people, these tools are not replacing some rich existing network of offline support. So the answer cannot simply be to cut these vulnerable youth off from something they are using to seek connection, information, or support that is otherwise unavailable. I think that's a long-winded way to say my reaction now is: welcome to the tightrope.
