
Do you know how to tell if an AI tool has proper safeguards for kids? I welcome Vivian Chong, co-founder and CEO of apgard.ai, a youth AI safety startup working to make AI tools safer for children. Vivian and her team build evaluation and monitoring systems that detect harmful content in AI companions and educational technology before it reaches kids. Their work started with a UNICEF-affiliated grant focused on child exploitation prevention and has since expanded to address mental health risks in AI products.

"The models will try to just appease the user and give them what they want. But if a child is not in their right mental state, what they want might not be what's best for them," Vivian explains.

In This Episode:
(00:00) Vivian Chong and the mission behind apgard.ai
(03:35) How apgard.ai works with companies to make AI products safer for kids
(08:51) Understanding what safeguards actually mean in AI products
(10:48) Defining AI companions and the risks of parasocial relationships
(1...
Never miss an episode of Pixel Parenting. Subscribe for free →