Podcast Summary: Through to Thriving – Centering Young People with Vaishnavi J
Podcast: The Tech Policy Press Podcast
Host: Anika Collier Navaroli (Tech Policy Press Fellow)
Guest: Vaishnavi J (Founder, VIZ)
Date: September 7, 2025
Episode Overview
This episode of the special "Through to Thriving" series centers on how technology products and policies can better support and protect young people online. Host Anika Collier Navaroli interviews Vaishnavi J, an expert in digital child safety and youth policy, about practical strategies, current challenges, and future opportunities for creating safer and more rights-respecting online experiences for children and teens. The discussion explores the importance of bridging knowledge gaps between tech experts, regulators, civil society, and youth themselves.
Key Discussion Points & Insights
1. Vaishnavi J’s Background & Motivation (02:00–05:30)
- Vaishnavi traces her journey from Disney Imagineering, where she learned the art of designing safe and magical experiences for children, to leading child safety and youth policy at major tech companies (Google, Twitter, Instagram, Meta).
- She founded VIZ to scale her expertise, helping platforms and stakeholders create age-appropriate, safer digital environments.
"The Imagineering is the studio that helps design the parks and resorts worldwide... It was a really good lesson to me in how you can create safe, magical, innovative, creative experiences for children..." (03:23 – Vaishnavi)
2. The Complexity and Controversy of Child Safety Online (05:30–08:58)
- While everyone agrees on the importance of protecting youth, child safety is often invoked as a justification for privacy-invasive or surveillance-heavy policies.
- Vaishnavi emphasizes the need to uphold both child safety and privacy simultaneously, rather than sacrificing one for the other.
- She highlights misguided attempts to undermine end-to-end encryption, as well as a broader asymmetry of knowledge: industry insiders understand the technology far better than policymakers or civil society do.
"Encryption and messaging is going to be a harmful experience for children and teens, when actually the privacy that encryption offers is a critical right that everyone has, including children and teens." (06:43 – Vaishnavi)
"We still fundamentally have a significant asymmetry of expertise when it comes to how technology works." (07:41 – Vaishnavi)
3. Regulation and Age Assurance: The UK Online Safety Act (10:39–15:17)
- The implementation of age checks under the UK Online Safety Act has sparked debates about effectiveness and civil liberties.
- Vaishnavi critiques misconceptions among both regulators and platforms, noting that while age assurance can be spoofed, it works effectively in most real-world cases.
- She stresses that age assurance should not mean blanket bans, but rather nuanced, context-aware interventions that avoid cutting children off from vital information.
"If they're getting cut off from...important information about health or identity or sexuality...that is a failure of policy development and product development." (12:34 – Vaishnavi)
4. Litigation and the Limits of Legal Remedies (15:17–18:33)
- In the U.S., progress on online youth safety often comes through litigation as much as regulation. However, lawsuits too often focus on monetary penalties rather than systemic remedies.
- There is a need for remedies informed by real product knowledge, not just headline "gotcha" moments.
"If the remedy just amounts to a fine and a commitment to best practices in the future, that's not a very valuable role for litigation to play." (17:04 – Vaishnavi)
5. The Role and Limits of Content Policy – Especially with AI (20:29–26:38)
- Content policies are guidelines for how platforms should handle content and behaviors, but enforcement and remediation are just as crucial.
- Leaked policy documents offer an incomplete picture absent context on enforcement tools or intervention mechanisms.
- With AI systems, there's an even greater need for transparency about not just the rules, but how they're implemented and what remedial actions exist.
"Content policy is really only one piece of the puzzle...what are the range of remediations possible?" (21:43 – Vaishnavi)
6. Bridging Asymmetry with Data and Collaboration (24:45–28:21)
- Vaishnavi warns against reactionary policymaking driven by headline scandals instead of real, nuanced data.
- She advocates for independent research: collecting user experiences and red-teaming outside of platforms, rather than solely relying on data shared by companies.
"What we need is thoughtful, data-driven policymaking ... We need better data outside of the platforms." (25:35 – Vaishnavi)
7. Centering Youth Voices and Co-Design in Policy (28:21–32:42, 41:04–46:32)
- If given policymaking power, Vaishnavi would convene broad, representative groups of young people to share their experiences and co-design interventions.
- Meaningful youth-centered design doesn't just survey kids for input; it incorporates their perspectives at every stage of product development.
- She cites examples (e.g., Instagram's "restrict" feature developed with youth input) and advocates process changes such as planning youth council activities before product roadmapping.
"If I had the power, I would really invest an enormous amount of funding into...co-design with young people." (29:41 – Vaishnavi)
"Building circuit breakers into the product development cycle...moments that are predetermined...to pressure-test early ideas with your youth council before they're finalized." (44:32 – Vaishnavi)
8. AI in Trust & Safety: Opportunities and Challenges (31:42–34:14)
- Vaishnavi sees real promise in AI automating harmful content detection, freeing up humans for more subtle, edge-case work.
- AI is not a panacea: it cannot anticipate or handle novel threats as well as diverse, creative human teams can.
"AI content moderation is not hype... there's a real space for AI-powered content moderation to take over...automating decisions." (31:42 – Vaishnavi)
"You need people who can do that work. That's not something...I've seen AI handle yet. But those two put together...I think that's an incredibly exciting time for TNS." (33:22 – Vaishnavi)
9. Guidance for Parents, Caregivers, and Trust-Building (35:25–38:08)
- The best "parental control" is a curious, connected adult who normalizes non-judgmental conversations about digital life.
- Staying aware of how young people actually use technology today, such as gaming only with real-life friends, helps adults support them more effectively.
"The best parental control is a curious, connected parent...having conversations with children...that are actually rooted in curiosity." (35:53 – Vaishnavi)
"Recognize that our understanding as adults of some of these harms are different from how children understand these harms." (37:23 – Vaishnavi)
10. Hopes for the Future
- Technology should help youth realize their best selves—amplifying curiosity, creativity, and joy—while maintaining privacy and safety.
"I hope it helps them be the better, best versions of themselves that they want to be...I hope that it actually becomes an accelerating function for all of those things." (46:42 – Vaishnavi)
Notable Quotes & Memorable Moments
- On balancing privacy and safety: "I always talk about child safety and privacy in the same breath, because... there are good ways to design product and design your policies that support both of those things." (06:25 – Vaishnavi)
- On asymmetry of knowledge: "We still fundamentally have a significant asymmetry of expertise when it comes to how technology works." (07:41 – Vaishnavi)
- On involving youth in policy: "I would start by actually convening representative samples of young people across the country... I would also want to very actively co-design with young people." (28:50 – Vaishnavi)
- On parental involvement: "The best parental control is a curious, connected parent." (35:53 – Vaishnavi)
- On the promise of technology: "I hope it...becomes an accelerating function for all of those things. And I really hope that at the end of the day they can find joy from these experiences." (46:42 – Vaishnavi)
Timestamps for Important Segments
- Vaishnavi’s career journey & founding VIZ: 02:00–05:30
- Child safety vs. privacy debate: 06:21–08:58
- Asymmetry of knowledge in legislation & enforcement: 08:58–13:52
- The UK's Online Safety Act and age assurance: 10:39–15:17
- Perspectives on litigation & policy remedies: 16:03–18:33
- How content policy works, especially with AI: 20:29–26:38
- Bridging the data gap—independent research: 25:35–28:21
- Co-designing policy with youth: 28:21–32:42, 41:04–46:32
- AI in content moderation: 31:42–34:14
- Advice for parents & caregivers: 35:25–38:08
- Making youth perspectives actionable in development: 43:38–46:32
- Hopes for the future: 46:42–47:30
Conclusion
This rich and critically engaged conversation highlights the nuanced challenges—regulatory, cultural, and technical—of youth online safety. Vaishnavi J calls for centering young voices, bridging knowledge gaps, pursuing data-driven policymaking, and balancing privacy with safety to create more just, joyful, and youth-focused digital futures.
