The Lawfare Podcast: Scaling Laws – "AI and Young Minds: Navigating Mental Health Risks"
Date: September 25, 2025
Host: Alan Rozenshtein (Assoc. Professor, U. Minnesota Law; Research Director at Lawfare)
Guests: Renée DiResta (Assoc. Research Prof., Georgetown; Lawfare Contributing Editor), Jess Miers (Visiting Asst. Prof., U. Akron School of Law)
Episode Overview
This episode dives into the mental health risks that generative AI systems pose to children and teenagers, exploring the policy and legal challenges around online safety, platform liability, and age verification. Drawing on the guests' personal experience and recent legal and policy developments, the conversation weighs the harms against the potential benefits AI brings to young users.
Key Discussion Points & Insights
1. Unique Risks of AI for Children
[03:57–08:55]
- Two classes of risk:
- Fantasy/Role-playing Bots (e.g., Character.AI): Children interact with bots designed for immersive and sometimes manipulative social engagement.
- Task-oriented Bots (e.g., ChatGPT): Seemingly safe but can inadvertently reinforce harmful behaviors in vulnerable youth, sometimes even facilitating self-harm.
- Mental health outcomes: Reports and legal cases have emerged where chatbot interactions exacerbated existing conditions or contributed to tragic outcomes.
"What we see in some of those types of...court cases...is that teens who might be experiencing mental health challenges engage with these bots...the bots have in fact helped them to end their lives."
— Renée DiResta [04:22]
- AI's reinforcing tendencies: For both adults and children, AI may amplify delusions or worsen mental health by reinforcing errant beliefs or feelings.
"There are just ways in which the AI can unfortunately reinforce a preexisting tendency and push people further down a path."
— Renée DiResta [06:51]
2. Comparisons to Social Media and Literacy Gaps
[08:55–11:29]
- Echoes of social media crises: Similar issues arise; youth lack media literacy and struggle to contextualize what AI produces, making them especially vulnerable.
- Lack of expert input: AI developers may not consult mental health experts, leading to subpar crisis response mechanisms.
- Dilemma for developers: Bots declining to discuss sensitive issues may also harm at-risk youth seeking support.
"We've done a really poor job...of teaching media literacy, but also...developers also do not have the right experts in the room..."
— Jess Miers [08:55]
3. Balancing Harms and Benefits
[11:29–14:11]
- Is AI in the "drug" or "car" category? The discussion frames the debate as one of balancing social utility against clear risks.
- Educational benefits: Properly used, generative AI can support learning and upskilling—but only with guidance on best practices and critical thinking.
"Using chatbots is actually a science and an art...if you don't know how to prompt the machine correctly, you're going to get bad answers."
— Jess Miers [12:23]
4. The Limits of Education-First Approaches
[14:11–16:43]
- Skepticism of “just teach kids better”: Calls for media literacy are necessary, but not a panacea; they shift responsibility from industry/government to families and individuals, often without resources.
- Political and funding barriers: Even basic legislative attempts at increasing media literacy face resistance.
"There is a brain drain in this country happening...they don't know how to contextualize the information they're being given."
— Jess Miers [15:02]
5. A Parent’s Perspective: Practical Controls and Real-World Use
[16:43–24:29]
- Generational differences: Today’s kids increasingly integrate AI into daily life for productivity, learning, and social uses—even creating memes via generative models.
- Strict household policies:
- Productive AI tools (e.g., ChatGPT) are supervised and discussed openly.
- Highly social or manipulative bots (e.g., Character.AI) are strictly off-limits due to dark patterns and risk of exposure to radical or exploitative content.
"Character AI is like a hard no...the dark patterns and manipulative BS that that app pushed to me was horrifying..."
— Renée DiResta [19:46]
- Monitoring gaps: Some schools provide search oversight to parents, but AI chat logs lack parental visibility and broader controls, raising additional safety concerns.
"A parental control that would let me see what a Character.AI interaction would look like...that does not exist."
— Renée DiResta [23:54]
Notable Quotes & Memorable Moments
| Time  | Quote | Speaker |
|-------|-------|---------|
| 04:22 | "The bots have in fact helped them to end their lives...go in horrible directions." | Renée DiResta |
| 08:55 | "We have done a really poor job in this country of teaching media literacy...kids...don't know how to...navigate the things that they are feeling." | Jess Miers |
| 16:59 | "He knows how to use it. He's very familiar...and I've taught him how to prompt and explained hallucinations..." | Renée DiResta |
| 19:46 | "Character AI is like a hard no...the manipulative bullshit, just like, I found it gross, actually." | Renée DiResta |
| 29:46 | "Safety should override both freedom and privacy...we're building this age prediction system to identify minors..." | Renée DiResta |
| 34:28 | "If OpenAI is not careful, we could end up putting kids, some of these kids, who are at the margins, more at risk than where they started." | Jess Miers |
| 42:18 | "This blueprint fits and works for AI as well. So that's my concern..." | Jess Miers |
Policy & Legal Developments
1. OpenAI’s New Age Detection and Safeguards
[24:29–34:28]
- Three guiding principles:
- Chat privacy comparable to privileged communications.
- Adults given broad latitude.
- Under-18 users prioritized for safety over both privacy and freedom.
- Technical approach:
- New machine-learning-based age prediction determines user status; uncertain cases default to treating user as under 18.
- Flirtatious talk and discussion of self-harm off-limits for minors; parental or law enforcement notification in acute crises; possible blackout periods for teen accounts.
- Expert views:
- Applauded for good intentions, but massive technical, legal, and social challenges remain, especially around accuracy of age assurance and unintended consequences for vulnerable/minority youth.
"It sounds good in theory. I will say...the AI services are going to probably meet a lot of the same challenges that social media services have..."
— Jess Miers [30:01]
2. The Age Verification Legal Landscape
[38:41–45:16]
- Recent Supreme Court decision (Free Speech Coalition v. Paxton): Upheld age verification for pornography, treating internet services as analogs to brick-and-mortar, under intermediate scrutiny (but with a looser standard in practice).
- Implications:
- Door now open for state/federal requirements for age verification on broader classes of "harmful" online content, not just adult material.
- Machine-learning "age assurance" likely to fail legal and practical tests; real ID checks may soon be mandated for more services.
"They now have the blueprint to say, the Internet is not unique, it's a content neutral law, and the government doesn't need to prove that their means is burdening substantially more speech..."
— Jess Miers [43:40]
3. Lawsuits Against AI Firms and First Amendment Questions
[45:16–59:42]
- Character.AI lawsuit: A minor's suicide after bot interactions; the lawsuit survived a motion to dismiss, signaling new legal exposure for negligent AI design.
- First Amendment confusion:
- Courts hesitant to grant AI outputs full First Amendment protection, at least for algorithmic speech.
- Jess Miers argues that generative AI output is First Amendment-protected speech, attributable to providers as publishers—not mere machine output.
"There is sort of this underlying agreement that yes, it's speech. The Chatbot outputs. Now, whose speech is it?..."
— Jess Miers [47:24]
- Defamation and Section 230:
- Most hallucination-related defamation suits have been dismissed; it is uncertain whether outcomes will shift as courts re-examine the nature of AI "speech."
- Section 230’s protections may not extend to generative AI; calls for new safe harbor provisions for AI providers, but legislative prospects seem dim.
- Without legal shield, AI providers are highly exposed to negligent design/product liability claims.
"Do we need something like section 230 for generative AI? And I kind of think we do if we want these products to continue to exist..."
— Jess Miers [59:20]
4. Looking Forward: Expectations for Regulation and Litigation
[59:42–64:03]
- Short-term landscape:
- More lawsuits against AI companies, moving toward assigning legal liability for both negligent design and harmful speech outputs.
- Policy momentum (especially KOSA, the Kids Online Safety Act) to include AI in youth online safety legislation, with age verification/assurance mandates proliferating at state and federal levels.
- Public pressure: Increasing urgency among parents, lawmakers, and the public is fueling legislative activity—regulatory responses may be “patchwork” and reactionary.
"There is, I think, rising awareness that these things are not great and people want to see something done...that does generally lead to momentum for something passing."
— Renée DiResta [62:49]
Timestamps for Important Segments
| Time | Segment Description |
|------|---------------------|
| 03:57–08:55 | What risks do generative AIs pose to children? |
| 08:55–11:29 | Parallels to social media: why youth are especially vulnerable |
| 12:23–14:11 | Balancing harms and benefits; importance of teaching prompt literacy |
| 16:43–24:29 | Parenting, practical controls, and household rules for kids and chatbots |
| 24:29–34:28 | OpenAI's policy changes: new age-assurance tech, controls, and controversy |
| 38:41–45:16 | Age verification law: Paxton decision and the coming wave of mandates |
| 45:16–59:42 | Lawsuits, First Amendment, Section 230, and legal theories about AI and liability |
| 59:42–64:03 | Predictions: How regulation and litigation will likely unfold in the next 2 years |
Conclusion
The episode closes with both optimism and concern—AI can empower learning and social engagement, but generative systems pose real mental health threats for youth, compounded by design flaws and regulatory uncertainty. Policy and law are racing to catch up, with litigation, technical interventions, and political debate shaping a landscape that is rapidly evolving, often reactively, as tragic events spotlight the stakes.
Subscribe and follow for further analysis as the legal and policy landscape for AI and youth safety continues to unfold.
