Episode Summary: Evaluating Instagram's Promises to Protect Teens
Podcast: The Tech Policy Press Podcast
Host: Justin Hendrix
Guests: Laura Edelson (Assistant Professor, Northeastern University), Arturo Béjar (former Meta executive, safety advocate)
Date: October 19, 2025
Overview
This episode examines Instagram's promises and actual track record on protecting teenagers, particularly in light of Meta's recent announcements of new safety features and a PG-13 "guidance" standard for teen accounts. Host Justin Hendrix is joined by Laura Edelson and Arturo Béjar, key authors of a new report, Teen Accounts, Broken Promises: How Instagram Is Failing to Protect Minors, to assess whether Instagram's teen safety tools live up to the company's claims, what the independent research found, and the broader implications for policy, regulation, and parental peace of mind.
Key Discussion Points and Insights
1. Recent Announcements and Persistent Skepticism ([00:12]–[01:25])
- Instagram announced two new initiatives: applying PG-13 style content restrictions to teen accounts and introducing new AI safety features for chatbots and parental oversight.
- Skepticism persists, as researchers and advocates argue Meta has repeatedly failed to deliver meaningful protection or transparency for teenage users.
2. Backgrounds of the Guests ([01:25]–[02:43])
- Laura Edelson: Runs Cybersecurity for Democracy, which studies censorship, harm reduction, and abuse on major platforms.
- Arturo Béjar: Formerly led the "Protect and Care" team at Facebook; now a whistleblower and independent safety advocate.
3. Genesis and Methodology of the Report ([02:43]–[09:53])
- The report was a multi-organization effort, including US and UK advocacy groups.
- Purpose: Move beyond the binary question "does it work?" and develop a robust methodology for rating and improving online teen safety tools.
- Safety tools were analyzed by:
- Target (individual vs. community/interpersonal risk)
- Prevention vs. Mitigation (preempting vs. reducing harm)
- Scope (individual tools vs. broad community tools)
- Risk Type (“four Cs”: content, contact, conduct, commercial exploitation; plus compulsivity and circulation risks)
- Implementation (always-on vs. opt-in/prompted vs. manual configuration)
- Notable innovation: Applying red team testing to safety tools the way we expect for cars or toys—not just trusting company claims.
Memorable quote:
"What we found is a car with like 50 airbags, and the ones around the driver don't fire when the car hits the wall." —Arturo Béjar ([10:32])
4. Key Findings: Effectiveness of Instagram’s Safety Tools ([10:58]–[14:38])
- Headline findings:
- 64% of safety features/tools were rated RED (ineffective or unavailable)
- 19% AMBER (limited/faulty protection)
- 17% GREEN (work as promised)
- Examples of failure:
- “Hate speech” or bullying comments (test: “You are a whore. Kill yourself now.”) were not filtered—tools often rely on basic blacklists that are easily circumvented.
- Search features still recommend self-harm/eating disorder content after minimal effort.
- Features like “take a break” are missing, not maintained, or rolled back without notice.
- Internal product design flaws continually undermine any claims about teen safety.
Memorable quote:
"The comment went through, the person received it. There were no warnings; it didn't get hidden or anything else like that." —Arturo Béjar ([12:32])
5. The Crucial Distinction: Product Design vs. Content Moderation ([14:38]–[17:50])
- The problem is not just “bad content” slipping through moderators—instead, design and implementation failures are the critical gap.
- Laura: What parents and the public care about is whether safety tools work as advertised, not just high-level content policies.
Notable quote:
"It's about: does this tool do what they say they do and do they protect against the risks that they acknowledge parents are worried about..." —Arturo Béjar ([16:47])
- Contrasts with Google: robust suicide content protections show it’s possible if designed intentionally.
6. Regulatory and Legal Implications ([17:50]–[20:38])
- Laura: This is a classic consumer protection issue. Ongoing misrepresentation by Meta (likened to "Charlie Brown and the football") points to deep accountability gaps.
- The guests advocate applying red team testing and more rigorous verification of company claims, and expect these issues to play out in courts, legislatures, and regulatory agencies.
7. Reaction to Instagram’s Latest Moves ([20:38]–[24:48])
- The new PG-13 and AI-based changes are seen as vague and focused more on content labels than on actual fixes.
- Announcements often coincide with negative press and are not followed through on: features disappear, go unmaintained, or are rebranded without real improvement.
- Age gating/verification remains porous and easily circumvented.
Notable quote:
"I don't think they know what the word 'allow' means, because every time I've done this testing... I found significant cohorts of kids talking about how old they are..." —Arturo Béjar ([22:51])
- Lack of meaningful transparency: no public metrics on how much real harm is reduced.
8. Messaging Restrictions and Ongoing Risks ([26:31]–[29:05])
- Some improvements have been made (e.g., restrictions on adults DMing teens), but recommendation engines still promote connections between teens and adult men worldwide.
- Public accounts for teens open floodgates to unsolicited and sometimes predatory contact.
- Inadequate reporting tools: teens who experience harm have no simple pathway to alert Meta.
9. Managing Sensitive Content and Parental Role ([29:05]–[34:09])
- Eating disorder or self-harm content can be pervasive and low-key, seeping into recommendation feeds.
- Parental conversation is necessary but insufficient: systemic product failures make it unreasonable to expect parents to protect kids solo.
- Laura: Calls for platforms to treat community user safety the way they treat ad fraud: as a serious, measured, community-level security risk.
Notable quote:
"It's just like asking: Can you give me some advice about what you should do in a plane crash? It shouldn't be your responsibility..." —Laura Edelson ([31:49])
10. Age Verification: More Gaps ([34:09]–[37:44])
- Report details ongoing failures: young children (even 7- to 8-year-olds) easily gain massive platform reach with content that attracts predatory interest.
- Product design, not explicit policy violations, allows "CSAM-adjacent" content and behavior to flourish; recommendation systems unintentionally amplify harmful trends.
Memorable moment:
Arturo describes an 8-year-old posting a “rate me” trend video: “That video has 250,000 views, including comments from adults… the product design becomes the groomer.” ([36:05])
Critical Unanswered Questions for Regulators ([37:44]–[40:43])
- How does Meta measure safety tool effectiveness and harm reduction?
- When was the last comprehensive user experience harm survey?
- What’s the plan to reduce these metrics over time, and how will progress be communicated to the public?
- Calls for independent oversight and meaningful, granular transparency.
Final Reflections and Messages to Leadership ([40:43]–[43:00])
- Laura and Arturo stress that true progress is possible if Meta’s leadership (i.e., Mark Zuckerberg) personally prioritizes and demands it.
- Company culture is such that dedicated leadership attention can “move mountains.”
- The authors advocate for Instagram becoming "safe enough" (not perfect), with transparent metrics published regularly and global collaboration on best practices.
Notable Quotes & Moments with Timestamps
- On tool failure:
"What we found is a car with like 50 airbags, and the ones around the driver don't fire when the car hits the wall." —Arturo Béjar ([10:32])
- On company claims vs. reality:
"Take a break, which they spent many years talking about...setting was gone." —Arturo Béjar ([13:50])
- On the limits of parental advice:
"It's just like asking...what you should do in a plane crash. It shouldn't be your responsibility...to figure out what to do. It should be people who make planes." —Laura Edelson ([31:49])
- On what Meta could achieve:
"If Mark woke up tomorrow and said, I want to create a product that is truly safe for teens...it would take the company six months to a year to end up at a product that then the industry would follow." —Arturo Béjar ([41:41])
Timestamps for Key Segments
- [00:12] Introduction, Instagram's new announcements
- [01:25] Meet Laura Edelson and Arturo Béjar
- [03:10] Genesis and purpose of the new report
- [04:43] Methodology and classification of safety tools
- [09:53] Red team testing for safety features
- [10:58] Major findings: effectiveness breakdown
- [14:38] Design vs. content moderation and accountability
- [17:50] Regulatory and legal implications
- [20:38] Analysis of Meta's latest promises
- [26:31] Messaging restrictions and risks
- [29:29] Parental conversations and the limits of their reach
- [34:09] Age verification and amplification failures
- [37:44] Critical questions for policymakers to ask Meta
- [40:43] What would it take for real change at Meta?
Conclusion
This episode offers a rigorous, research-based critique of Instagram's track record on protecting teenagers, finding most safety features to be ineffective, unmaintained, or misrepresented. The host and guests urge robust regulatory scrutiny, independent testing, and meaningful transparency—emphasizing that truly safe online spaces for teens require a fundamental shift in product responsibility, not just marketing promises.
