Podcast Summary
Podcast: Scrolling 2 Death
Episode: Teen Safety or PR Stunt? Meta's False Promise to Parents (with Arturo Bejar)
Host: Nikit Petrossi
Guests: Arturo Bejar (former Facebook engineering director, Meta whistleblower), Sarah Gardner (Heat Initiative)
Date: October 9, 2025
Overview of the Episode
This episode centers on Meta's (formerly Facebook's) widely publicized "Teen Accounts" safety features on Instagram, scrutinizing whether these updates genuinely protect young users or merely serve as public relations tools. Host Nikit interviews Arturo Bejar, a former Facebook engineering leader and renowned whistleblower, who critiques Instagram's (and by extension, Meta’s) approach to child safety. The episode is supported by recent research and firsthand testimonies—including from adolescents themselves—demonstrating persistent and extensive harms on the platform, despite Meta's safety promises. Sarah Gardner of the Heat Initiative joins later to share fresh survey results from U.S. teens about their real Instagram experiences.
Key Discussion Points & Insights
1. Arturo Bejar’s Background and Early Facebook Experience
- Arturo’s tenure (2009–2015):
- Led child safety and support tools development.
- Reported directly to top FB leadership.
- The organization valued transparency, academic oversight, and incorporated young people’s feedback.
- “When I left, I was like, yeah… This seems to be a way to work that protects kids as they should be protected, as I would want my own kids protected.” (01:13)
- Reason for leaving: Personal factors (divorce, need to focus on parenting teens).
2. Arturo’s Return & Firsthand Motivation
- Catalyst for returning (2019):
- His daughter’s alarming experience on Instagram at age 14: unsolicited explicit images (known as “dick pics”) from peers, with no effective reporting tools.
- “Shortly after she went on [Instagram], she started telling me about getting, like, unsolicited dick pics at dinner. And we're like, who does that? Yeah. At 14.” (03:10)
3. Nature and Scale of Harms to Teens
- Law of large numbers:
- At Instagram’s scale, even a single reported incident implies thousands or millions of users are similarly affected.
- Staggering research findings from Meta’s own surveys:
- 1 in 8 teens reported unwanted sexual advances.
- 1 in 10 reported bullying and harassment.
- 1 in 3 witnessed bullying in just the past week. (04:10–06:13)
- Company response:
- Leadership (Mark Zuckerberg, Sheryl Sandberg, Adam Mosseri) were made aware yet took no meaningful action.
- “In my experience, I send an email like that, and the next day I get a meeting, and the next day there's already things to make it better. Wow. That's not what happened… And they didn’t do anything.” (06:13)
4. Shift in Company Culture & Safety Practices
- Removal of prior protections:
- Many tools and teams focused on youth safety were disbanded by 2019.
- “[They] deleted the tools, and by the time I came back to Instagram, there wasn't even memory of the lessons learned from that work.” (07:47)
- Introduction of algorithmic feeds:
- Algorithmic changes drove compulsive usage and rabbit holes into eating disorder and self-harm content.
- “With that [algorithmic feed] comes compulsive use, addiction… not an intended consequence, but if you ignore it for long enough, what can you say, right?” (08:32)
5. Meta’s Prioritization & PR Spin
- Why hasn’t Meta fixed it?
- Bejar: Not about profit per se; the company simply "doesn't care about meaningfully reducing harm to kids."
- Essential safety improvements could be made with minimal resources.
- “...they were asking for 88 people [out of 80,000 employees] to reduce this harm. And those didn’t get given.” (09:38–11:12)
- Meta’s narrative:
- Continually minimizes harm as “edge cases,” mischaracterizes or downplays real risks.
- “You can't call a suicide out of bullying an edge case... Addressing those things is the price of admission.” (12:22)
- Misleading statistics:
- Company frequently uses selective measurement (“number of tools” instead of number of affected kids) and glosses over real impact.
- “For a company that measures everything with such precision, I thought that the fact that they kept talking about their safety program by number of tools and not by number of kids was extremely suspect.” (15:11)
6. Testing Instagram “Teen Accounts” — Are Kids Safer?
- Arturo’s and researchers’ findings:
- Every core promise of “Teen Accounts” broken:
- Harmful content (violence, nudity, sexual content) still pushed to new teen accounts on default settings.
- Features touted as “on by default” often require navigating through dozens of toggles/screens (e.g., 50 toggles over 10 screens). (17:22–18:47)
- “I’d be the first person to be, that's amazing… it's a tragedy that [some features] weren't on by default… [Notification settings have] 50 toggles over 10 screens to change your notification preferences.” (15:11 and 18:47)
- Algorithms still promote risky connections and content:
- Recommendation systems continue to suggest strangers, including adults, as “friends.”
- Search auto-completes and recommendation functions easily evade safety blocks (e.g., misspelled eating disorder or self-harm terms still produce harmful results).
- “If you mistype starve, Meta AI understands that the query is about eating disorders and gives you a little paragraph… but it’ll recommend the content.” (24:50–26:25)
- Inadequate reporting features:
- Teens still lack a simple “report inappropriate contact” button.
- After years of research and testimony, including before Congress, no substantive fixes are in sight.
- “It’s been two years from that [Congress testimony] and there’s still not a button when a teenager can say that they've experienced an unwanted sexual advance.” (14:05)
7. Real Experiences: Mental Health Impact & Researcher Well-being
- Direct negative impact on adults and teens alike:
- Even researchers felt depressed or disturbed after simulating Instagram teen experiences.
- “For my own mental health, I had to stop for a period of time.” (19:43)
- The “not interested” button fails:
- The feed continues to serve similar violent content despite repeated negative feedback.
- “Hit, not interested. The next video. Violent… broken bones… You have to be [graphic], that's the ground truth.” (27:13–27:53)
8. Input from the Heat Initiative: The Teen Perspective
- Sarah Gardner presents new survey data:
- Polled U.S. teens: nearly half recently experienced unwanted contact or harmful content; half were recommended strangers, often adults.
- “One of the other really sort of scary stats was that around half also said that they were suggested friends who were strangers and who they thought were adults… So the whole, like, not contacting or not getting, you know, adults not being able to be recommended to kids is like, that is not happening.” (32:26–35:00)
- Reality vs. Meta’s marketing:
- The number of teens using Instagram Teen Accounts does not equate to safety; it only demonstrates effective PR.
9. Parental Actions and Advice
- Practical advice for parents (Arturo):
- Acknowledge to your children that bad things will happen online; make your home a “safe harbor” for sharing uncomfortable experiences.
- Encourage communication and support — with you or, failing that, with another trusted adult.
- Advocate for regulatory and legislative change. (39:46–41:12)
- Direct call to action (Sarah):
- Parents should tag and DM Instagram head Adam Mosseri with concerns and research findings.
- “We want you to take the content that we've made about this report and DM it to Adam Mosseri and say, please address this issue… that is a concrete way you can get involved and send a signal to Instagram.” (36:41)
Memorable Quotes & Moments
- Arturo Bejar:
- “The largest scale sexual harassment of teens to have ever happened, organized and brought to you by Instagram.” (13:47)
- “They just don't care about meaningfully reducing harm to kids because it wouldn't hurt their bottom line to do it.” (09:38)
- Sarah Gardner:
- “[Meta says]…that doesn't represent the experience of kids on the platform. That's always their fallback... But let’s ask them. So we polled teens… The results were really scary because… their experiences have not improved very much from the previous environment of just Instagram.” (32:26–35:00)
- Nikit Petrossi:
- “It should not be normal for a teenage girl… to think that receiving a dick pic is just part of their day.” (13:47)
Important Timestamps
- 00:34–02:14: Arturo’s role and first exit from Facebook.
- 03:10–03:36: Daughter’s IG experience motivates return.
- 04:10–06:13: Statistics from internal teen safety research.
- 07:47–08:32: Meta’s removal of youth safety features.
- 12:10–12:22: Minimization tactics; whistleblowers forced to delete data.
- 13:47: “Largest scale sexual harassment of teens ever.”
- 17:22–18:47: Realities vs. marketing of Instagram Teen Accounts.
- 19:23–19:43: The mental toll of test accounts on researchers.
- 24:08–26:36: Search and recommendation loopholes for harmful content.
- 27:13–27:53: “Not interested” button ineffectiveness.
- 32:26–35:00: Sarah Gardner shares results of new teen survey.
- 36:41: Parent call to action—contact Adam Mosseri directly.
- 39:46–41:12: Arturo’s advice and concluding call for advocacy.
Final Takeaways
The episode makes it glaringly clear: Meta’s headline safety initiatives for teens are largely ineffective and present a dangerous façade. Vast numbers of young people are still exposed to unsolicited sexual material, bullying, violent content, and connections with predatory adults, while Meta minimizes, deflects, and fails to enact feasible fixes.
For parents:
- Rely on open dialogue and proactive support for your teens.
- Join collective advocacy—voicing concerns to Meta and legislators is crucial.
- Don’t believe the hype: “Teen Accounts” do not mean “safe accounts.”
For listeners wanting to engage:
- Find resources and survey links at the Heat Initiative.
- Directly message Adam Mosseri and amplify calls for transparency and actual child safety changes.
