Episode Overview
Podcast: Front Burner (CBC)
Episode Title: Are teen social media bans a silver bullet?
Date: May 6, 2026
Host: Jamie Poisson
Guest: Taylor Owen (Beaverbrook Chair in Media Ethics and Communications, McGill University; member of the federal government’s Expert Advisory Group for Online Safety and AI Strategy Task Force)
Theme:
This episode critically examines recent and proposed bans on social media for those under 16 in Canada and abroad, interrogating whether such measures are truly effective or if they come with significant caveats and unintended consequences. The conversation explores how bans are being enforced, privacy and surveillance implications, alternatives to outright prohibition, and regulatory models for safer online spaces, with a close eye on the Australian experiment and the policy direction in Canada.
Key Discussion Points & Insights
1. The Policy Surge: Social Media/AI Bans for Teens
- Many jurisdictions, including Australia and Manitoba, have embraced or announced plans to ban social media and AI chatbot access for kids under 16.
- A strong majority of Canadians (75%) support the idea ([00:37]–[03:08]), creating political momentum at both provincial and federal levels.
- The federal government is seriously considering some form of age restriction as part of upcoming online harms legislation ([03:17]–[05:01]).
Quote:
“The intent of the government’s pretty clear that they want some form of age restriction included in their package of policies to address online safety.” — Taylor Owen [04:53]
2. Australia’s Ban: A Work in Progress
- Australia’s ban, in force since December for kids under 16, requires platforms to prevent underage account creation or face hefty fines (up to $50 million) ([05:39]–[08:19]).
- Initial enforcement is patchy:
- Roughly 4.7 million accounts closed, but this likely overstates the real compliance.
- The regulator claims roughly 70% effectiveness, but parent surveys and independent estimates suggest at best 30% of underage users are actually kept off the platforms (many use VPNs or other workarounds).
- Platforms have generally been slow and reluctant to enforce the rules robustly.
Quote:
“I think it’s kind of a mixed record, frankly. We’ve closed down a bunch of accounts and kids...are perhaps unsurprisingly, finding a whole host of ways of getting around it. And part of that is because it doesn’t look like the platforms are trying very hard to implement it.” — Taylor Owen [07:44]
- Enforcement methods are controversial:
- Options range from government ID uploads and facial scans to third-party verification and shifting responsibility to app stores; each carries different privacy and accuracy trade-offs ([09:03]).
Quote:
“At some point you have to know who’s a kid or not. And you have to know within some degree of accuracy.” — Taylor Owen [09:53]
3. Is the Ban Actually Effective & Beneficial?
- The ban might give parents valuable leverage (“It gives them an excuse to use when trying to pry that phone away from their 13 year old.” — Jamie Poisson [11:46]), but it largely shifts responsibility and may fail to protect vulnerable youth.
- Risks include driving youth into less regulated, potentially more harmful online spaces (e.g. Discord, gaming chatrooms) ([13:34]–[15:44]).
- Policy must be paired with an empowered and independent regulator; otherwise, bans alone are essentially toothless ([10:50]–[11:46]).
Quote:
“A ban on its own just fundamentally won’t work. But Australia has this regulator...” — Taylor Owen [11:17]
4. Privacy, Surveillance, and Digital Identity Concerns
- Proper enforcement may require intrusive age verification for all users, adults included.
- Canada lacks strong digital privacy laws, exacerbating fears of surveillance and data exploitation ([18:00]–[20:46]).
- There are alternative, less privacy-invasive options (probabilistic age estimation; third-party verification; digital IDs where platforms never learn identifying details).
Quote:
“Jimmy Wales is totally right. These companies are collecting mass amounts of data... What would be needed exactly in order to comply with this kind of age limitation...depends on how certain we expect them to be.” — Taylor Owen [18:24]
5. “Punishing Users, Not Products”: Can Social Media Be Made Safe?
- A blanket ban assumes platforms cannot be made safe, contradicting models used in the UK and proposed in Canada: platforms can be required to make design changes that protect users, especially children ([21:06]–[23:27]).
- Design requirements could include:
- No data collection on children
- No infinite scroll
- No contact between kids and non-friends/adults
- Stricter transparency and risk mitigation obligations via an “age appropriate design code” ([23:27]–[26:02])
Quote:
“These products can be designed to be safe, but...the companies are not choosing to. They’re prioritizing other incentives...over the safety of the users, particularly the kids users.” — Taylor Owen [21:28]
6. The Case for Conditional or Temporary Bans
- Time is a factor: best-practice regulation takes years to implement, but parents want protection now ([23:27]–[26:02]).
- Owen suggests a temporary access restriction, lifted only when platforms prove (via compliance with regulation) that products are safe for minors ([26:02]–[27:20]).
Quote:
“Maybe what we need to do here is temporarily limit access...until the companies can show via the regulation that they’re safe.” — Taylor Owen [25:45]
- Permanent bans imply platforms could never be made safe, which Owen argues is untrue.
7. Beyond Children: Broader Digital Safety for All
- A teens-only ban ignores risks for adults; design harm doesn’t suddenly disappear on someone’s 16th birthday ([30:01]–[31:38]).
- Regulatory models should provide robust protections for all users, while imposing stricter safeguards for children.
Quote:
“The idea that something magically happens when you turn 16 and ... all of a sudden we’re going to give a generation of kids access to these tools ... that have no regulations on them whatsoever makes no sense at all.” — Taylor Owen [31:20]
8. AI Chatbots Enter the Conversation
- Manitoba and others have bundled bans on AI chatbots with those on social media ([32:05]–[34:04]).
- Owen urges caution: there is limited evidence so far of AI harming children, and broad bans risk cutting off a wide range of general-purpose technology, including its beneficial uses.
- Emphasizes that AI products, like social media, should be subject to age-appropriate design, risk assessment, and regulation — but not necessarily an outright ban.
Quote:
“A quick ban on [AI] now feels premature to me.” — Taylor Owen [33:45]
Notable Quotes & Memorable Moments
- ”A ban on its own just fundamentally won’t work.” — Taylor Owen [11:17]
- “At some point you have to know who’s a kid or not. And you have to know within some degree of accuracy.” — Taylor Owen [09:53]
- “Parents are at their wit’s end... I think these are much bigger problems. They’re societal problems and no parent can push back against this on their own.” — Taylor Owen [12:12]
- “We’ve created something nefarious out of these tools...and if kids think they’re going to get in trouble for using them, they’re going to be less likely to talk about them for sure.” — Taylor Owen [13:44]
- “We have nothing. Which is why the push to the Online Harms act is so critical.” — Taylor Owen [15:47]
- “At least the main platforms would not comply, but if they choose not to, then we know the products are safe for the kids that are using them. I mean, same effect.” — Taylor Owen [29:42]
- “Do you want to be in a world where we’re saying you can’t have infinite scroll for adults? I’m not totally sure I’m comfortable with that... With kids, I probably am.” — Taylor Owen [30:23]
Timestamps for Key Segments
- Background on bans & Canadian policy momentum: [00:37]–[05:01]
- Australia’s rollout and enforcement methods: [05:39]–[09:03]
- Challenges of age verification: [09:03]–[10:14]
- Discussion on regulator power and parent leverage: [10:50]–[13:14]
- Unintended consequences & risk displacement: [13:34]–[15:44]
- Canada’s current (lack of) protections: [15:44]–[15:47]
- Privacy and surveillance state anxieties: [17:05]–[20:46]
- Design-based safety regulation vs. bans: [21:06]–[27:20]
- Adult/child regulatory divide: [30:01]–[31:38]
- AI chatbots and the expanding scope of bans: [32:05]–[34:04]
Overall Takeaway
This episode of Front Burner delivers a critical, nuanced look at the wave of age-based social media bans. Taylor Owen, drawing on international evidence and regulatory expertise, argues that bans alone are blunt instruments fraught with enforceability, privacy, and displacement risks. Robust online safety requires independent regulators and product-focused reform: platforms can be made safer for young people, but only with proper enforcement, design obligations, and data privacy rules — not through silver bullet solutions.
For listeners seeking to understand the real dimensions of digital safety for youth, this episode is a must.