Podcast Summary:
New Books Network
Episode: Tamar Mitts, "Safe Havens for Hate: The Challenge of Moderating Online Extremism" (Princeton UP, 2025)
Date: October 31, 2025
Host: Dr. Miranda Melcher
Guest: Dr. Tamar Mitts
Overview
This episode features an in-depth discussion with Dr. Tamar Mitts about her new book, Safe Havens for Hate: The Challenge of Moderating Online Extremism. The book dissects the persistent problem of extremist content online, explores why it has been so difficult to curtail, and examines how both governments and social media platforms are struggling—and sometimes failing—to address the spread of violent hate and extremist ideology on the internet. Dr. Mitts’ research highlights the unintended consequences of current moderation policies and offers insight into the adaptive strategies of extremist groups as they navigate the ever-evolving online ecosystem.
Main Discussion Points & Insights
1. Dr. Mitts’ Background and Motivation (03:07–04:35)
- Dr. Mitts is an associate professor at Columbia’s School of International and Public Affairs with a 15-year focus on the online information environment.
- She became intrigued by society’s recurring failure to address problems like online hate, violence, and discrimination.
- The core question: Why do solutions remain elusive despite technological and regulatory advancements?
“It seems to be, on one hand, an old issue and an old problem, but on the other hand ... raising new questions on what to do in a world where we are more connected than ever before.” – Dr. Tamar Mitts (03:33)
2. Key Research Questions & Focus (04:35–07:06)
- The book narrows in on extremist groups—specifically those designated by states or the international community as promoting violent ideologies.
- Dr. Mitts documents how these groups are banned from platforms, observes their creative adaptations, and notes the lack of a “multi-platform perspective” in current approaches.
“My motivation...was that we are kind of looking at the problem in the wrong way. And that’s kind of the underlying motivation for the book: to explain why and perhaps how we should be thinking about it.” (06:29)
3. Defining Extremism and Harmful Content (07:06–11:13)
- Dr. Mitts resists normative stances, instead surveying traditions:
- Deviation from majority norms (idea-centric view)
- Intergroup conflict (harm to “other” groups)
- Violence-centric perspectives
- She emphasizes the fluidity and politicization of terms:
“Definitions are so key...I’m not coming here with a normative stand. But I just want to examine, once we tag an actor as extremist, how can we understand their behavior...” – Dr. Tamar Mitts (10:35)
4. How Democratic Governments Regulate Extremism Online (11:51–15:27)
- Most democracies focus regulatory efforts on large platforms (defined by user-base thresholds).
- Smaller platforms usually receive lighter or no regulation, creating a differentiated landscape.
- This size-based differentiation shapes uneven enforcement and, unintentionally, creates “safe havens.”
“What you see in all the cases...they basically tend to differentiate platforms based on how big they are.” (13:23)
5. The Multi-Platform Challenge: Platform Policies & Extremist Adaptation (16:12–19:14)
- Variation in moderation standards is rooted in user base, features, and legal pressures—especially in the US (Section 230).
- Larger platforms have more restrictive rules and invest more in moderation; smaller or mid-sized platforms are more lenient.
- Result: an “uneven moderation ecosystem” where extremist actors can play one platform off another.
- Data consistently shows more restrictive moderation correlates with platform size.
6. Evasive Tactics by Extremist Groups on Major Platforms (19:42–24:13)
- To remain on mainstream platforms, groups alter behaviors and use obfuscation:
- Manipulating keywords, altering images/logos, adding “noise” to avoid detection.
- Motivation: maximize access to large audiences, even at the cost of adapting language/media.
“They want to be on these platforms. So it’s kind of worth going through all of these efforts...to try to evade the classifiers so they can stay.” – Dr. Tamar Mitts (23:28)
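The evasion dynamic above can be illustrated with a deliberately simplified sketch: a naive keyword-based filter and the kind of character-level “noise” Dr. Mitts describes. This is a hypothetical toy example of the general tactic, not code from the book or from any platform’s actual moderation system, and the blocklist words are placeholders.

```python
import re

# Illustrative blocklist (placeholder terms, not real moderation rules).
BLOCKLIST = {"attack", "recruit"}

def naive_filter(text: str) -> bool:
    """Return True if the text contains a blocklisted keyword."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return any(tok in BLOCKLIST for tok in tokens)

def obfuscate(text: str) -> str:
    """Insert a 'noise' character inside flagged words, mimicking
    the keyword-manipulation tactic described above."""
    for word in BLOCKLIST:
        text = text.replace(word, word[0] + "*" + word[1:])
    return text

msg = "join us and recruit others"
print(naive_filter(msg))             # True: the plain message is caught
print(naive_filter(obfuscate(msg)))  # False: the obfuscated copy slips through
```

Real classifiers are far more robust than a keyword list, but the sketch captures why staying on a mainstream platform is an arms race: each detection improvement invites a cheap counter-adaptation.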
7. Migrating to Alternative Platforms: The Logic of Where Extremists Go (24:40–27:39)
- When groups are banned, migration is not random.
- Core logic: maximize both audience size and the freedom to communicate authentically.
- Platforms like Telegram have become key hubs—large enough for reach, lenient enough for expression.
- Empirical findings show migration tends toward the “middle” of the size/leniency spectrum.
“Where they go is to these platforms that allow them to maximize both audience reach as well as...authenticity.” (25:32)
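The migration logic can be sketched as a toy optimization over two attributes per platform, audience reach and moderation leniency. The platform names and numbers below are illustrative assumptions, not data from the book; the point is only that a multiplicative trade-off favors the middle of the spectrum, as the empirical findings suggest.

```python
# Hypothetical platforms scored on normalized audience size and leniency.
platforms = {
    "large-mainstream": {"audience": 1.00, "leniency": 0.1},
    "mid-sized":        {"audience": 0.50, "leniency": 0.7},
    "tiny-fringe":      {"audience": 0.05, "leniency": 1.0},
}

def migration_value(p: dict) -> float:
    """Toy objective: reach multiplied by the freedom to speak 'authentically'."""
    return p["audience"] * p["leniency"]

best = max(platforms, key=lambda name: migration_value(platforms[name]))
print(best)  # the mid-sized platform wins the reach/leniency trade-off
```

Under these assumed scores, the large platform is too restrictive and the fringe platform is “shouting to the void,” so the mid-sized option maximizes the product, mirroring why hubs like Telegram become attractive.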
8. Does Ideology Affect Platform Choice? (28:15–30:31)
- Though far-right and jihadi groups differ, the modus operandi is similar: seek the best blend of reach and leniency.
- Differences emerge in which platforms are favored, but both go through adaptation and convergence on environments that suit their communication needs.
- Anecdote: Far-right groups tried alternative platforms but abandoned those with negligible audience (“shouting to the void”).
9. The Individual User Experience: Micro-Level Impacts of Moderation (31:12–36:25)
- Many individuals have experienced account/content bans amid heightened moderation efforts (esp. post-2020).
- Emotional responses included anger, frustration, and a sense of unfairness. Banned users often migrated to lenient “alternative platforms,” where they encountered echo chambers and more aggressive recruitment by extremist groups.
- This migration fosters insular communities, sometimes increasing engagement with extremist content.
“Many of those who got banned from the mainstream platforms became really, really angry...and they used the alternative platform to express that.” (33:58)
10. Policy Solutions: The Promise and Pitfalls of Standardization (37:03–41:15)
- There have been international efforts towards unified moderation (e.g., Christchurch Call, Global Internet Forum to Counter Terrorism).
- In practice, standardization is elusive; most policy is still platform-by-platform.
- Central caveat: Centralized tools can be misused to suppress unpopular but legitimate opinions.
“If we start centralizing for real...we may end up with tools that could be easily misused to silence unpopular opinions or opinions that maybe somebody who has the power doesn’t like.” – Dr. Tamar Mitts (40:22)
11. Reflections and Future Research Directions (41:35–45:09)
- Dr. Mitts suggests further research should safeguard against the misuse of moderation tools.
- Her next project shifts focus to state actors and their use of online information operations in elections and conflicts.
“We went from an online environment ... very organically...to a very professional sort of world where the content that is being produced can be very strategically created, curated...” (43:11)
Notable Quotes & Memorable Moments
- On definitions:
  “I always try to caveat and say, I’m not coming here with a normative stand. But I just want to examine, once we tag an actor as extremist, how can we understand their behavior in that context?”
  — Dr. Tamar Mitts (10:31)
- On migration logic:
  “It’s definitely not random...in the book I kind of go back to this logic that what these groups really want is to maximize both the audience reach...and authenticity.”
  — Dr. Tamar Mitts (24:41)
- On platform policy fragmentation:
  “The way we have been thinking about the issue of extremism and violence and hate online is kind of platform by platform...we’re actually missing out on the problem that there needs to be kind of a more standardized solution to this...”
  — Dr. Tamar Mitts (39:18)
- On the dangers of centralization:
  “We may end up with tools that could be easily misused to silence unpopular opinions or opinions that maybe somebody who has the power doesn’t like.”
  — Dr. Tamar Mitts (40:23)
Key Timestamps
| Timestamp | Topic |
|-----------|-------|
| 03:07 | Dr. Mitts introduces herself & motivation for the book |
| 07:25 | Defining extremism & harmful content |
| 11:51 | Democratic governments’ regulatory approaches |
| 16:12 | Impact of platform differences on extremist group behavior |
| 19:42 | Extremists adapting content to stay on major platforms |
| 24:40 | Logic of migration to alternative platforms |
| 28:15 | Effect of ideology on migration choices |
| 31:12 | Individual-level user experience and radicalization |
| 37:03 | Policy implications: standardizing moderation |
| 41:35 | Implications and future research directions |
| 43:11 | Next research area: State actors and information operations |
Conclusion
Dr. Tamar Mitts’ research draws attention to the complex, multi-layered challenge of moderating online extremism. The interplay among government regulation, platform policies, and the adaptive behavior of groups and individual users shows that, without nuanced cross-platform strategies, extremist actors will continue to exploit emerging “safe havens.” Calls for unified moderation must reckon with both promise and peril: the risk that such tools may themselves become instruments of overreach or censorship. Dr. Mitts leaves listeners with a sense of ongoing, urgent inquiry, pointing toward the emerging frontier of state-directed influence and the evolving responses required to protect both safety and open discourse online.
