Detailed Summary of “Social Media’s Original Gatekeepers On Moderation’s Rise And Fall”
On with Kara Swisher
Host: Kara Swisher
Guests:
- Del Harvey: Former Head of Trust and Safety at Twitter
- Dave Wilner: Former Head of Content Policy at Facebook
- Nicole Wong: First Amendment Lawyer, Former VP and Deputy General Counsel at Google, Twitter's Legal Director for Products, and Deputy Chief Technology Officer during the Obama Administration
Release Date: January 27, 2025
Introduction
In this episode of On with Kara Swisher, host Kara Swisher engages in a deep dive with three pivotal figures in social media content moderation: Del Harvey, Dave Wilner, and Nicole Wong. These guests, who were at the forefront of designing safety and content policies at platforms like Twitter, Facebook, and Google, offer invaluable insight into the evolution, challenges, and future of trust and safety in the digital age.
1. The Shift in Content Moderation Policies
Kara opens the discussion by highlighting Meta's recent decision to eliminate fact-checkers in favor of Community Notes, a move that has sparked significant debate. Del Harvey provides a critical analysis of this shift:
“They're turning off the dampening on that. That feels to me like a much bigger deal...”
— Del Harvey [05:20]
She emphasizes that removing algorithmic dampening of misinformation will significantly alter how information flows on the platform, potentially amplifying harmful content.
Nicole Wong adds to the conversation by critiquing Meta's strategic realignment to appease specific audiences:
“If you want the other Internet that we started with, we have to change the goal.”
— Nicole Wong [09:25]
She argues that the foundational pillars of personalization, engagement, and speed are being compromised to create echo chambers, thereby transforming platforms into propaganda machines rather than venues for diverse communication.
2. Foundations of Trust and Safety in Social Media
The guests recount the early days of content moderation, offering a glimpse into the foundational decisions that shaped today's policies. Nicole Wong shares her experience at YouTube in 2006 when confronted with graphic content depicting Saddam Hussein's execution:
“We took down the one of the corpse. We kept the one of the execution.”
— Nicole Wong [17:18]
This decision underscored the delicate balance between preserving historical significance and removing gratuitous violence.
Dave Wilner reflects on Facebook's initial stance on controversial content, such as Holocaust denial groups:
“The initial reluctance to sort of get into the initial stance on Holocaust denial when we took it was downstream of an intuition that we frankly weren't capable of figuring out how to reliably police what was true.”
— Dave Wilner [22:45]
He acknowledges the difficulty of enforcing truth-based policies without sufficient tools and expertise.
3. Gamergate and Its Impact
The conversation shifts to the 2014 Gamergate controversy, a pivotal moment that significantly influenced content moderation strategies. Kara Swisher asks the guests to elaborate on how Gamergate reshaped their approach to trust and safety.
Dave Wilner discusses the limitations of early content moderation tools:
“The same way that certain policies may have existed at Facebook because there was no way to operationalize them, similar ones certainly existed at Twitter.”
— Dave Wilner [30:22]
He underscores the necessity of proactive monitoring and product design in mitigating abuse before it escalates to harmful levels.
4. Responsibility and Platform Influence
The devastating Rohingya genocide in 2017 serves as a stark example of the profound responsibilities social media platforms hold. Nicole Wong connects this tragedy to the broader implications of content moderation:
“What we have is a moment where it became broadcast, where it became about propaganda and pushing people into a certain direction that was very, very toxic and harmful, and had terrible consequences on the ground.”
— Nicole Wong [33:11]
She urges platforms to recognize their obligation to intervene in the face of orchestrated hate speech that can incite real-world violence.
5. The Rise of AI in Content Moderation
As artificial intelligence (AI) becomes increasingly integral to content moderation, the guests discuss its potential and pitfalls. Dave Wilner is optimistic about AI's role when combined with human oversight:
“AI is a tool like any other. It depends on how you use it.”
— Dave Wilner [52:44]
He advocates for a balanced approach where AI handles scalability while humans ensure accuracy and nuance in moderation decisions.
6. Ownership and Future of Social Media Platforms
Elon Musk's acquisition of Twitter (rebranded as X) introduces a new dynamic in content moderation. The guests analyze the implications of Musk's leadership and the platform's evolving policies.
Del Harvey critiques Musk's strategy and its alignment with broader business interests:
“It doesn't seem to me like Elon necessarily makes plans. And whatever it is that Facebook's gambit is here seems to basically be a bet that maybe Trump will be mean to Europe for them and hopefully then somehow they won't have to do this.”
— Del Harvey [47:43]
Nicole Wong voices concerns over TikTok's potential acquisition by Musk, highlighting the platform's existing issues:
“If we want to have a conversation about, like, propaganda and misinformation spreading on social media, let's have that conversation. But not about TikTok.”
— Nicole Wong [56:50]
7. The Role of Platforms in Democracy and Society
The discussion broadens to encompass the profound influence social media platforms wield over democratic processes and societal discourse. Dave Wilner warns of the dire consequences if marginalized groups lose protections:
“There are only so many people who they can appeal to in terms of this sort of pro-fascism, anti-woke, United States number one opinion of things. And the EU is just not going to be chill with this at all.”
— Dave Wilner [58:36]
This highlights the precarious balance between free expression and the suppression of harmful rhetoric.
8. Looking Ahead: Predictions and Philosophical Insights
In the concluding segments, the guests ponder the future trajectory of social media. They discuss the potential fragmentation of platforms and the increasing role of AI in shaping user experiences.
Del Harvey offers a philosophical reflection inspired by Paul Virilio's concept of total visibility:
“It's going to be a dream of escape, because we're not, I don't think, prepared for that much awareness.”
— Del Harvey [66:43]
This underscores the existential challenges posed by pervasive surveillance and data transparency.
Conclusion
Kara Swisher wraps up the episode by appreciating the thoughtful contributions of her guests, emphasizing the critical role that trust and safety professionals play in navigating the complex landscape of social media. The conversation serves as a poignant reminder of the ongoing struggle to balance free expression with the imperative to safeguard users from harm, signaling the need for continued vigilance and innovation in content moderation strategies.
Notable Quotes:
- Del Harvey [05:20]: “They're turning off the dampening on that. That feels to me like a much bigger deal...”
- Nicole Wong [09:25]: “If you want the other Internet that we started with, we have to change the goal.”
- Nicole Wong [17:18]: “We took down the one of the corpse. We kept the one of the execution.”
- Dave Wilner [22:45]: “The initial reluctance to sort of get into the initial stance on Holocaust denial when we took it was downstream of an intuition that we frankly weren't capable of figuring out how to reliably police what was true.”
- Dave Wilner [30:22]: “The same way that certain policies may have existed at Facebook because there was no way to operationalize them, similar ones certainly existed at Twitter.”
- Nicole Wong [33:11]: “What we have is a moment where it became broadcast, where it became about propaganda and pushing people into a certain direction that was very, very toxic and harmful, and had terrible consequences on the ground.”
- Del Harvey [47:43]: “It doesn't seem to me like Elon necessarily makes plans. And whatever it is that Facebook's gambit is here seems to basically be a bet that maybe Trump will be mean to Europe for them and hopefully then somehow they won't have to do this.”
- Dave Wilner [52:44]: “AI is a tool like any other. It depends on how you use it.”
- Nicole Wong [56:50]: “If we want to have a conversation about, like, propaganda and misinformation spreading on social media, let's have that conversation. But not about TikTok.”
- Dave Wilner [58:36]: “There are only so many people who they can appeal to in terms of this sort of pro-fascism, anti-woke, United States number one opinion of things. And the EU is just not going to be chill with this at all.”
- Del Harvey [66:43]: “It's going to be a dream of escape, because we're not, I don't think, prepared for that much awareness.”
