Podcast Summary
Podcast: Ask Haviv Anything
Episode: 65: The Unseen Editors Rigging the Information War, with Ashley Rindsberg
Date: December 2, 2025
Host: Haviv Rettig Gur
Guest: Ashley Rindsberg (journalist and researcher whose investigative reporting focuses on digital media manipulation)
Overview
This eye-opening episode delves into how small, coordinated groups of online editors and activists are able to dominate major information platforms—particularly Wikipedia and Reddit—to shape public understanding of controversial topics, especially regarding Israel, Zionism, and the Israel-Palestine conflict. Ashley Rindsberg, whose investigative pieces have exposed systematic bias and manipulation on Wikipedia and Reddit, joins Haviv Rettig Gur to unpack these structural vulnerabilities, their far-reaching consequences for the information ecosystem, and the growing, underappreciated risks posed by generative AI relying on compromised knowledge sources.
Key Discussion Points & Insights
1. The Illusion of Neutrality: Wikipedia’s “Crowdsourcing” Problem
- Wikipedia projects itself as neutral because it is ‘crowdsourced’, but in reality, small groups of highly trained, ideologically motivated editors—what Ashley calls the “Gang of 40”—have captured and locked down highly controversial, high-profile articles, especially those related to Israel and Zionism.
- [07:16] Ashley: “It’s about three dozen plus who have worked tirelessly for years in coordination…to manipulate the most important topics...Anything having to do with Zionism, for sure.”
- [10:38] Haviv: “These are phenomenally productive Wikipedia editors…10,000 edits per person…almost the entirety of their Wikipedia activity.”
- Wikipedia information is not siloed: it is piped directly into Google’s knowledge panels and, critically, into AI LLMs (like ChatGPT and Claude), so Wikipedia’s biases flow downstream into how billions of people interact with online information.
- [07:16] Ashley: “You might not go to Wikipedia, but Wikipedia comes to you...billions of other people, without anyone actually realizing.”
- Entryism and Rule Manipulation: Only a handful of editors is needed to dominate a controversial article; by leveraging parliamentary-style rules, lockouts (like the ‘moratorium’), and hierarchies of editorship, they keep their version the only one readers see.
- [16:26] Ashley: "...even two or three on a given article can completely dominate that article."
- [23:58] Ashley: “They locked it. They put a freeze on that opening section. They call it a moratorium….They freeze it for a year so no other editor can touch that sentence.”
- Bias Always Flows One Way: Despite the assumption that manipulation would come from all sides, there are no known examples of organized right-leaning campaigns successfully capturing controversial Wikipedia content—bans, lockouts, and the platform’s dominant “GASP” worldview (Global, Academic, Secular, Progressive) make the left/progressive bias self-reinforcing.
- [21:22] Ashley: “It almost always works in one direction. [Wikipedia] has a worldview.”
- [29:58] Ashley: “Their head of revenue is making about the same at around 400…So the idea that they’re understaffed is not the case…it is about not caring.”
2. Wikipedia’s Influence in the Age of AI
- Wikipedia’s privileged relationship with Google gives it unparalleled SEO value and exposure, and its content serves as foundational training data for AI models, entrenching editorial bias into the “knowledge” dispensed by LLMs.
- [39:42] Ashley: “There are certain select users on the site…called check users, who have the ability to track IP addresses…we don’t know who those people are.”
- [42:34] Ashley: “Wikipedia had a complete and total monopoly on topical information online…a knowledge cartel between two organizations locked together [Wikipedia & Google].”
3. Reddit as the Next Front: Coordinated Propaganda Networks
- How Reddit Works: Communities (“subreddits”) are controlled by volunteer moderators, with up/downvoting driving visibility. Its structure makes it attractive for data mining by AI companies, and its data is sold widely since Reddit isn’t building its own AI model.
- [43:06] Ashley explains Reddit's mechanics.
- Coordinated Manipulation, Not Organic Activism: After October 7, there was a marked, orchestrated amplification of pro-terror, anti-Israel, and anti-Semitic messaging across Reddit, Discord, X, and Instagram, with content often translated and laundered directly from official Hamas and Hezbollah Telegram channels.
- [48:24] Haviv: “Literally goes to these Hamas, actual Hamas [Telegram channels], correct?”
- [49:53] Ashley: “One of the top boosters of Hamas online today is called Zay Squirrel…these are not random users.”
- [50:09] Haviv: “How many people are we talking about?...What does it look like on Reddit?”
- [50:17] Ashley: “Dozens of core group members, including a lot of moderators….What they’ve done is infiltrated mainstream, non-political communities…that is even more effective than just posting crazy screedy stuff into their own communities.”
- Insider Coordination, Platform Inaction: Internal logs and Discord backups (obtained by whistleblowers who infiltrated the groups) show explicitly planned campaigns by a small cadre, but Reddit management, when notified, acted only to quarantine or restrict Jewish users—creating a digital “ghetto” and failing to stop the problem.
- [54:08] Ashley: “Reddit...instituted...policy changes for the Jewish users that...locked them into their own communities...what these sources describe as a digital ghetto.”
- [58:17] Ashley: "The problem is when you’re not taking that kind of action when it comes to terror laundering pipelines."
4. Flooding the Zone: The AI Disinformation Age
- The failure of information hygiene on Wikipedia and Reddit scales massively into AI-generated answers, meaning that propagandists with modest numbers and resources shape the majority of the answers people around the world receive to questions about controversial events.
- [60:57] Haviv: “If I’m doing it [using ChatGPT for fact-checking], everybody’s doing it...That’s now being trained on those Reddit platforms.”
- [62:45] Ashley: “You can ask [LLMs] a question...you’re going to get posts from Reddit by this group of r/palestine activists...it pulls directly from those posts into your ChatGPT…and that’s the core issue here.”
- The Recursive Propaganda Problem: Bad information gets cited by AIs, then echoed and laundered into still more sources, creating “fantasies built upon fantasies built upon fantasies,” as illustrated by the Kurzgesagt science channel example.
- [64:00] Haviv shares the example of how AIs hallucinate sources, further compounding fake information downstream.
5. The Big Picture—and What Now?
- We are more easily manipulated than ever: The breakdown in trust in online information is systemic; small, committed groups or state actors can dominate platforms and, by extension, the AI training data those platforms feed.
- [66:30] Ashley: “It's about staying informed, aware of what's happening here…this is now incumbent upon us to have that same level of awareness that the threat is real.”
- Solutions Are Elusive: An immediate fix is unlikely; awareness and investigative exposure are the crucial first steps. The only genuine long-term remedy may come from the emergence of true competitors (e.g., Elon Musk’s “Grokipedia”) and the breakup of the Google-Wikipedia “knowledge cartel.”
- [37:41] Ashley: “The biggest thing I think that we can do right now is what we’re doing…talking about this. People just don’t know about it.”
- [42:39] Ashley: “The biggest backdoor into the information ecosystem by far. It’s...the easiest to exploit vulnerability with by far the biggest effect...”
Notable Quotes & Memorable Moments
- On Wikipedia’s systemic vulnerabilities:
- [12:21] Ashley: “If you or I even were to create an account today on Wikipedia...I promise you it will not stick. And if you tried again, you’d be outmaneuvered, you’re going to be exhausted, and throw up your hands in the air.”
- [23:58] Ashley: “They locked it. They put a freeze on that opening section. They call it a moratorium…They freeze it for a year so no other editor can touch that sentence they’ve manipulated.”
- The knowledge cartel:
- [42:34] Ashley: “There was no competitor...a knowledge cartel between two organizations locked together [Wikipedia and Google].”
- On the downstream dangers:
- [62:45] Ashley: “That’s the core issue here, is that there is no filter mechanism for what’s coming through from Reddit or Wikipedia…propagandists know this.”
- What it means for the future:
- [66:30] Ashley: “ChatGPT and the LLMs are a wonder…we can get caught up in the awe...but you have to pay attention to what's actually happening because…it could be as simple as miseducating your kids accidentally without knowing.”
Timestamps for Important Segments
- [03:58] — Why Wikipedia fails to accurately represent mainstream Zionist or Israeli perspectives
- [07:16] — How small, ideologically motivated groups capture information architecture
- [16:26] — How little it actually takes (in numbers/effort) to seize control of key topics
- [23:58] — The “moratorium” and procedural abuses to lock out alternate views
- [29:58] — The role and priorities of the Wikimedia Foundation; the myth of being understaffed
- [34:35] — The claim to crowdsourcing: How Wikipedia has become a political tool “under the banner of neutrality”
- [39:42] — The hidden power structure within Wikipedia—no transparency about admins/‘check users’
- [42:34] — The monopoly: Wikipedia, Google, and the downstream power over global knowledge
- [43:03] — What Reddit is and how its structure enables exploitation
- [46:31] — The cross-platform manipulation network (Reddit, Discord, Telegram, X, etc.)
- [50:17] — The infiltration of mainstream subreddits by coordinated anti-Israel moderators
- [54:08] — How Reddit’s management responded to whistleblowers—creating a “digital ghetto” for Jewish users
- [62:45] — Real-world downstream effect: AI models laundering and amplifying manipulated content
- [66:30] — Final thoughts: Technology has made us more manipulable than ever
Takeaways
- Wikipedia and Reddit, widely treated as reliable, crowdsourced platforms, are highly susceptible to capture by small, ideologically motivated groups through procedural manipulation and organizational negligence or complicity.
- This compromised information—especially on hot-button issues—is then amplified globally by generative AI, compounding misinformation on an unprecedented scale.
- Solutions are elusive; public awareness, journalism, and technological competition (breaking knowledge monopolies) are the most urgent countermeasures.
- The threat is massive and goes far beyond Israel or any single issue—this is a global phenomenon with serious ramifications for democracy, education, and individual understanding.
Closing Note:
Ashley Rindsberg’s work underscores that we cannot take the neutrality or authority of online information platforms for granted—especially as AI increasingly mediates our access to knowledge. The fight for truthful, representative information is both technical and political, and it now requires vigilance from every digital citizen.
[End of Summary]
