Podcast Summary: "How to Get Paid to Polarize on TikTok"
Podcast: The Tech Policy Press Podcast
Host: Justin Hendrix
Guests: Carlos Hernández Echevarría (Associate Director, Fundación Maldita), Marina Sacristán Hidalgo (Public Policy Officer, Maldita)
Date: February 22, 2026
Overview
This episode examines an investigation by the Spanish nonprofit Maldita into TikTok's role in hosting, and inadvertently financing, polarizing AI-generated protest videos in at least 18 countries. The discussion covers Maldita's investigative methods, the economic incentives driving disinformation on TikTok, platform policy shortcomings, and the challenges of ensuring platform accountability under the European Union's Digital Services Act (DSA).
Key Discussion Points & Insights
1. Background on Maldita and Its Mission
- Maldita’s Origins & Evolving Focus:
- Registered as a nonprofit in Spain since 2017, Maldita began with fact-checking but quickly broadened its mission to address education, engineering, public policy, and platform accountability.
- The organization believes that understanding and investigating the roles and responsibilities of major platforms is essential for information integrity and democratic safety.
- Carlos Hernández Echevarría:
“We needed to do many other things, including education, engineering, and particularly public policy, in order to get the impact we were pursuing.” (01:36)
2. Methodology of the Investigation
- Uncovering the Scheme:
- The investigation started when Maldita noted suspiciously staged, AI-generated protests in Spain on TikTok that hadn’t occurred in real life.
- The team realized the scale was far larger, involving similar “protests” across other European nations (Italy, Germany, the UK, etc.).
- They systematically collected and annotated account data to track coordination and scale.
- Marina Sacristán Hidalgo:
“We started annotating all of them to see kind of the impact and tried to investigate what the reason could be for them to... play along with this type of content.” (02:48)
- AI Detection & Forensics:
- Some creators left AI watermarks (e.g., Sora, Gemini) visible, making detection simple; others blurred the marks or repurposed the content on platforms where AI labeling was absent.
- Marina:
“For this investigation... some of the creators... didn’t really care that you knew it was AI, so they were just putting out the content.” (05:22)
3. Scale of the Phenomenon
- Numbers at a Glance:
- Over two months, Maldita identified 550 TikTok accounts across 18 countries.
- They catalogued more than 5,800 AI-generated protest videos.
- Inclusion criteria required that a video be (a) AI-generated and (b) focused specifically on protests, filtering out other artificial but less directly impactful content. (07:15)
- Not Exhaustive:
- The researchers stress that their numbers likely undercount the true scale, as their goal was to expose the issue rather than to produce a full census.
4. The Monetization Incentive & Business Model
- Economic, Not Ideological, Motivation:
- The key driver for these coordinated campaigns is financial gain, not political agenda.
- Creators game TikTok's monetization criteria by using VPNs to spoof eligible countries, running multiple accounts, and openly selling monetized accounts.
- Carlos:
“It's such a clear indicator of a problem that would never occur if not for the failed policies of a platform” (09:16)
“They weren’t into this kind of content per se. It's just that they found that it's the content that TikTok's algorithm wants and rewards...” (09:16)
- Tactics Used:
- Use of VPNs to access both AI generation tools and TikTok’s revenue share program.
- Bulk account creation until achieving the 10,000-follower threshold required for monetization, then selling high-engagement accounts.
- Marina:
“So you really need to create an account that is based in one of those countries to be able to monetize after you reach a threshold of 10k followers.” (11:56)
- Account Variety:
- Accounts rotate among commercial, political, and innocuous content to appeal to algorithms and buyers.
- Some accounts openly advertised being for sale; others switched topics rapidly.
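The monetization gate described in the episode reduces to two conditions: an account "based in" an eligible country and at least 10,000 followers. The sketch below illustrates that logic; the country list is an invented subset for illustration, and the function is a simplification, not TikTok's actual eligibility check.

```python
# Illustrative sketch of the monetization gate described on the podcast:
# revenue sharing requires an account in an eligible country AND 10k+ followers.
# ELIGIBLE_COUNTRIES is a made-up subset, not TikTok's real list.
ELIGIBLE_COUNTRIES = {"US", "GB", "DE", "FR"}

def can_monetize(country_code: str, followers: int) -> bool:
    return country_code in ELIGIBLE_COUNTRIES and followers >= 10_000

# A creator in an ineligible country fails on location alone, which is why
# the investigation found VPNs used to spoof an eligible country; bulk-created
# accounts are then grown (or bought) past the follower threshold.
print(can_monetize("XX", 25_000))  # ineligible country
print(can_monetize("US", 9_500))   # below the 10k threshold
print(can_monetize("US", 10_000))  # passes the gate
```

This is also why Marina notes that enforcing the content rules upstream would break the scheme: a banned account never reaches the gate at all.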
5. Policy & Regulatory Analysis
- TikTok’s Stated Policies vs. Enforcement:
- The problem is not policy formulation but lax or inconsistent enforcement.
- TikTok policies ban AI-generated content that misleads about current events, but large-scale violations slip through.
- Carlos:
“The policies are not the problem in this case... it's just that there is this massive hole in enforcement.” (16:40)
- European Regulatory Context (DSA):
- The Digital Services Act (DSA) requires platforms to enforce their own rules, provide clear notice-and-action processes for content moderation, and facilitate research access to data.
- The DSA could force platforms to supply researchers with crucial data (e.g., the monetization status of accounts) and strengthen complaint processes for both over- and under-enforcement.
- Carlos:
“Article 20 of the DSA... doesn't only say when a platform has, was supposed to delete an account and did not, but it also says the other way around... when there is lack of enforcement, which is what we are discussing here.” (16:40)
- Marina:
“This example specifically is very obvious... It's money going from a platform to an individual which is going against the community guidelines.” (28:09)
- Importance of Researcher Access:
- Maldita highlights the critical need for independent access to platform data to understand the scale and monetization connections in disinformation phenomena.
- Carlos:
“Either we unblock, particularly in the European Union regulatory space, the possibilities that DSA brings for research in terms of data access, or we are going to be flying blind really quickly.” (21:33)
6. Broader Implications and Response
- Other Platforms & Monetization Risks:
- Maldita is monitoring monetization and financial incentives on other platforms (Meta, YouTube, etc.), but data access remains insufficient for robust research.
- Transparency around financial flows and ad placements is essential to tracking and curbing harmful incentive structures.
- Regulatory and Industry Response:
- The investigation has sparked significant interest from researchers and regulators, who see it as concrete evidence of business models fueling online polarization.
- Carlos:
“I don't think we have ever had such a strong response as we have had, you know, our last investigations, particularly on TikTok policies on this matter.” (26:56)
Notable Quotes & Memorable Moments
- “We are very much convinced that the role that the big platforms play on this information is so crucial that we need to be there and investigate them...” — Carlos, 01:36
- “We collected over 5,800 videos that all had the same characteristics.” — Marina, 07:15
- “Many of these people were probably running not only several accounts but also some of them, you know, with an eye of selling them online, some others just to keep producing content to be monetized.” — Carlos, 09:16
- “If the enforcement of that specific line in the community guidelines was actually taking place, they wouldn't even be able to monetize the content afterwards.” — Marina, 11:56
- “It's just that there is this massive hole in enforcement that... it's hard to think, like, how... this hasn't been spot by the platform.” — Carlos, 16:40
- “Either we unblock... the possibilities that DSA brings for research in terms of data access, or we are going to be flying blind really quickly...” — Carlos, 21:33
Timestamps for Key Segments
- Introduction to Maldita and investigation context: 00:12–02:26
- Investigation methodology and tools: 02:48–05:22
- Scale of findings (accounts/countries): 07:01–08:39
- Monetization mechanics, creator motivations: 09:16–13:49
- Policy enforcement, DSA implications: 16:14–21:33
- Platform transparency and the future of regulation: 21:33–26:46
- Investigation impact and regulatory response: 26:46–29:03
Conclusion
This episode spotlights how TikTok's creator monetization program is being gamed at scale, for profit rather than ideology, to pump out polarizing AI-generated protest content, creating systemic risks to information integrity and democracy. The researchers demonstrate the need for better enforcement, transparency, and regulatory mechanisms, and stress the urgency for both platforms and lawmakers to close these loopholes and grant independent experts the data access they need.
