The 404 Media Podcast: "This Site Unmasks Cops With Facial Recognition"
Release Date: June 25, 2025
Introduction and Upcoming Plans
In the opening segment, Joseph introduces the episode, highlighting an upcoming week-long hiatus for the 404 Media team—a first since their launch in August 2023. He mentions potential reruns and exclusive interview content slated for subscribers during their absence.
Unveiling FuckLAPD.com: A Facial Recognition Tool for Police Identification
Discovery and Initial Impressions
Joseph and Emanuel Maiberg delve into the story of FuckLAPD.com, a newly launched website designed to identify LAPD officers using facial recognition technology.
- Joseph recounts his first encounter: “I saw somebody on TikTok using it, points their iPhone into the face, brings up the guy, the cop's name, reads out his salary... and the cop starts laughing a little bit.” (02:10)
- Emanuel shares his discovery via a subreddit focused on monitoring aggressive ICE crackdowns and raids: “They were just sharing it as a possibly useful tool.” (01:46)
- Jason Koebler initially mistook the tool for something he remembered from a year earlier, confusing it with similar projects like "Watch the Watchers" from 2023.
Functionality and Purpose of FuckLAPD.com
Emanuel provides an in-depth explanation of the website:
“Fucklapd.com is a site where it has a very simple interface. All you can do is upload a single image of a LAPD officer's face and it will use facial recognition tech and a database of, from what I can tell, most of the LAPD workforce to automatically identify which police officer it is, pull up their name, badge number and then also yes, their salary.” (03:21)
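For readers curious how this kind of matching might work under the hood, here is a minimal, hypothetical sketch in Python using the open-source face_recognition library. It is not FuckLAPD.com's actual code; the roster, file paths, fields, and distance threshold are invented for illustration.

```python
# Illustrative sketch only, not the site's real implementation.
# Assumes the open-source `face_recognition` library (dlib-based) and a
# hypothetical roster of officer photos with public-records fields.
import face_recognition

# Hypothetical roster: one reference photo per officer plus name, badge, salary.
ROSTER = [
    {"name": "Officer A", "badge": "12345", "salary": 110000, "photo": "officers/a.jpg"},
    {"name": "Officer B", "badge": "67890", "salary": 98000, "photo": "officers/b.jpg"},
]

# Pre-compute a 128-dimensional face encoding for each roster photo.
known = []
for officer in ROSTER:
    image = face_recognition.load_image_file(officer["photo"])
    encodings = face_recognition.face_encodings(image)
    if encodings:
        known.append((officer, encodings[0]))

def identify(upload_path, max_distance=0.6):
    """Return roster entries ranked by similarity to the face in the upload."""
    image = face_recognition.load_image_file(upload_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return []  # no detectable face in the uploaded image
    query = encodings[0]
    # Smaller encoding distance means a closer facial match.
    scored = [
        (face_recognition.face_distance([enc], query)[0], officer)
        for officer, enc in known
    ]
    scored.sort(key=lambda pair: pair[0])
    return [officer for distance, officer in scored if distance <= max_distance]

# Example: print name, badge number, and salary for the closest matches.
for match in identify("uploaded_photo.jpg"):
    print(match["name"], match["badge"], match["salary"])
```

A production system would precompute and index the encodings rather than rebuilding them per request, but the core idea, nearest-neighbor search over face embeddings, is the same.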
The tool was launched by artist Kyle McDonald in response to heightened tensions during anti-ICE protests in LA, aiming to enhance transparency and accountability amidst reports of police misconduct.
Kyle's Motivation:
“We deserve to know who is shooting us in the face, even when they have their badge covered up.” (07:28)
Comparative Analysis: Watch the Watchers vs. FuckLAPD.com
Jason draws parallels between FuckLAPD.com and the earlier "Watch the Watchers" initiative:
“Watch the Watchers is a database of officers... billed as counter surveillance.” (08:12)
Emanuel clarifies that while both tools serve a similar purpose, their data sources differ: "Watch the Watchers" is a project of the Stop LAPD Spying Coalition, which built its database through public records requests and litigation.
Testing the Tool's Efficacy
Emanuel conducted a test using a screenshot from an LAPD press conference:
“Within a few seconds, it pulled up nine results... the first one was the correct one. So it worked perfectly in this case.” (10:08)
However, limitations are acknowledged:
- Blurry or distant images may hinder accuracy.
- ICE officers are not included in the database, limiting the tool's scope.
Implications for ICE and Public Safety
The conversation shifts to the potential of similar tools for identifying ICE officers, especially given the prevalence of masked enforcement actions.
Joseph highlights the practical advantages:
“Rather than searching a badge number, you search a face... easier just to point a camera at somebody and then get their photo.” (09:37)
Emanuel discusses the dire need for such tools amid rising incidents of masked ICE officers involved in violent enforcement actions:
“ICE officials are covering their faces... a tool like this could help people figure out... at least file a lawsuit or contact the agency.” (12:17)
Legal and Practical Challenges
Jason raises concerns about the practicality of using such tools in volatile situations:
“If you're like, let me take a picture of you and look you up, like while you're trying to arrest me is like, it's just a very, very volatile situation.” (20:37)
Emanuel echoes skepticism about the legal system's capacity to handle wrongdoing at this scale:
“It's a crime on such a scale that you can't really do anything about it. And the system is not built to deal with it.” (21:19)
Legal Battles: AI Training on Authors' Books
Transitioning to a complex legal issue, Jason Koebler discusses a landmark case involving Anthropic, an AI company, and three authors who challenged the legality of training AI models on their copyrighted books.
Case Overview
Three authors sued Anthropic, alleging that the company used their books without authorization or compensation to train its AI model, Claude.
“The judge decided that it was fair use for Anthropic to train on these authors' books, but that it was not legal for them to pirate the books to do so.” (27:03)
Judge's Ruling: A Nuanced Decision
The judge's ruling segmented the legality based on how the books were acquired:
- Fair Use for Legally Obtained Books: “Training the AI was transformative.” (28:40)
- Illegal Acquisition Not Protected: “Downloading 7.5 million books illegally... straight up piracy.” (42:14)
Sam Cole reflects on the paradox of basic legal principles resurfacing in the context of advanced AI technologies:
“Is theft bad... we're putting this together backwards.” (29:47)
Implications for the AI Industry
Jason analyzes the broader impact:
“A lot of these AI companies have committed like millions of instances of piracy... potential punishment and liability for that is like billions and billions of dollars.” (30:08)
He expresses doubt about the legal system's ability to contain the issue, predicting minimal impact on major AI players:
“The industry is going to keep going. I don't see a world in which this becomes an existential issue for these AI giants because there's too much riding on it.” (34:49)
Future Prospects and Industry Outlook
The discussion concludes with reflections on ongoing and future legal challenges:
- Jason anticipates numerous similar cases escalating to higher courts, potentially reaching the Supreme Court, given the vast financial and societal stakes involved.
- Emanuel underscores the inadequacy of current copyright law for the complexities introduced by AI, suggesting that effective legal remedies are unlikely: “If you steal from more people than you can count more times than you can count... it's just a crime on such a scale that you can't really do anything about it.” (38:48)
Conclusion: The 404 Media team expresses a pessimistic outlook on the ability of the legal system to effectively regulate AI's use of copyrighted material, emphasizing the need for updated legislation to address these modern challenges.
Final Thoughts
This episode of The 404 Media Podcast provides a critical examination of emerging technologies' role in societal accountability and the evolving legal landscape surrounding AI and intellectual property. Through engaging discussion, Joseph, Sam, Emanuel, and Jason shed light on tools like FuckLAPD.com and landmark legal cases, offering listeners a clear understanding of these pressing issues.
Subscribe to 404 Media for exclusive content and support independent journalism at 404media.co.
