Podcast Summary: The 404 Media Podcast
Episode: The ICE Tool That Tracks Entire Neighborhoods
Date: January 14, 2026
Hosts: Joseph Cox, Sam Cole, Emanuel Maiberg, Jason Koebler
Episode Overview
This episode dives into an exclusive 404 Media investigation into “WeBlock,” a surveillance tool purchased by ICE (U.S. Immigration & Customs Enforcement) that can monitor the movement of mobile phones on a neighborhood scale. The hosts break down WeBlock’s technical operations, how law enforcement uses it, and its implications for privacy and protests. They also connect this to broader ICE crackdowns and discuss the controversial use of AI tools such as Grok for generating non-consensual images online.
Detailed Breakdown
1. Zine Update and Introduction (00:04 – 02:56)
- Quick update from the 404 Media team on their printed zine production and shipping process.
- Emphasis on the personal, manual nature of making the zine:
  “We wanted it to be like very human, very manual…We’re very happy. But a couple thousand of you bought them…” – Host A (01:46)
- Transition: “Should we move to the first story? Jason, do you want to take the lead on this one?” – Joseph (02:38)
2. Inside ICE’s Neighborhood-Scale Surveillance Tool (02:56 – 16:16)
How WeBlock Works and What It Can Do
- WeBlock is designed to let law enforcement easily manipulate and analyze mobile location data; ICE acquired it through U.S. procurement contracts.
  “WeBlock is a location data tool…It can track movements of mobile phones…you could look in a certain area, figure out what phones are there and that sort of thing.” – Joseph (03:56)
- The interface allows ICE officers to:
- Toggle between “daytime” and “nighttime” modes to learn home and work locations.
- Visualize exact travel routes of targeted phones.
- Draw geofences (circles/shapes) to collect all phone data within a defined area, for targeted or broad (“entire neighborhood”) surveillance.
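The geofence capability described above amounts to a spatial-temporal filter over a pool of device pings. A minimal sketch of that query, in Python, is below; all names, coordinates, and data shapes are illustrative assumptions, since WeBlock's actual implementation is not public.

```python
# Hypothetical sketch of a circular geofence query over location pings:
# given (device_id, lat, lon, timestamp) records, return every device seen
# inside the circle during a time window. Illustrative only; not WeBlock code.
import math
from datetime import datetime


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def devices_in_geofence(pings, center, radius_km, start, end):
    """Set of device IDs with at least one ping inside the circle and window."""
    clat, clon = center
    return {
        device_id
        for device_id, lat, lon, ts in pings
        if start <= ts <= end and haversine_km(clat, clon, lat, lon) <= radius_km
    }


# Made-up sample data: two pings near a Minneapolis block, one ~25 km away.
pings = [
    ("ad-id-1", 44.9778, -93.2650, datetime(2026, 1, 10, 22, 0)),  # inside, nighttime
    ("ad-id-2", 44.9778, -93.2650, datetime(2026, 1, 10, 9, 0)),   # inside, daytime
    ("ad-id-3", 45.2000, -93.2650, datetime(2026, 1, 10, 22, 0)),  # outside radius
]
hits = devices_in_geofence(
    pings, center=(44.9778, -93.2650), radius_km=1.0,
    start=datetime(2026, 1, 10, 21, 0), end=datetime(2026, 1, 10, 23, 0),
)
print(hits)  # {'ad-id-1'}
```

Swapping the time window between daytime and nighttime hours is all it takes to separate likely work locations from likely home locations, which is the "daytime/nighttime mode" toggle the hosts describe.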
Potential Uses and Civil Liberties Concerns
- While there is no direct evidence yet that ICE has used WeBlock against protestors, local reporting suggests activist surveillance.
  “There are reports or suspicions that ICE DHS may be using data to go after activists and intimidate them in some way…” – Joseph (08:42)
- WeBlock was originally purchased for HSI (Homeland Security Investigations), historically for criminal investigations; with HSI’s focus now shifted to deportations, the potential surveillance uses broaden.
Protest Surveillance and Marketing Material
- Previous marketing/training docs for similar tools encouraged law enforcement to surveil protests (e.g., BLM), raising fears of abuse:
- “The people making, developing and selling the tool are explicitly telling potential customers, hey, you could use this to [monitor] protests.” – Joseph (11:55)
Where the Data Comes From
- Data is almost certainly not from telecom companies; instead, it's harvested via commercial advertising processes:
- SDKs (now less common): code embedded inside apps that collects location data directly.
- Real-Time Bidding (RTB): Ad tech industry allows data brokers to siphon user location as part of the programmatic ad auction process.
- “A bunch of companies can simply monitor that process…that can include location data and a specific person’s unique advertising ID.” – Joseph (13:50)
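The RTB leak described above works because a bid request broadcast to auction participants already contains the device's advertising ID and coordinates; any recipient can log those fields without winning, or even bidding in, the auction. A simplified sketch follows; the request is made up, though the field names (`device.ifa`, `device.geo.lat`/`lon`) follow the OpenRTB specification.

```python
# Sketch of how a data broker posing as a bidder could harvest location from
# a single OpenRTB-style bid request. The request JSON is a fabricated,
# simplified example for illustration.
import json

bid_request = json.loads("""
{
  "id": "auction-123",
  "device": {
    "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",
    "geo": {"lat": 44.9778, "lon": -93.2650, "type": 2}
  },
  "app": {"bundle": "com.example.weather"}
}
""")


def harvest_location(request):
    """Extract the identifying fields a passive auction observer could keep."""
    device = request.get("device", {})
    geo = device.get("geo", {})
    return {
        "advertising_id": device.get("ifa"),  # persistent per-device ad ID
        "lat": geo.get("lat"),
        "lon": geo.get("lon"),
        "app": request.get("app", {}).get("bundle"),
    }


record = harvest_location(bid_request)
print(record["advertising_id"], record["lat"], record["lon"])
```

Because the advertising ID is stable across requests, accumulating these records over time yields exactly the kind of per-device movement history a tool like WeBlock can then visualize.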
3. Connecting ICE Surveillance to National Events (16:16 – 18:02)
- Context: Surveillance discussion is set against the backdrop of the recent police killing of Renee Nicole Goode in Minneapolis.
- The 404 Media team highlights contradictions between government statements and video evidence, calling out misleading official narratives:
- “As the headline says, DHS is lying to you and absolutely believe that based on the statements and the videos themselves.” – Joseph (16:52)
4. Non-Consensual AI Imagery and Grok Abuse (23:00 – 40:44)
Grok’s Vulnerability to Abuse: Telegram Jailbreaking and More
- Emanuel investigates a Telegram community dedicated to “jailbreaking” Grok and other AI image tools to create non-consensual explicit images.
- Prompts have become increasingly sophisticated, focusing on bypassing content filters to generate both non-nude and pornographic images of real people.
- “They keep coming up with extremely elaborate, more sophisticated prompts to do that, which can produce far worse things than what we’ve seen on X.” – Emanuel (25:13)
- Noted the phenomenon’s viral potential, referencing the Taylor Swift image scandal as an example of how fringe communities’ tactics move mainstream.
Cycle of Exploit Discovery and Response
- Group members iterate on exploits; when one loophole is closed, another is quickly found.
- “Everybody’s like, Grok is dead, dead game. And then somebody comes in with a new exploit and the whole cycle begins again.” – Emanuel (30:34)
- After public attention, the original Telegram group dissolved but quickly reformed elsewhere, underscoring the resilience of these abusive communities.
X’s (Twitter’s) “Solution”: Monetizing Access to Grok
- Sam reports that, in response to news coverage and backlash, X restricted Grok’s image generation to only paying users—but did not actually fix the underlying abuse.
- “It turns out that actually what was happening was Grok was giving a reply to people who weren’t X premium or…paid X users saying image generation and editing were turned off and limited only to paying subscribers to X. And then it would give a link to how to subscribe, which is not turning it off.” – Sam (33:42)
- The restriction did nothing to limit harm, as explicit images still appeared in public feeds, and the paywall was later dropped.
App Store Double Standards
- Despite violations of Apple/Google policies on non-consensual and pornographic content, Grok remains easily accessible through mainstream app stores.
Reflections on X’s Devolution
- Hosts note that X (Twitter) is now flooded with “very extreme, very pornographic and in this case non-consensual” content.
- “It’s as if you’re swimming in like the Outbrain ad gutter ads of the Internet. It just [is] very extreme, very pornographic and in this case non consensual.” – Emanuel (39:14)
- Most hosts have stopped using X socially, visiting only for research or journalism purposes.
Memorable Quotes & Timestamps
- On WeBlock’s Capabilities:
  “You can draw a circle or another shape around a particular area…that's going to show…all the phones that we have in that location at that day or across that hour...” – Joseph (07:09)
- On Surveillance at Protests:
  “Cobwebs itself was saying, hey look, you could use this to monitor Black Lives Matter and protests.” – Joseph (11:54)
- On Data Sources:
  “SDK stuff has like gone out of fashion…[now] it's connected through real time bidding…companies can simply monitor that process...and that can include location data and a specific person's unique advertising id.” – Joseph (13:50)
- On Resistance to AI Image Abuse:
  “People go from app to app…for the past three to six months, the entire channel is devoted to Grok.” – Emanuel (31:00)
- On X’s Monetization Response:
  “It’s just pushing people to pay for it…You can pay $8 a month to still get these images generated in your feed…” – Sam (36:13)
Key Timestamps
- 02:56 – Introduction to WeBlock/ICE surveillance tool
- 05:30 – Detailed features and interface of WeBlock
- 11:55 – Discussion of use in monitoring protests
- 13:00 – Sources of commercial location data (SDKs vs. RTB)
- 16:16 – Connection to Minneapolis and DHS/ICE narratives
- 23:35 – AI image jailbreaking communities (Grok discussion)
- 33:42 – X’s response to AI image abuse and “monetization” angle
- 39:14 – On X devolving into unmoderated, abusive content
Conclusion
The episode delivers a thorough exposé of the shadowy world of commercial surveillance technology and its real-world impact, linking ICE’s potential abuse of location data tools with parallel issues of AI-generated abuse online. The hosts highlight the technical, legal, and ethical gaps in both U.S. government and big tech responses—showing how tools meant for public safety or engagement are repeatedly weaponized for harm.
[End of Summary]
