The Plan to Use AI to Purge Voter Rolls
Release Date: November 6, 2024
Host: Joseph Cox (with co-hosts Sam Cole and Jason Koebler)
Podcast: The 404 Media Podcast
Introduction
In this episode of The 404 Media Podcast, host Joseph Cox and co-hosts Sam Cole and Jason Koebler examine the controversial use of artificial intelligence (AI) in purging voter rolls. The episode, released on November 6, 2024, provides an in-depth look at Eagle AI, a tool purportedly designed to identify and challenge voter eligibility across several states, with a particular focus on Georgia.
Eagle AI: An Overview
[01:01] Joseph:
"...the first story we have is one exactly about that... the plan to use an AI to purge voter rolls."
Jason Koebler explains that Eagle AI, despite its name, may not qualify as true artificial intelligence. It functions instead as a sophisticated database program, developed in the aftermath of the 2020 election, that automates the process of challenging voter eligibility.
[01:47] Jason Koebler:
"...Eagle AI is basically like a sophisticated database program that was created in the aftermath of the 2020 election... it combines national address change information, newspaper obituaries, felony arrest records, property records, and more into a search tool..."
Eagle AI aggregates various data sources to compile a comprehensive list of potentially ineligible voters. This includes publicly available voter rolls from states like Georgia, where the Secretary of State maintains and disseminates voter registration information.
Mechanics of Eagle AI in Voter Purging
[04:10] Joseph:
"...someone using this piece of software could then say, oh, look, you had a bunch of voters who shouldn't exist or shouldn't be allowed to vote..."
The tool allows users—be they private citizens or county election boards—to generate large numbers of voter challenges efficiently. In Georgia, where the Secretary of State's voter list is public, Eagle AI cross-references this data with other databases to flag voters who may have discrepancies, such as typographical errors in addresses or indicators of deceased individuals.
[04:41] Jason Koebler:
"...Eagle AI is making it very simple to generate these challenges in huge numbers... They have contracts with counties that overwhelmingly support certain political parties to review these challenges."
This automation poses significant risks, as it can lead to the removal of eligible voters based on faulty or misinterpreted data. The Georgia Secretary of State's office has expressly advised against using Eagle AI, citing concerns over data accuracy.
Audio Evidence from Election Board Meetings
[14:10] Jason Koebler:
"...I requested video of these meetings and they said... they've been recording it on a cell phone for a year."
To substantiate their claims, the team obtained audio recordings from Columbia County's election board meetings. These recordings reveal discussions that align with the tactics promoted by Eagle AI, including the generation of voter challenges based on questionable data sources.
[16:43] Dr. Rick Richards (Creator of Eagle AI):
"...they won't know they're being researched because something triggers the research. If you can look at a timeline... Eagle AI would be inserted there as a tool to help with the research."
Timestamps: [16:43] - [17:44]
Dr. Rick Richards asserts that Eagle AI simply aids in the research phase of voter eligibility verification without directly altering voter rolls. However, the ease with which challenges can be generated raises concerns about the potential for widespread disenfranchisement.
Implications and Concerns
[08:28] Jason Koebler:
"...Eagle AI is being used right now across the country... private citizens in North Carolina are creating huge numbers of voter challenges."
Timestamps: [08:28] - [09:35]
The use of Eagle AI is not limited to Georgia. Similar practices are emerging in other states, where election boards may adopt such tools despite official warnings. Widespread adoption threatens the integrity of voter rolls and could exacerbate existing voter suppression.
[12:36] Joseph:
"...malicious or potentially malicious requests that could be powered by something like Eagle AI."
Timestamps: [12:36] - [13:46]
The podcast highlights the broader implications of using AI in electoral processes, emphasizing the potential for misuse and the erosion of public trust in electoral integrity.
Transition to AI in Facial Recognition and Content Moderation
Following the in-depth discussion on Eagle AI, the podcast transitions to another critical issue: the misuse of AI in gender detection and content moderation.
Microsoft’s Gender Detection AI
[24:29] Sam Cole:
"...Microsoft's Azure face services could infer emotional states, gender, age, and more... They retired these capabilities in 2022..."
Timestamps: [24:29] - [25:47]
Sam Cole discusses Microsoft's discontinued facial recognition features, which attempted to infer sensitive personal attributes such as gender, age, and emotional state. Microsoft retired these capabilities in 2022 in response to ethical concerns and the potential for misuse, an episode that illustrates how quickly AI capabilities can outrun governance.
[26:33] Sam Cole:
"...the development of AI is so quick... Microsoft realized they shouldn't have released facial recognition that can identify gender or emotional states."
Timestamps: [26:33] - [28:15]
The podcast underscores the speed at which AI technologies advance, often outpacing ethical safeguards and regulatory measures. This gap can lead to technologies being deployed without fully understanding or mitigating their societal impacts.
Instagram’s Nipple Detection Policy
[32:20] Sam Cole:
"...Instagram’s policies on nudity are inconsistent and disproportionately affect marginalized groups..."
Timestamps: [32:20] - [34:59]
Sam Cole elaborates on a project by ADA, which involved documenting the inconsistencies in Instagram’s nipple detection algorithms. The platform’s moderation policies often lead to arbitrary and discriminatory enforcement, particularly impacting transgender individuals, people of color, and sex workers.
[33:15] Jason Koebler:
"...Instagram applies its rules so irregularly that it's confusing and disproportionately affects marginalized communities."
Timestamps: [33:15] - [34:59]
Jason Koebler shares his experience of being banned from Instagram and Threads for sharing content related to 404 Media's investigations, highlighting the opaque and punitive nature of social media content moderation.
Conclusion and Broader Impacts
[41:30] Sam Cole:
"...algorithmic enforcement threatens everyone, especially marginalized groups... It's a universal issue."
Timestamps: [41:30] - [44:04]
Sam Cole emphasizes that algorithmic biases in AI extend beyond specific groups, posing a universal challenge to digital rights and personal freedoms. The episode advocates for greater transparency and accountability in AI deployment across various sectors.
[44:04] Joseph:
"...paying 404 Media subscribers can access exclusive content on info stealers and a new partnership with Wired."
Timestamps: [44:04] - End
While the episode concludes with a mention of subscriber-exclusive content, the primary focus remains on the ethical dilemmas posed by AI in both electoral processes and content moderation.
Key Takeaways
- Eagle AI's Role in Voter Purging: Eagle AI automates the generation of voter eligibility challenges by cross-referencing public voter rolls with various databases, raising concerns about data accuracy and potential voter suppression.
- Ethical Concerns in AI Deployment: The rapid advancement of AI technologies often outpaces ethical considerations, producing tools like Microsoft's gender detection AI and Instagram's inconsistent content moderation policies that can discriminate against marginalized groups.
- Need for Transparency and Regulation: There is a pressing need for transparent AI practices and robust regulatory frameworks to ensure that AI applications do not undermine democratic processes or perpetuate societal biases.
- Impact on Social Media Users: Arbitrary and opaque content moderation on platforms like Instagram and Threads can lead to unjust bans and censorship, disproportionately affecting vulnerable communities.
Notable Quotes
- Jason Koebler on Eagle AI's functionality [04:41]: "Eagle AI is making it very simple to generate these challenges in huge numbers... They have contracts with counties that overwhelmingly support certain political parties to review these challenges."
- Dr. Rick Richards on the research phase [16:43]: "They won't know they're being researched because they're going to be researched because something triggers the research."
- Sam Cole on the pace of AI development [26:33]: "The development of AI is so quick... Microsoft realized they shouldn't have released facial recognition that can identify gender or emotional states."
- Jason Koebler on content moderation [29:01]: "Gender detection, facial recognition, and gender detection, like AI is a pseudoscience... if you're going to use it, the only ways that you can use it are to discriminate against, against people."
- Sam Cole on algorithmic enforcement [41:30]: "Algorithmic enforcement threatens everyone, especially marginalized groups... It's a universal issue."
This episode of The 404 Media Podcast provides a critical examination of AI's role in electoral integrity and content moderation, highlighting the ethical challenges and societal impacts of deploying such technologies without adequate oversight and accountability.
