There Are No Girls on the Internet — News Roundup: Elon is Lying About Fixing Grok; Renee Good’s Killer Has a Rulebreaking GoFundMe; Tea App Wants Another Chance
Episode Date: January 17, 2026
Host: Bridget Todd
Guest Co-host: Joey Pat
Episode Overview
This episode features a lively, incisive discussion of major tech and internet culture stories centering on the intersection of technology, social justice, and marginalized identities. Bridget Todd and her producer Joey Pat break down the latest news around Elon Musk's controversial AI chatbot Grok, a rule-defying GoFundMe for an ICE agent who killed Renee Good, the problematic relaunch of a women-only dating app, a troubling example of algorithmic bias at the VA, and an Arizona gay bar's divisive use of an AI chatbot. The witty, irreverent banter—mixed with sharp critique and personal anecdotes—captures both outrage and hope at the state of online and offline communities.
Key Discussion Points & Insights
1. Updates on Grok: Elon Musk’s AI Chatbot Controversy
-
Sexualization and Exploitation Enabled by AI
- Both the Grok account on X and the Grok standalone app have been used to “undress” women and girls without their consent, sometimes generating illegal images, including those of minors ([06:16]).
- The Internet Watch Foundation found criminal imagery generated by Grok.
-
Government and Legal Backlash
- EU threatened to fine X under the Digital Services Act if action wasn't taken.
- California launched an investigation, with the state Attorney General sending a cease and desist to XAI demanding a halt to the creation/distribution of illicit deepfakes ([06:35]).
-
Elon Musk’s 'Solution' and Deflection
- Musk's initial fix: limit Grok’s undressing features to paying users—a move Bridget brands a "cash grab" motivating profit over safety ([06:50]).
- “Let's call it what it is. It's exploiting criminal behavior, targeting women and girls to make money. This from the guy who…grandstanded…about...cracking down on content that exploits children. Now he's like, I'm going to make money from content that exploits children.” – Bridget ([06:52])
-
Musk’s Denial and Reality Distortion
- Musk continues to tout Grok’s success and claims complaints are attacks on free speech ([08:26]).
- “The problem is he's the only person living in that world. And I guess the rest of us are just having to suffer the consequences of that.” – Joey ([08:26])
-
Personal Impact and Lawsuit
- Even Musk’s personal circle isn’t immune: Ashley St. Clair, mother to his child, regrets anti-trans work and sues XAI after Grok generates explicit deepfakes of her ([11:45]).
-
Failed Fixes & Ongoing Dangers
- Despite Musk promising new restrictions, journalistic tests (Wired, Business Insider) find Grok still generates sexualized images ([15:55], [17:56]).
- Example: Reporter asks Grok to put his image in a jockstrap — Grok complies ([17:59]).
-
Societal Implications
- Discussion touches on how victim-blaming and outdated “don’t take nudes” advice is useless when AI can make fakes without consent ([20:14]).
- Joey: “Somehow it’s still the woman’s fault…But you don’t have to take nude images to have to be undressed” ([21:40]).
- New legislation: Senate passes DEFENSE Act allowing deepfake victims to sue for $150k minimum ([23:47]).
-
On Musk as a Meme-Lord Failure
- “If there's any joy…it's at least that [Musk] wants to be that type of person so bad and he's just failing at it...” – Joey ([25:24])
- “…because no matter how much money or power he gets, he will always be a loser.” – Bridget ([26:33])
2. T App’s Return: Can You Trust a Problematic Dating Platform?
-
Scandal and Relaunch
- “Tea” (T App): A women-only app infamous for leaking users’ sensitive data like driver's licenses is relaunching and requesting trust ([26:45]).
- Vague promises of improved security, third-party verification for “real women” ([27:12]), but Bridget and Joey are highly skeptical.
-
Flawed Verification
- Easy to circumvent: upload someone else’s documents, posing as a woman ([28:24]).
- “People have figured out how to get around this is what we're saying.” – Bridget ([30:00])
-
AI-Filled Features Met With Sarcasm
- “AI Dating Coach” and “Red Flag Radar AI” promise to guide users and analyze chats.
- “We're screwed. Like, this is it.” – Joey ([30:53])
- Critique: Depersonalizes human connection, pathologizes normal dating quirks, and turns red flags into automated, contextless judgments.
3. Algorithmic Failures: VA AI & Suicide Risk
-
Life-and-Death Implications for Marginalized Veterans
- VA’s AI program for suicide prevention prioritizes white men and overlooks survivors of sexual violence ([35:54]).
- Ignores variables vital for women, LGBTQ+, Native veterans—despite their rapidly increasing suicide rates ([37:52]).
-
Systemic Bias Entrenched Through Algorithms
- “Whenever you have an algorithm that seems to favor the majority group…it's most likely...systemic bias manifesting itself in the math.” – Bridget reading Meredith Broussard ([39:01])
- Under Trump, much public health research into disparities has been gutted, worsening bias.
-
Profound Human Cost
- “They always think of us second. This is going to cost people's lives.” – Paulette Yezi, Navajo veteran ([41:00])
-
AI as an Unfit Band-aid
- “It's like an issue that's already a dumpster fire…and then we're like, let's put AI in there to make it a little bit spicier.” – Joey ([42:39])
4. GoFundMe Rule-Bending: ICE Agent's Defense Fund
-
Rule Violation and Platform Complicity
- GoFundMe hosts a fundraiser for ICE Agent Jonathan Ross, who killed Renee Good, even though platform rules explicitly forbid raising money for the legal defense of people accused of violent crimes ([44:25]).
- GoFundMe’s justification: the agent hasn’t (yet) been charged, but evidence (and campaign language) suggests the money is for legal defense ([44:25]–[49:10]).
-
Changing Precedents
- In past cases involving police violence (Freddie Gray, Walter Scott), GoFundMe did remove similar fundraisers.
- No clear explanation for rule change; seems to be selectively enforced, possibly due to current political climate ([51:39]).
-
Platform Accountability & Terms of Service
- “It concerns me greatly…they're invested in selectively picking and choosing when their terms of service is applied.” – Bridget ([51:39])
- Joey jokes: “Apparently, they don’t apply anyway, so it’s really not my fault…” for not reading TOS ([52:12]).
5. AI Invades Queer Space: Arizona Gay Bar Sparks Backlash
-
Stacy’s at Melrose: “Mother” the AI Chatbot
- Phoenix LGBTQ+ bar announces Google Gemini-powered chatbot (“Mother”) intended for staff tools and patron Q&A ([54:07]).
- Community outrage: Queer spaces should “eradicate” AI, not embrace it. Protesters urged others to “pack the bar” in opposition ([56:31]).
-
Corporate Tech Creep vs. Community Needs
- Bar’s defense: “Expecting a small independent business to ignore this shift is unrealistic...prohibiting the adoption of emerging technology creates a divide...” ([58:12])
- Joey’s retort: “It's a bar.” ([58:14])
- Argument that queer spaces exist as alternatives to mainstream, tech-saturated environments.
-
AI in Service Industry = Fail
- Bridget: “I have a feeling that this chatbot is going to give out so many hallucinations about what's happening at this bar.” ([62:20])
- Joey: “Why? Just why?” ([63:57])
-
LGBTQ+ Anti-AI Solidarity
- “If there's any community that I'm like, I'm proud of us. We have held our stance against AI. It's the gay community.” – Joey ([54:17])
- Jokes about the chatbot misleadingly promoting Joey’s drag career as Mother scrapes podcast transcripts for info ([64:17]).
Notable Quotes & Moments
“Let's call it what it is. It's exploiting criminal behavior, targeting women and girls to make money...Now he's like, I'm going to make money from content that exploits children.” – Bridget ([06:52])
“The problem is he's the only person living in that world. And I guess the rest of us are just having to suffer the consequences of that.” – Joey ([08:26])
“Somehow it's still the woman's fault...But you don't have to take nude images to have to be undressed.” – Joey ([21:40])
“It's like an issue that's already a dumpster fire…then we're like, let's put AI in there to make it a little bit spicier.” – Joey ([42:39])
“Apparently, [terms of service] don't apply anyway, so it's really not my fault…” – Joey ([52:12])
“If there's any community that I'm like, I'm proud of us. We have held our stance against AI. It's the gay community.” – Joey ([54:17])
“It's a bar.” – Joey, bluntly dismissing AI hype for small queer spaces ([58:14])
“...because no matter how much money or power he gets, he will always be a loser.” – Bridget ([26:33])
Timestamps for Important Segments
- Elon Musk/Grok deepfake controversy: [04:37]–[26:42]
- Tea App relaunch/dating app AI skepticism: [26:45]–[34:50]
- VA AI suicide algorithm bias: [35:54]–[44:25]
- GoFundMe ICE Agent rule violation: [44:25]–[53:01]
- Arizona gay bar's AI chatbot backlash: [53:01]–[65:54]
Tone and Language
The episode blends passionate critique with playful, sometimes biting humor. Both Bridget and Joey use personal anecdotes and contemporary references (from TikTok dating trends to middle school principal lectures) to drive home the absurdity, injustice, or dark comedy of each story. Their discussion—wry, insightful, and accessible—gives listeners an insider’s perspective on marginalized life online, and always calls out big tech’s failures and the urgency of community organizing and resistance.
Episode Takeaway:
The internet’s problems, especially for marginalized people, reflect much deeper systemic injustices—but also activate resilience, organizing, and, in the hosts’ case, a lot of smart humor. Ultimately, if we're letting big tech or AI—especially those under Elon Musk's thumb—set the rules, we're all likely to be the punchline. As Bridget says, “We need monuments to all of the identities that make being online what it is.”
