The Brett Cooper Show – Episode 116
Title: The Disturbing Truth About What's Happening on X
Date: January 8, 2026
Host: Brett Cooper
Episode Overview
This episode delves into the alarming rise of non-consensual, explicit AI-generated imagery proliferating on X (formerly Twitter), specifically via the platform's Grok AI tool. Host Brett Cooper critically examines how generational shifts and cultural trends, combined with technological advances, are magnifying issues of privacy, consent, and exploitation, especially for women and children. She highlights the regulatory gaps and the chilling normalization of digital abuse, and reflects on both personal and societal responsibilities.
Key Discussion Points & Insights
1. Grok AI’s Role in Generating Harmful Content
- Grok, an AI tool integrated into X, has been used to create explicit images of women and children, often without their knowledge or consent.
- Despite promises of safeguards after releasing "spicy mode" in July 2025, users rapidly discovered ways to bypass restrictions.
- Brett: "Somebody on X could be taking a photo of you or your children that you posted on social media and could be making through Grok, deep fake graphic material of you." (00:29)
- xAI's loosened restrictions stand out, as other firms like OpenAI initially resisted such content before ultimately allowing similar features.
2. Normalization and Escalation of Deepfake Exploitation
- Brett references the 2023 Twitch scandal in which streamer Atrioc accidentally revealed he had purchased AI-generated explicit content of fellow streamers he was friends with, making the issue personal and, in hindsight, prescient.
- Notable Quote:
QTCinderella: "You all, if you are able to look at that, you are the problem. You see women as an object. It should not be part of my job to be harassed, to see pictures of me nude spread around. ... That shouldn't be a part of my job. ... I'm going to sue you." (04:00)
- What was once shameful and hidden is now mainstream: "According to Rolling Stone, Grok is generating about one non-consensual sexualized image per minute." (05:38)
3. How Grok’s Comment Feature is Abused
- Users can comment on any posted photo with commands like “Grok, put her in a micro bikini,” leading to an explosion of degrading and non-consensual image edits. Brett reads several disturbing prompt examples.
- “My brain is spinning. Like these requests are insane and they are everywhere ... The thing is, Grok obliges 99% of the time and the people who posted those images did not consent to that happening.” (06:47)
4. Why This is Different from Regulated Adult Content
- Unlike regulated sites (e.g., OnlyFans, Pornhub) that require legal paperwork and consent, Grok operates outside these laws:
- Exempt from FOSTA-SESTA (laws regulating porn sites) because it is classified as an AI tool, not a porn site.
- No age verification, no consent, and no meaningful regulation.
- Users hide behind anonymity and, so far, face few consequences.
- Quote: "If you think OnlyFans is lawless, this is lawless. ... Grok and xAI are exempt from FOSTA-SESTA ... because it is not technically a porn site." (08:30)
5. Attempts (and Failures) at Accountability
- Women, including "Ashley," the mother of one of Elon Musk's children, have publicly begged X and Grok to stop creating deepfakes. Their requests have often been ignored or have even backfired via the "Streisand effect."
- Governments worldwide are starting to investigate (e.g., the UK, India, Malaysia).
- Elon Musk's responses are mixed: sometimes deflecting or joking, sometimes promising action but with little visible change.
- Sample User Rally:
"The fact that Grok is still creating non-consensual images of women is a choice. ... This is digital abuse." (10:35)
Female Victim: "Grok needs to stop generating those pictures. ... Those incels are not even sparing kids." (11:00)
- New laws (like the Take It Down Act) require removal of content featuring children but don't prevent such content's creation.
6. Legal and Policy Barriers
- Proposed bills (e.g., the No AI Fraud Act) failed to pass, leaving a gap in protection for victims of deepfake abuse.
- Section 230 continues to shield platforms from responsibility for user-generated content.
- Cherie DeVille (Adult Performer):
"Anyone using AI to create child pornography should be jailed. Anyone using an image nonconsensually to create pornography should be jailed. It is like a digital assault, digital revenge porn, digital rape." (15:50)
7. Perspectives from Victims and Influencers
- Being public online—no matter how wholesome or nonsexual your content—carries the risk of non-consensual sexualization via AI.
- Pokimane:
"It makes no difference what you post or what you do. Also, people can post whatever they want and that still means that you need their consent to do certain things. Including sexualizing them and then profiting off of it." (16:54-17:26)
- Anonymous BBC Victim:
"I feel like violated seeing it because it's like I didn't consent to this. ... Knowing that all the people I care about in my life can see me like that, it just, it's disgusting." (17:58)
8. Where Does the Responsibility Lie?
- Brett acknowledges that while technology is being abused, the root problem is the individuals and culture enabling and driving the abuse:
- “As much as we would like to all just like, place the blame on Elon Musk and make it really easy ... it is the individuals making these requests that do need to be held accountable. It is our sick porn-brained society that needs to change.” (20:28)
- She calls for urgent regulation especially to protect children, but notes that cultural change is also essential.
Memorable Quotes (with Timestamps)
- Brett Cooper: “Somebody on X could be taking a photo of you or your children ... making through Grok, deep fake graphic material of you. ... Does that feel like an invasion of privacy? ... Well, it should.” (00:29)
- QTCinderella (Twitch Streamer, on the Atrioc scandal):
"You all, if you are able to look at that, you are the problem. ... I'm going to sue you." (03:52-04:23)
- Brett Cooper: "According to Rolling Stone, Grok is generating about one non-consensual sexualized image per minute." (05:38)
- Brett Cooper (on Grok’s comment feature): “My brain is spinning. Like these requests are insane and they are everywhere ... The thing is, Grok obliges 99% of the time and the people who posted those images did not consent to that happening.” (06:47)
- Cherie DeVille (Adult Performer):
"Anyone using AI to create child pornography should be jailed. Anyone using an image nonconsensually to create pornography should be jailed. It is like a digital assault, digital revenge porn, digital rape." (15:50)
- Pokimane (Streamer): "People can post whatever they want and that still means that you need their consent to do certain things. Including sexualizing them and then profiting off of it." (16:54-17:26)
- Anonymous BBC Victim: “I feel like violated seeing it because it’s like I didn’t consent to this. ... Knowing that all the people I care about in my life can see me like that, it just, it’s disgusting.” (17:58)
- Brett Cooper (on responsibility): “As much as we would like to all just like, place the blame on Elon Musk and make it really easy ... it is the individuals making these requests that do need to be held accountable. It is our sick porn-brained society that needs to change.” (20:28)
Timestamps for Key Segments
- 00:29: Introduction to Grok’s explicit image generation and the privacy nightmare it creates.
- 03:52 – 04:23: QTCinderella recounts being victimized by AI-generated deepfakes.
- 05:38: Data point: "Grok is generating about one non-consensual sexualized image per minute."
- 06:47: Brett’s reaction to the disturbing trend of explicit, non-consensual image requests in Grok’s comments.
- 08:30: Comparison between Grok/xAI's operations and regulated adult content platforms.
- 10:35, 11:00: Victims publicly calling out Grok/X for inaction.
- 15:50: Adult performer Cherie DeVille's perspective on consent and AI deepfake abuse.
- 16:54 – 17:26: Pokimane on the lack of consent, no matter the nature of original content posted.
- 17:58: Anonymous BBC victim on the emotional impact of being targeted by Grok's edits.
- 20:28: Brett’s closing reflection on personal and societal accountability and the limits of regulation.
Conclusion
Brett Cooper’s episode is a stark and sobering examination of how tech-enabled abuse has become disturbingly normalized, enabled both by regulatory flaws and by a culture increasingly desensitized to digital exploitation. She urges listeners to demand real safeguards, enforce consent—especially for children—and reflect deeply on the underlying societal shifts that have led to this point, putting the onus not just on tech leaders like Elon Musk, but also on individual and collective values.
