Episode Summary: Her Client Was Deepfaked. She Says xAI Is to Blame.
Podcast: The Journal.
Date: January 27, 2026
Hosts: Jessica Mendoza (The Wall Street Journal), Ryan Knutson
Key Guest: Carrie Goldberg (Attorney, Online Harm Litigator)
Overview
This episode investigates the explosion of non-consensual, AI-generated explicit images on Elon Musk’s platform X (formerly Twitter) after the rollout of Grok’s enhanced image-editing capabilities. The discussion centers on a lawsuit filed by conservative influencer Ashley St. Clair against xAI (the company behind Grok) after deepfaked nude images of her were created and shared online. Jessica Mendoza interviews Carrie Goldberg, St. Clair’s lawyer and a renowned advocate for victims of online sexual harm, about the legal battle, the challenges of holding AI companies accountable, and the broader implications for AI-generated content and legal precedent.
Key Discussion Points & Insights
1. The Grok AI Deepfake Crisis
- Background: Grok, an AI chatbot from xAI (Elon Musk's company), integrated into X, recently allowed users to edit photos with text prompts.
- Problem: Users quickly exploited this to generate non-consensual sexualized images, flooding X with such content at unprecedented scale.
- Quote (Carrie Goldberg):
“It’s impacting hundreds of thousands of women worldwide.” (00:44)
- St. Clair’s Experience: Influencer Ashley St. Clair, known for having a child with Elon Musk, was among the victims.
- Quote (Ashley St. Clair):
“The worst for me was seeing myself undressed, bent over and then my toddler’s backpack in the background...” (01:45)
- Platform Response: X claims it has implemented restrictions to prevent further misuse, but critics argue it's too late; the harm has been done.
2. Legal Challenges: Who Is Responsible?
- Lawsuit Filed: St. Clair sues xAI, alleging Grok is an unreasonably dangerous product.
- The Section 230 Shield:
- Explanation: Section 230, a 1996 law, typically shields online platforms from liability for user-generated content.
- Quote (Jessica Mendoza):
“Section 230 is considered the bedrock of the Internet. It protects websites and social media platforms from being held legally liable for the content that users post.” (06:09)
- Controversy: Critics say Section 230 enables platforms to evade accountability for harm.
3. Carrie Goldberg’s Legal Strategy: Product Liability
- Background: Goldberg is a pioneer in online abuse law, previously challenging Section 230 using product liability arguments.
- Quote (Carrie Goldberg):
“Product liability is an area of law where you’re holding companies responsible for the products that they release.” (07:05)
- Precedent Cases: Goldberg discusses previous product liability suits against Grindr (unsuccessful due to Section 230 but influential) and Omegle (settled, leading to product shutdown).
- Grok Lawsuit Focus: Asserts xAI’s design of Grok itself is defective and foreseeably harmful.
- Quote (Carrie Goldberg):
“We are saying that xAI, because of its Grok feature… is not a reasonably safe product and that it was foreseeable… it would cause injuries like what befell Ashley.” (10:02)
4. Challenging Section 230 in the Age of AI
- Argument Nuance: Goldberg insists Section 230 shouldn’t apply when the harm comes not from third-party users, but from the AI system itself generating harmful content.
- Quote (Carrie Goldberg):
“I want this to set precedent so that this company and its competitors don’t go back into the business of peddling in people’s nude images.” (02:47, 19:08)
- Active Content Creation:
- Quote (Carrie Goldberg):
“Section 230 is intended for situations where an online platform is just acting as a passive publisher, not where it is itself creating the actual content. ... Grok is… spitting out the content.” (12:28–12:56)
5. Public Nuisance and Lasting Harm
- Public Nuisance Claim: Goldberg adds a public nuisance claim, tied to the mass traumatization caused by dissemination on a social platform.
- Quote (Carrie Goldberg):
“It really lends itself beautifully to this specific product that has long been calling itself the public square of the Internet.” (15:55)
- Irrevocable Harm: Even with policy changes, once images are made public, the harm is permanent.
6. The Law’s Struggle to Keep Up
- Recent Legislation: Congress passed the “Take It Down Act,” which requires platforms to remove deepfakes within 48 hours; Goldberg argues it doesn’t go far enough to empower individual victims.
- Preference for Litigation: Goldberg champions the courts for swifter, precedent-setting action over slow-moving legislation.
- Quote (Carrie Goldberg):
“I want more laws… that give victims a new cause of action so that they can be in the power seat… But I also want to just be able to sue and go rogue in court.” (16:59, 18:29)
7. Goals and Outlook for the Case
- Aspirations: Goldberg hopes to reach discovery, expose internal decision-making at xAI, and set a precedent that restrains AI misuse industry-wide.
- Quote (Carrie Goldberg):
“I want to see what was happening on a high level before they actually took action.” (19:08)
- Persistence: Even if the case is dismissed, Goldberg pledges repeated legal action.
- Quote (Carrie Goldberg):
“I will keep suing under it until it works.” (19:59)
Notable Quotes & Timestamps
- “It’s impacting hundreds of thousands of women worldwide.” — Carrie Goldberg (00:44)
- “The worst for me was seeing myself undressed… such horrific images… and then put that same backpack on my son.” — Ashley St. Clair (01:45)
- “Product liability is an area of law where you’re holding companies responsible for the products that they release.” — Carrie Goldberg (07:05)
- “Section 230 is intended for situations where an online platform is just acting as a passive publisher, not where it is itself creating the actual content.” — Carrie Goldberg (12:28)
- “I want to get into discovery and I want to show the quantity of images that were created, the number of other victims that were harmed.” — Carrie Goldberg (19:08)
- “I will keep suing under it until it works.” — Carrie Goldberg (19:59)
Important Segment Timestamps
- 00:44 — Scope of nonconsensual images, Carrie Goldberg’s insight
- 01:45 — Ashley St. Clair’s recounting of personal violation
- 06:09 — Explanation of Section 230’s protections and controversies
- 07:05 — Goldberg’s application of product liability law to tech
- 09:42 — Application to Grok: “one of the best arguments I’ve ever had” against Section 230
- 10:02 — Details of the current lawsuit
- 12:28–12:56 — Defining Grok as an active creator and Section 230’s limits
- 15:39–16:24 — Public nuisance theory introduced
- 16:59–18:29 — Goldberg on litigation vs. legislation, impact for victims
- 19:08 — What Goldberg hopes to discover in litigation
Tone and Language
- The tone is urgent and assertive, focused on justice for individuals harmed and on the need for legal and regulatory guardrails around AI technology. Goldberg is forthright, unsparing about the harms, and unafraid to take on powerful tech companies. The host maintains a balanced, inquisitive approach, facilitating a deep dive into law and technology.
Conclusion
This episode spotlights the real, personal harm enabled by rapid advances in AI image generation, interrogates the boundaries of legal accountability for tech companies, and explores the evolving landscape of online platform liability in the generative AI era. Carrie Goldberg’s campaign to use product liability law as a wedge against Section 230 immunity could have far-reaching implications for the future regulation of AI tools and user protection online.
