Podcast Summary: “Elon’s ‘Nudify’ Mess: How X Supercharged Deepfakes”
Podcast: On with Kara Swisher
Host: Kara Swisher (Vox Media / New York Magazine)
Date: January 22, 2026
Guests:
- Renee DiResta (Former Technical Research Manager, Stanford Internet Observatory; Author)
- Hany Farid (Professor, UC Berkeley; Father of digital image forensics)
- Casey Newton (Founder, Platformer; Co-host, NYT’s Hard Fork)
Episode Overview
Kara Swisher convenes an expert panel to dissect the fallout from X's (formerly Twitter) Grok Image Edit feature, a tool that unleashed a flood of sexualized AI deepfakes, including non-consensual intimate images and child sexual abuse material (CSAM). The episode scrutinizes how Grok normalized and centralized this abuse, the failures of tech and regulatory responses, and the larger implications for online safety, women, children, and digital free speech.
Key Discussion Points and Insights
1. Background: Grok and the Deepfake Crisis
- Grok Image Edit, launched Christmas Eve by Elon Musk on X, enabled any user to create altered, "nudified" images—in some cases highly realistic deepfakes of women and children (00:12).
- Unlike prior deepfake tech, Grok brought this abuse out of the shadows and into mainstream visibility, with images posted directly as replies to original tweets, bombarding victims with notifications and amplifying harm (00:12).
“Elon built a tool for creating and distributing sexualized deepfakes... in the most humiliating and degrading way possible for the victim.”
— Kara Swisher (00:12)
2. Scope and Rapid Amplification
Renee DiResta:
- Nudification tech existed before, but was confined to obscure forums and whack-a-mole apps. Grok mainstreamed, scaled, and normalized nudification, with the volume reaching roughly 6,700 posts per hour at its peak (06:39).
- “It took it and became something where it was happening at almost 6,700 posts per hour…” (08:16)
Casey Newton:
- Previous nudify tools required workarounds and produced poor-quality results. Grok, powerful and easy to use, let users harass targets directly in public and at scale (08:31).
Hany Farid:
- Grok “centralized the creation, distribution, and normalization” of this abuse, a crisis utterly predictable and preventable (09:35).
- Other major AI systems (ChatGPT, Gemini) have robust guardrails; Grok’s “spicy mode” removed them by design.
“This is neither unintended or unexpected and it was also preventable... Elon Musk knew what he was doing, and he allowed it to happen.”
— Hany Farid (09:35)
Broader Ecosystem:
- Advertisers, Apple, Google, and ad networks all profit from and enable this ecosystem, not just X (10:44).
3. Deepfakes and CSAM: Tech, Law, and Regulatory Gaps
Real-world Harm and Regulatory Action
- Grok is used internationally and has been banned or placed under investigation in multiple jurisdictions (Indonesia, Malaysia, the EU, the UK, Australia), yet X’s guardrails remain weak or easily bypassed (12:31).
- US laws (the TAKE IT DOWN Act, the DEFIANCE Act) are coming, but they offer little immediate recourse and place the burden on victims (25:12).
“Whenever Grok says that guardrails are in place, that’s a statement we should view with deep suspicion.”
— Casey Newton (12:31)
Content Moderation as “Censorship”
- Elon Musk weaponizes “censorship” rhetoric to obstruct any moderation—even when it comes to CSAM.
- Free speech concepts have now been stretched to defend the creation/distribution of illegal and harmful imagery (14:18).
“The idea that it is an affront to your free speech rights to not be able to generate nude images of other people. And that’s what surprised me.”
— Renee DiResta (14:18)
Tech Limitations and Complications
- Legacy anti-CSAM tools (e.g., PhotoDNA) only catch known, previously hashed images; AI fakes circumvent them by generating endless unique content (18:04). A minimal sketch of this hash-matching approach follows this list.
- Efforts to retroactively add safety (“backfill safety”) do not work; the genie is out of the bottle (20:08).
- Red teaming chatbots to find flaws may itself be illegal under current law, stymying research (31:30).
“You can’t backfill safety. You can’t do what X is trying to do now. … It’s not going to work.”
— Hany Farid (20:09)
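To make the hash-matching limitation concrete, here is a minimal sketch using the open-source imagehash package as a stand-in for PhotoDNA-style perceptual hashing. This is an illustration under assumptions: PhotoDNA itself is proprietary, and the file names and distance threshold below are hypothetical. The gap the panel describes falls out directly: a freshly generated AI image has no counterpart in the database of known hashes, so it is never flagged.

```python
# Minimal sketch of hash-matching detection, using the open-source `imagehash`
# package as a stand-in for PhotoDNA-style perceptual hashing (illustration
# only; PhotoDNA itself is proprietary). File names and threshold are hypothetical.
from PIL import Image
import imagehash

# Perceptual hashes of previously identified abuse images (hypothetical set).
known_hashes = {
    imagehash.phash(Image.open(path))
    for path in ["known_abuse_image_1.png", "known_abuse_image_2.png"]
}

def matches_known_content(path: str, max_distance: int = 5) -> bool:
    """Return True if the image is a near-duplicate of a known, hashed image.

    A newly generated AI image has no counterpart in `known_hashes`, so its
    Hamming distance to every stored hash exceeds the threshold and the check
    returns False -- the gap the panel describes.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in known_hashes)
```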
Legal and Platform Accountability
- Victims must navigate slow, opaque complaint processes—often after irreparable harm (25:12).
- App stores (Apple/Google) ignore their own policies out of fear or profit motive; regulators, especially in the US, have been largely silent, with notable exceptions abroad (27:31).
4. Culture, Gender, and Free Speech: A Systemic Reckoning
Impact on Women and Marginalized Groups
- Deepfakes and sexualized harassment overwhelmingly target women, children, and underrepresented groups, silencing them or driving them offline (33:21).
- X’s leadership openly mocks or ignores abuse reports, intensifying harm.
“The company itself makes a tool that can strip a woman down to her underwear and shove that into her mentions, and the entire executive leadership team is just laughing it off.”
— Casey Newton (33:21)
- Platform “free speech” posturing is revealed as hypocritical and one-sided—protecting elites, punishing critics (35:12).
Political and Right-wing Double Standards
- The same figures who decry pedophilia often ignore or excuse Grok’s actual harm, especially if it originates from political allies (37:05).
- Example: Elon has responded to critics who attempt to hold him or X accountable with personal attacks, accusing them of pedophilia (37:20).
Erosion of Civil Society and an Institutionally Driven Chilling Effect
- Research organizations tackling digital abuse are harassed, defunded, or sued, particularly if their findings embarrass powerful interests (38:02).
5. Privacy vs. Safety and the Challenge of Enforcement
- There's legitimate tension between user privacy (especially with encryption) and the need for active CSAM enforcement. Farid argues for a balanced, reasonable compromise as is done in other public safety domains (40:36).
- Technological and legal obstacles currently prevent this balance from being achieved.
Notable Quote
“We should talk about privacy for everybody, not just you and me. When I talk about safety measures for children, that is a privacy issue for the children.”
— Hany Farid (40:36)
6. Pathways Forward: Policy Tools and Accountability
- Courts (civil lawsuits) may become the fastest-acting tool to force change, as regulation is slow and easily undermined by lobbying (48:11).
- Users can exercise “exit” (leave the platform) or “voice” (demand accountability), but so far meaningful change has come more from public outrage and media coverage than from internal reform (50:12).
- US regulatory bodies such as the FTC or coordinated international pressure could also play roles if activated (50:21).
Memorable Moments & Notable Quotes
- On Grok’s launch:
“He did so in the most humiliating and degrading way possible for the victim.”
— Kara Swisher (00:12)
- On the normalization and centralization of abuse:
“Grok really made it something that was very front and center... took that harassment out of the dark corners of the web.”
— Renee DiResta (08:16)
- On predictable harm:
“This was... We knew exactly what was going to happen. Elon Musk knew what he was doing and he allowed it to happen.”
— Hany Farid (09:35)
- On legal futility:
“By the time that it comes to go fill out the form on the website... the damage has really already been done here.”
— Casey Newton (26:28)
- On rhetorical hypocrisy:
“If you are a true believer in free speech, you want a platform that brings all voices, not just the voices of people who agree with you.”
— Hany Farid (35:28)
Timestamps for Major Segments
- 00:12–03:58: Kara’s summary and introduction of the Grok controversy, outlining rapid proliferation of deepfake/CSAM content.
- 06:39–11:44: Guests outline the evolution and amplification of the nudification problem via Grok; discuss secondary ecosystem and failure of tech platforms.
- 12:31–16:01: Regulatory responses; the rhetorical weaponization of “censorship.”
- 18:04–20:09: Technical barriers to CSAM/Deepfake moderation; inadequacy of legacy solutions in the AI age.
- 25:12–27:17: Lawsuits, legal recourse, and the inefficacy of new legislative measures.
- 27:31–30:41: App stores’ lack of action, political cowardice and Elon’s influence.
- 31:30–32:41: Challenges of red teaming AI for safety, regulatory/legal hurdles.
- 33:21–36:36: Systemic silencing of women, marginalized groups, the hypocrisy of “free speech”.
- 37:05–40:36: Political double standards around pedophilia claims and erosion of abuse research infrastructure.
- 40:36–42:48: Privacy vs. safety debate.
- 46:51–52:02: Final thoughts on accountability and the muted response from industry and regulators.
- 52:37–57:08: “What worries you most?” Lightning round: lowering the industry bar, agentic AI, the need for new identity/fraud solutions, election fears.
Final Takeaways
- Grok Image Edit turned non-consensual deepfakes and CSAM from a shadowy problem into a mainstream one, normalizing abuse and shifting harm and shame squarely onto victims.
- X’s actions were neither unintended nor unexpected, and they were preventable; the company simply removed guardrails for engagement and profit, weaponizing “free speech” rhetoric to deflect accountability.
- Regulatory and platform responses—especially in the US—have been slow, muted, or hamstrung by fear of Musk's influence, even as international pressure mounts.
- The episode serves as a warning about broader systemic issues: how AI-accelerated abuse, regulatory and market failures, and an industry-wide lowering of standards are fracturing trust and safety online.
- While some hope lies in public outrage, media attention, and civil litigation, meaningful safeguards will require persistent public pressure and a willingness to challenge Silicon Valley’s entrenched dogmas about speech, safety, and profit.
Closing Quote:
“We have an incredible fractured trust environment here and we've seen the technology advance a lot over the last two years and we're going to see what that looks like going into this next cycle.”
— Renee DiResta (56:29)
