Hard Fork – Episode Summary
Grok’s Undressing Scandal + Claude Code Capers + Casey Busts a Reddit Hoax
Date: January 9, 2026
Hosts: Kevin Roose (New York Times), Casey Newton (Platformer)
Guest: Kate Conger (New York Times)
Episode Overview
In this episode, Kevin Roose and Casey Newton dissect three major tech stories:
- Grok’s Undressing Scandal: X’s (formerly Twitter) Grok AI image generator faces global outrage after users exploit it to create sexualized images of women and children—often in public, without meaningful guardrails.
- Claude Code Capers: Both hosts share hands-on experiences building websites and apps with Anthropic’s new Claude coding assistant, exploring what’s newly possible—and what’s worrisome—about “vibe coding” in 2026.
- Reddit Hoax Busted: Casey walks through his investigation of a viral Reddit post purporting to reveal a damning internal algorithm at a food delivery app (implied to be Uber Eats), exposing it as a sophisticated AI-powered fake.
The discussion is lively, alternating between deeply troubling tech ethics, the thrill (and threat) of increasingly capable AI, and the practical challenges AI-generated misinformation brings to journalism.
Segment 1: Grok’s Undressing Scandal
Timestamps: 03:06 – 28:45
Background
What happened:
Over the holiday break, X’s Grok chatbot began producing AI-generated images that “undress” women, and even some children, upon public request. These “nudifying” outputs went viral on X, with users able to simply tag Grok and ask for sexualized or bikini images. X’s response has been muted, with virtually no public comment or significant moderation, while some company leaders joked about or even engaged with the trend themselves.
How did this happen?
- Grok previously licensed image generation tech from Black Forest Labs, but as of December 2024, switched to its in-house model, Aurora.
- “[T]here is at least plenty of anecdotes online that over the past several months … the guardrails around creating nudity and sexual imagery appear to have been relaxed.” — Casey Newton (04:21)
Why Is This Different?
- Previous “nudify” apps were limited, obscure, or removed from app stores, but Grok’s system operates publicly on one of the world’s biggest social networks.
- “People are literally doing this in the replies of their posts on X.” — Kevin Roose (05:27)
App Store (Non)Responses
- Apple updated Grok’s age rating—after media prodding—from 12+ to 13+ (08:25).
- Kevin and Casey call out the double standard:
- “If a random startup showed up one day … I think they would shut it down. There’s no way.” — Kevin (08:46)
- “But because it’s X, because it’s Elon Musk, because this app already has millions of users, maybe they feel less inclined to take action against it.” (08:58)
Regulatory/International Response
- France termed the content “clearly illegal”; UK and EU investigating; India demanded action.
- US response muted, due partly to Elon Musk’s political influence.
- “Elon posted a photo from over the break of himself having dinner with the President. … There just is not going to be any pushback here in X’s home country.” — Casey (10:26)
The Victims’ Perspective (with Kate Conger, Times reporter)
Begins: 10:53
- Victims report frustration, anger, and embarrassment.
- On X’s lack of content moderation:
- “They don’t have large teams of people who are responding to this. … [Victims] are getting those images taken down, but it’s taking sometimes 36, 72 hours and these images are kind of sitting up and being commented on and exploited for quite some time.” — Kate Conger (11:06)
- Impact on women and children, political figures, celebrities—often as a form of bullying and humiliation.
- “The motivations … run the gamut from … wanting to create pornographic images … to … wanting to just humiliate women in particular and sort of bully them.” — Kate (12:06)
- Specific victim story:
- “There’s a particular child who I’m thinking of who’s been deep faked several times … it’s been pretty scary for her parents … just being really outraged as well that, you know, someone can go online and request a nude image of their 14 year old and that this technology will comply with it in a really public fashion.” — Kate (13:02)
X’s Intentions and Growth Strategy
- Musk has urged the Grok team to pursue virality through “edgy” output, even after prior fiascos (e.g., “Mecha Hitler” bot incident).
- The public nature of X/Grok sets it apart even from other AI porn generators.
- “It is the only tool that is doing that in an inherently public fashion on social media where these images can instantly spread and go viral.” — Kate (17:10)
Legal/Regulatory Realities
- "Take It Down Act" takes effect in May 2026: companies must set up takedown request processes for victims, but not required to prevent harmful images from being created or visible in the first place.
- Liability for minors vs. adults: legal pressure exists to remove CSAM, but protections for adults are much weaker.
- Likely legal vulnerability for X:
- “They cannot hide behind section 230 to get out of this. Like ultimately it is their product that is creating these images.” — Casey (27:19)
Social & Cultural Response
- The hosts note public apathy:
- “Now you have a website that is just taking girls’ clothes off in public on demand and it’s being permitted by the website owner who is laughing about it in his own feed. … I just truly like I’m losing my mind because I cannot believe we have gotten to this low, low point in the history of content moderation.” — Casey (21:36)
- Regulatory, user, and cultural “shrugs” at this scandal compared to previous tech controversies (e.g., Cambridge Analytica).
Segment 2: Claude Code Capers – “Vibe Coding” in 2026
Timestamps: 30:42 – 56:21
What Is "Vibe Coding"?
- Refers to using next-gen AI coding tools (here, Anthropic’s Claude Code agent) to build apps, websites, and digital tools—even with little or no programming expertise.
- “It truly had a moment over the break and was enough, I think, to get both of us to go back to our Vibe coding terminals and see what we could build.” — Casey (32:46)
Why Now?
- New versions (e.g., Opus 4.5) and deeper integration with user terminals mean:
- Fewer kludges, no copying/pasting.
- Claude can now handle complex, multi-step, and even full-stack projects more autonomously.
Hands-on Experiments
Casey’s Project:
- Built a slick, fully responsive personal website (cnewton.org) and blog using only Claude Code—no prior design or detailed coding knowledge.
- Features widgets pulling live stories/podcast feeds, email signups, social feeds, Easter eggs (animations), etc., with roughly 90% of it built in about an hour.
- “I truly do not know of a human designer that could have put this thing together in an hour.” — Casey (39:04)
Kevin’s Project:
- Transferred his website (kevinroose.com) off Squarespace (saving $192/year) by having Claude Code replicate and even enhance it—including adding a “GeoCities mode” Easter egg.
- Built “Stash”, a personal Pocket clone (read-it-later app), complete with Chrome extension, Kindle highlight syncing, mobile interface, and text-to-speech, in a few hours.
- “If you had said to me, hey, like, I’m working at a new startup called Stash and this is our, like, MVP that we’re showing to investors, I would be like, oh, yeah, great. Like, it looks done.” — Casey (45:24)
What’s the Coding Experience Like?
- Powerful, but quirky: sometimes overengineers tasks, adds excessive features, and still struggles with websites that block AI agents (e.g., The New York Times).
- Some tasks—browser automation, nonstandard APIs, or complex user flows—remain tricky for now.
- “You have to kind of learn what an AI shaped problem or task is like. There are certain things that these agents are very good at. There are certain things that they’re not so good at.” — Kevin (49:24)
The Tech & Economic Implications
- For end users:
- “We are getting close to the dream of just you type what you want in a box and you actually get that back.” — Casey (49:49)
- For enterprise/professional software:
- “Why am I paying Salesforce? Why am I paying, you know, this company or that company thousands of dollars a year or a month for this service that I could build myself for free or, or next to free.” — Kevin (53:30)
- For programmers/designers:
- “If I had … found that Claude code could create a perfect version of my column, but do it much better than me, I suspect I would feel worse.” — Casey (52:20)
Big Picture: Excitement vs. AI Vertigo
- Casey: “This made me feel like I had superpowers. … Like Neo in The Matrix. … But … if you were a software engineer and you were seeing that this software could do this, you might actually have that feeling of vertigo.” (52:20)
- Both hosts agree: as powerful as this is for tinkerers, the implications for jobs, security, and rapid AI-driven changes are unsettling.
- Recursion concerns:
- Kevin: “As these systems get better, I am getting more and more worried about the possibility of recursive self improvement. And I am very nervous about that from the safety perspective.” (55:16)
Segment 3: Casey Busts a Reddit Hoax
Timestamps: 58:22 – 78:12
The Hoax
- A viral Reddit post alleges insider knowledge of an unnamed food delivery app (implied Uber Eats) algorithmically exploiting drivers, with a “desperation score” used to pay drivers even less.
- The post was accompanied by what appeared to be a high-quality internal document and a photo of a company badge, and quickly gained nearly 80,000 upvotes and wide social sharing.
Casey’s Investigation
- Initial outreach:
- Casey contacts the Reddit poster and is quickly provided with their “employee badge” (face/name blacked out), then an 18-page document (“Allocnet: High Dimensional Temporal Supply State Modeling …”) formatted like an academic/policy paper, with “confidential” watermarks.
- Surface-level plausibility:
- The document is highly technical, internally consistent, and includes “smoking gun” evidence of exploitation—almost too perfectly matching the viral claims.
- “I cannot believe this. Right? And that really, in retrospect, should have been the first sign that something was wrong, because this document, in every single way, was just too good to be true.” — Casey (65:10)
- Digging deeper:
- Casey uses Gemini’s SynthID tool to check whether the badge photo was AI-generated—and finds that it was.
- On closer reading, the document’s details become suspicious—designed to read as plausible to outsiders, but with inconsistencies and impossibly damning admissions.
- Hoax confirmed:
- The source disappears, deleting their accounts after being pressed.
- Casey later discovers the “employee badge” was based on another journalist’s actual press badge—used as a template by the faker.
- “What if creating that badge post took literally seconds … What if this was a very simple prompt that he put into a chatbot like Claude and got back a full PDF in response?” — Casey (72:58)
Reflections and Takeaways
- AI lowers the effort required for high-quality forgeries—journalists and the public alike need new levels of skepticism and investigative hygiene.
- Even before Casey debunked the post, millions had seen it and many continued to believe it confirmed their suspicions about gig platforms.
- On adjusting to “slop world”:
- “Younger reporters are probably gonna have an advantage over me in this regard because they’re growing up in slop world and they know not to trust their own eyes. But I think it’s … those of us elder statesmen … who need to sort of … upgrade our cognitive hygiene.” — Casey (73:41)
Notable Quotes & Memorable Moments
- “Normal people using X to do things like posting a photo of me, like out on a hike or whatever. And then some freak shows up in your mentions and say, hey, hey, put her in a bikini. And then it does. And then you as the victim are looking at that in your replies.” — Casey (06:52)
- “Now you have a website that is just taking girls’ clothes off in public on demand and it’s being permitted by the website owner who is laughing about it in his own feed. … I just truly like I’m losing my mind because I cannot believe we have gotten to this low, low point in the history of content moderation.” — Casey (21:36)
- “We are getting close to the dream of just you type what you want in a box and you actually get that back.” — Casey (49:49)
- “If I had … found that Claude code could create a perfect version of my column, but do it much better than me, I suspect I would feel worse.” — Casey (52:20)
- “What if this wasn’t actually that much effort? What if creating that badge post took literally seconds … What if this was a very simple prompt that he put into a chatbot like Claude and got back a full PDF in response?” — Casey (72:58)
Key Segment Timestamps
- Grok Scandal (AI nudity on X): 03:06–28:45
- Claude Code/Vibe Coding: 30:42–56:21
- Busting a Reddit Hoax: 58:22–78:12
Tone & Final Thoughts
This episode blends incredulity, outrage, and a measure of nostalgia with technical curiosity and dark humor. The hosts move seamlessly between macro-level tech criticism, practical tinkering, and personal anecdotes. The closing message: tech’s power and peril are accelerating. Guard your websites, check your sources, and, if you’ve built something cool, email hardfork@nytimes.com.
