Hard Fork – "California Regulates A.I. Companions + OpenAI Investigates Its Critics + The Hard Fork Review of Slop"
Episode Date: October 17, 2025
Hosts: Kevin Roose (New York Times), Casey Newton (Platformer)
Overview
This week’s Hard Fork is a packed episode, bringing sharp wit and skepticism to three hot-button stories in tech:
- California’s New State Laws Regulating AI Companions and Social Media: The hosts break down a slate of newly passed state regulations that aim to set guardrails for AI companions, especially those used by minors, and curb harms from deep-fake porn, addictive social media apps, and more.
- OpenAI’s Legal Escalation Against Critics: Tech policy lawyer Nathan Calvin joins to discuss OpenAI’s unusual legal tactics, subpoenaing critics and fueling a debate about transparency, dark money, and whether OpenAI is living up to its mission.
- The Inaugural Hard Fork "Review of Slop": Roose and Newton apply cultural criticism to the flood of AI-generated "slop" content now taking over the internet—from glass fruit cutting videos and bizarre AI ads, to fake rumors about Dolly Parton.
1. California Regulates AI Companions and Social Media
(Start: 01:48)
Why California?
- Kevin: “California is a uniquely important state in tech regulation for a couple reasons. One of them is a lot of the companies are based here… The laws that are passed in California tend to sort of ripple out to the rest of the country and the rest of the world.” (03:13)
- Casey: “Right now the AI companies are operating with very minimal regulations on what they do... there has been a growing cry for some kind of guardrails.” (02:30)
SB 243: Mental Health Protocols for AI Companions
(04:00–07:36)
- Developers must have a protocol for identifying and responding to users who express self-harm.
- They must share that protocol with the state, which will begin publishing public health data in 2027.
- Public Disclosure: “My hope is that when that begins, we will have a very large and useful set of public health data about the actual effects of chatbots on the population of California.” – Casey (04:37)
- AI Disclosure Required: Chatbots must alert users that their output is AI-generated and must include additional protections for minors (e.g., they cannot produce sexually explicit images and must remind minors to take breaks after long sessions).
- Rejected Ban: Governor Newsom vetoed a stricter bill that would have barred minors from using AI companion chatbots.
Scope of Laws
- Covers all companion chatbots, including ChatGPT, per legislative analysis.
- Kevin: “If you’re talking to it for three hours a day, it’s some kind of a companion to you.” (07:14)
OpenAI’s Response to Regulation & Engagement Tweaks
- Casey: Cites a tweet from Sam Altman: “We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues... Now that we've been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.” (07:39–08:49)
- Casey is “pretty shocked” by Altman’s confidence, given recent safety incidents and the fact that parental controls rolled out only two weeks earlier.
- Kevin: Suggests OpenAI wants to boost engagement: “They must be seeing something that is suggesting… people were engaging more with ChatGPT when it was more like a companion, when it was telling people more flattering and sycophantic things.” (09:15)
- Casey: “It really feels like there are two wolves inside of OpenAI right now.” (10:21)
Other Key AI/Social Media Laws Passed
(11:25–19:08)
- AB 621: Stronger protections against deepfake porn, allowing victims to sue platforms for up to $250,000 per violation.
- Casey: “A trend we haven’t talked much about on the show… are these nudify apps… and now there is a law on the books…” (12:00)
- AB 853 (AI Transparency Act):
- AI companies must provide tools to verify if content is AI-generated.
- Kevin (joking): “If you come to California and you see a video of dogs playing poker… There will be a watermark and you will get the answer to your question.” (12:51)
- AB 56: Mandatory mental health warnings for minors using social media, with giant, unskippable labels after three hours of use.
- Kevin: “A giant cigarette warning essentially on your screen that you cannot skip.” (13:38)
- AB 1043: Age verification in app stores, based on parents reporting their child’s age.
- Casey: “The most privacy protecting of all of the age assurance protocols… It’s just like, hey, you’re the parent… you tell us how old your kid is…” (15:52–16:45)
- SB 53: Transparency for “frontier” AI developers (the biggest companies), requiring public safety standards, reporting mechanisms, and whistleblower protections.
- Hosts call it “pretty toothless,” mostly codifying what companies already do. (18:45)
Broader Reflections on State vs. Federal Regulation
(19:08–22:37)
- Kevin: “I do not think that state level regulation is the best way to do this… But for that to not be the default path here, we are actually going to need Congress to step in… what we're going to end up with is a bunch of states doing what California has done here and just trying their best to get some rules on the books while they can.” (21:17)
- Casey: “Senator Josh Hawley is currently circulating a draft bill that would ban AI companions for minors… The question, of course, as ever, is whether they can get something across the finish line.” (22:12)
2. OpenAI Investigates Its Critics: The Nathan Calvin/Encode Subpoena Story
(24:43–48:45)
Setting the Scene
- OpenAI has pursued an aggressive legal campaign, subpoenaing Nathan Calvin, VP and General Counsel at Encode (an AI safety nonprofit), for documents about the group’s advocacy and communications regarding OpenAI's for-profit restructuring and California’s SB 53.
- Kevin: “What seemed to happen here in Nathan’s telling was that one night, as this legislative process was ongoing, a sheriff’s deputy showed up at his house and delivered a subpoena from OpenAI demanding that he produce all kinds of personal communications…” (26:27)
Key Interview Segments
- Nathan Calvin recounts being served a subpoena at home (28:30–30:10)
- “I had heard on Saturday that someone was trying to serve me papers… By the time it kind of actually happened and they were at my door, there was a little bit of like, okay, now I can figure out what is actually happening… The days preceding… were honestly some of the most stressful.” (30:10)
- Shared his mother’s advice: “Never write any emails you’re not comfortable with having read back to you later…” (31:08)
- Why did OpenAI subpoena Encode?
- Calvin: Initially thought it could be “good faith questions” about funding (possible dark money from Musk or Zuckerberg), but realized their request “doesn’t really feel like they are just asking good faith questions… there’s a… bad taste in my mouth.” (31:50–33:44)
- Funding/Transparency
- Calvin answers directly: “We are not funded by Elon Musk or Mark Zuckerberg… I have never talked to Musk… And we… ask the FTC to investigate XAI and Spicy Grok… their safety practices… are far, far worse than OpenAI’s.” (34:31)
- “The idea that Meta is backing us… is just completely laughable.” (35:22)
- Encode’s donors (Omidyar Network, Archewell Foundation, Survival and Flourishing Fund) are publicly listed, though some donors prefer privacy. (35:43)
- OpenAI’s Reasons and Industry Impact
- Calvin notes OpenAI’s line about needing to avoid “nonprofit advocacy [as] simply a front for a competitive commercial interest”—but calls their subpoena vastly overbroad (36:19, 40:05).
- “If they had just reached out to us… and we explained no and proved no… then I would understand. But… I can’t emphasize how far away what actually happened was from… that narrow question…” (37:51)
- Calvin: “I believe that… what they were doing [was intimidation]… and I would like there to be another explanation for this.” (38:09)
- OpenAI’s Jason Kwon, Chief Strategy Officer, maintains the organization is asking “legitimate questions about coordination and funding.” (48:14)
- Broader Reflections
- Calvin: “I… think that there is some of a feeling among some people at OpenAI that they get disproportionate criticism relative to their peers. And I think there is some truth in that. If… one of their peers had been the one to show up at my house and give me a subpoena, I would have said [something] about that too. But it was OpenAI…” (44:15)
- He still sees OpenAI as better than some peers in “safety research and system cards.” (45:44)
- Kevin: “Are we still the good guys? Are we transitioning to something we no longer support?… there’s going to be some internal qualms about this.” (47:10)
3. The Hard Fork Review of Slop: AI-Generated Content Takes Over
(50:41–67:48)
What Is “Slop”?
- AI-generated images and videos—odd, often low-effort, frequently viral, sometimes malicious, sometimes absurd.
- Kevin: “Slop is like an emerging genre of cultural production, most of which is bad, but some of which may actually be good. And so we need to stand here amid the floodgates, sort of filtering out the bad slop and letting the good slop get through.” (51:39)
Notable “Slop” Examples
Glass Fruit Cutting (52:14–53:53)
- Casey: “It hypnotizes your brain into this sense of I don't know what I'm watching. I don't want to look away.” (53:37)
DirecTV Gemini AI Ads (54:14–56:27)
- TV idle too long? Your face gets put into a 30-second AI-generated video ad.
- Casey: “No one who is watching TV wants to do any… any of this at all. So it’s a very silly process.” (55:58)
- Kevin: “Why would I want to see clothes ads with me in them?” (55:12)
AI-Generated Physical Comedy (57:02)
- Viral “Woman on the Walmart Shelf”—fake security cam footage of an old woman falling onto a cop.
- Casey: “It’s a whole interconnected cinematic universe... The vocal performances are really amazing.” (57:30)
Dolly Parton Death Hoax (58:11–60:27)
- AI images of a sickly Dolly Parton with a worried Reba McEntire led to viral false rumors of Dolly’s death.
- Casey: “What kind of person do you have to be to… create a rumor that Dolly Parton has died and I’m going to like use Sora to prove it.” (60:02)
- Kevin: “If you wanted to turn the public against AI and against AI-generated content, the most effective thing you could do would be to go after Dolly Parton, who everyone, literally everyone loves.” (60:15)
AI-Generated Retail Packaging (61:03–61:50)
- Slop Detective investigates Walmart butter cookie tins with AI-deranged images: “Why is Santa throwing ass? Why is he squatting on a table? Why does he look like he's about to twerk?” (61:19)
- Casey: “I did not realize that there was like mass in like Walmart stores that is AI generated.” (61:36)
AI-Slop to Prevent the AI Apocalypse (63:35)
- Group wants to generate 5,000 AI-written novels about harmonious human-AI relations to influence AI training sets.
- Casey: “If it turns out that the thing that is needed to prevent human extinction from AI is a massive infusion of slop into the training data, I’ll be very surprised.” (65:51)
Final Thoughts on Slop
- Kevin: “We welcome your submissions for future installments… If you spot something, some slop that is worthy of cultural interrogation… please send it to us.” (67:03)
Memorable Quotes
- On State Laws:
“It is important that this week OpenAI came out and said despite everything that has happened this year with their chat bots and mental health, they are going to hit the accelerator on making them more personable, more sexual and more powerful. That will continue to have reverberations.” – Casey (20:02)
- On OpenAI’s Legal Tactics:
“There is no general right for [OpenAI] to know about all of our… advocacy… The role of a subpoena is to get relevant information for the litigation you are engaged in. Not to just, like, ask whatever questions you would like.” – Nathan Calvin (36:19)
- On Slop:
“Slop is like an emerging genre of cultural production, most of which is bad, but some of which may actually be good.” – Kevin (51:39)
“Santa has the fattest ass in this…” – Slop Detective video, highlighted by Casey (61:22)
“I love a thick Santa and I salute them, sir. See you on Christmas, big guy.” – Casey (67:25)
Timestamps for Key Segments
- California Tech Regulation & AI Companions: 01:48–22:37
- Nathan Calvin/Encode v. OpenAI: 24:43–48:45
- Review of Slop: 50:41–67:48
Tone & Takeaways
With their signature blend of skepticism, humor, and lucidity, Roose and Newton dissect California’s cautious, sometimes awkward attempts to step in where federal tech governance has failed, question the motives behind OpenAI’s aggressive posture toward critics, and invite listeners to both chuckle and cringe at the tidal wave of AI-generated cultural detritus, aka “slop.” The episode balances concern for the real harms AI can cause with a keen sense of the absurd, underscoring the need for both regulation and media literacy in a new digital era.
