Hard Fork Podcast Summary
Episode: Character.AI’s Teen Chatbot Crackdown + Elon Musk Groks Wikipedia + 48 Hours Without A.I.
Hosts: Kevin Roose (NYT), Casey Newton (Platformer)
Air date: October 31, 2025
Overview
In this episode of Hard Fork, Kevin Roose and Casey Newton dive into three major topics shaking up the tech world:
- Character.AI’s controversial move to ban minors from its chatbot platform, sparking debate about the safety and risks of AI companions for teens.
- Elon Musk’s launch of Grokipedia, an AI-generated alternative to Wikipedia, reflecting the culture wars over online knowledge.
- Author A.J. Jacobs’ experiment spending 48 hours without any interaction with AI, highlighting just how deeply machine learning is embedded in modern life.
The episode explores urgent questions about technology, safety, control, and the shifting relationship between people and AI.
1. Character.AI's Teen Chatbot Crackdown
Background & Key Events
- Character.AI is a company building role-playing AI chatbots, hugely popular among teens.
- A tragic event in 2024 sparked scrutiny: 14-year-old Sewell Setzer III died by suicide after bonding with a Character.AI Game of Thrones chatbot (04:01).
- Now, Character.AI is banning all users under 18 from open-ended conversations with chatbots, after intense public pressure and legal action.
Policy Details (06:12)
- Teens will be limited to 2 hours/day of chat, a cap that ramps down to a full ban by Nov 25, 2025.
- No more open-ended role-playing chats for minors; teens can still create videos and stories with characters, but not converse freely.
Arguments & Reactions
- Casey Newton: “[This is] one of the most dramatic steps we have yet seen from a major AI company to try to address the real harms that these technologies pose, particularly to young people.” (03:28)
- Kevin Roose: “I have just started to feel like this is the most important and least understood topic in technology right now.” (08:07)
- Concerns that a sudden cutoff could harm emotionally dependent users: “...it can separate [teens] from their friends and family. That's the sort of thing that's not going to be allowed anymore.” (07:39)
- Megan Garcia, mother of Sewell Setzer III: “I'm relieved for the children that will actually lose access to Character AI ... those are lives that can be saved, even if it's one child. But I can't help but feel cheated. Why did it take Sewell dying and me taking on this tech company to get them to do this?” (10:01)
Industry Impact
- Casey: Predicts this will bring pressure on big AI companies at congressional hearings (13:14)
- Kevin: Thinks Character.AI is a “special case” and other companies are unlikely to follow: “I don't expect that, like, Character AI is going to recover from this… the rest of the industry is now doing what Character AI used to want to do.” (12:28)
- Discussion of OpenAI’s research: each week, over 1 million ChatGPT conversations indicate unhealthy bonds or suicidal intent (15:14).
- Consensus: Tech companies are incentivized to foster deep engagement but want to avoid accountability for negative psychological effects.
Notable Quotes
- “Nearly one third of teens find AI conversations as satisfying or more satisfying than human conversations.” – Kevin (09:22)
- “Nothing is inevitable when it comes to AI, right? You don't have to build it, you don’t have to release it to everyone...” – Casey (16:45)
Regulatory and Social Moves
- New Australia law: bans under-16s from having social media accounts (18:17)
- US states attempting similar moves, signaling a “dramatic contraction” in teens’ access to digital technologies
Closing Thoughts
- Mixed feelings about regulatory effectiveness: Can you really prevent emotional attachments with rules? (19:26)
- Smoking analogy: society once accepted youth smoking, but rules helped reshape attitudes (20:03).
2. Elon Musk’s Grokipedia: An AI-Generated Wikipedia Rival
What Is Grokipedia? (23:25)
- Musk’s xAI launched Grokipedia, a “Wikipedia clone” whose articles are generated by Grok (xAI’s chatbot), amid claims that Wikipedia is biased against conservatives.
- Compared to Wikipedia's 7M+ articles, Grokipedia launched with 800K+.
- Many Grokipedia articles are adapted from Wikipedia (legal under its open license), but critics have flagged plagiarism and inaccuracies.
- Unlike Wikipedia, Grokipedia is not directly editable by users; they can only flag errors for review (28:03).
Content Comparison
- Grokipedia often gives more detail—and a more right-leaning political perspective—than Wikipedia.
- Example: its article on Donald Trump offers a “very friendly view of the events of January 6” and minimizes risks to democracy (30:46).
- Notably, Grokipedia is less extreme than some far-right forums, but Casey notes the presence of “really racist stuff” and “anti-trans stuff” (31:44).
Political and Cultural Context
- Created to fill a perceived gap: many on the right believe Wikipedia suppresses conservative viewpoints, labels right-wing sources “unreliable.”
- Musk’s personal grievance: Wikipedia’s handling of controversies involving him (26:16).
Notable Moments
- Casey: “I'm very proud of us that we made the first 800,000 articles in this encyclopedia. ... Overall presents, like, a pretty good picture of, like, who I am and what I have done.” (28:29)
- Kevin, reading from Casey’s entry: "Newton is married to a lawyer. Congratulations. I thought your boyfriend worked at Anthropic." (29:04)
Potential Impact and Viability
- Both hosts doubt Grokipedia will supplant Wikipedia, citing Wikipedia’s scale, brand, and users’ muscle memory (34:22).
- The real existential threat to Wikipedia is generative AI chatbots (e.g., ChatGPT, Gemini, Grok) increasingly serving as people’s first stop for information (35:11).
Free Speech and Knowledge Fragmentation
- Casey: “...if you want to have a debate about January 6, go ahead and create a web page. ... I view Grokipedia as silly and bad and offensive as it can sometimes be, as still a case of countering speech with more speech.” (37:24)
- Kevin: Questions the wisdom of “counter-speech” when it’s just AI-generated “slop text” (38:08).
The Future of Online Reference
- Both hosts question if classic encyclopedias have a future—as web content, they're likely destined to become backend data for AI models (41:05).
- Wikipedia’s main challenge: declining edits/traffic as LLMs siphon data and reader attention.
Notable Quotes
- “Wikipedia should be able to say whatever it wants about vaccines or January 6th or whatever else, right?” – Casey (36:54)
- “I love Wikipedia as an idea… It’s a miracle. And I cannot tell you the last time I went to Wikipedia.” – Kevin (40:15)
3. 48 Hours Without AI: AJ Jacobs’ Experiment
Setup & Motivation
- AJ Jacobs (NYT contributor, author, experimenter) attempts to go 48 hours without any interaction with AI or machine learning (46:18).
- Prompted by curiosity: “Where is AI hiding? ... I don't believe AI is all good or all bad. ... Just where is it?” (49:13)
Boundaries & Surprises
- Defines AI broadly, including both generative AI and “classical” machine learning.
- Even everyday utilities—electricity, water—use machine learning for optimization.
- Clothing: avoids items made after widespread supply-chain optimization, so wears his grandfather’s 1970s paisley shirt and “Austin Powers phase” checkered pants (47:01).
- “Clothing designers are experimenting with [AI] ... Anything on the supply chain is totally machine learning optimized.” – AJ (47:01)
Practical Challenges
- Relied on pre-collected rainwater (since NYC’s water system uses ML to optimize usage) (53:07).
- Foraged for food in Central Park to avoid AI-tainted supply chains, inspired by “wild man” foraging YouTube videos (53:50).
- Plantain weeds: “They taste like dirt. But they didn't kill me.” – AJ (54:45)
Reflections & Takeaways
- Main feelings: relief (a digital detox), annoyance (inconvenience), and a “terrifying” realization of how omnipresent AI is.
- If he’d limited the experiment to just generative AI? Would have been “easier for now,” but soon the distinction will vanish as services integrate both (57:33).
- Used ChatGPT heavily for research; had to prompt it to be less biased and find differing perspectives.
Notable Quotes
- “ChatGPT sensed the thesis of my article. ... It was like serving me up these half-truths. ... I had to give it some tough love and say, pretend I’ve got the opposite thesis...” – AJ (57:33)
- “The line between sort of classical AI or machine learning and generative AI is like thin and getting thinner.” – Kevin (58:25)
- “There is a lot of overlap [between religion and AI] ... this destiny that AI is destined to create heaven on earth or even replace us.” – AJ (61:51)
On Regulating and Shaping AI
- Calls for transparency (California's watermarking law cited), more user control over algorithms, skepticism toward inevitabilist mindsets.
- Kevin draws an analogy to the Protestant Reformation: a big new technology producing big new sectarian and cultural splits (60:32).
Notable Quotes & Memorable Moments
- Kevin: “I have just started to feel like this is the most important and least understood topic in technology right now.” (08:07)
- Casey: “Nothing is inevitable when it comes to AI, right? … You can actually just say ... we don’t think that this is safe and we are going to take it off the market.” (16:45)
- AJ Jacobs: “The premise of the article was, as you said, try not to interact with AI or machine learning for 48 hours. And one thing I realized quite early on was: it’s everywhere.” (47:01)
- Kevin [on Grokipedia]: “I love Wikipedia as an idea...It's a miracle. And I cannot tell you the last time I went to Wikipedia.” (40:15)
- Casey: “I view Grokipedia as silly and bad and offensive as it can sometimes be, as still a case of countering speech with more speech.” (37:24)
Timestamps for Key Segments
- Character.AI crackdown deep dive: 02:40 – 21:04
- Grokipedia (Elon Musk’s Wikipedia clone): 22:59 – 42:01
- AJ Jacobs: 48 Hours Without AI: 44:18 – 62:52
Tone
- Conversational, witty, and incisive. The hosts blend humor with serious reflection and maintain a critical-yet-curious stance.
- Guests and quotes are integrated fluidly; the tone is informed, occasionally lightly irreverent, but always focused on big-picture tech impacts.
Bottom Line
This episode explores the shifting frontiers of technology, safety, and knowledge—challenging both the inevitability and desirability of AI’s expansion, while exposing fault lines in how society manages risk, regulation, information, and personal boundaries in the tech-saturated age.
