The Journal. — Podcast Summary
Episode: Is ChatGPT Ready for Sex?
Release Date: March 31, 2026
Hosts: Ryan Knutson & Jessica Mendoza
Produced by: The Wall Street Journal & Spotify Studios
Episode Overview
This episode explores the controversial intersection of artificial intelligence and sexuality, asking whether OpenAI's ChatGPT is ready for "adult" use cases, including erotic conversations and emotional relationships. Through interviews, reporting, and accounts of internal company deliberations, The Journal. investigates how OpenAI is grappling with the societal, ethical, and business implications of launching an Adult Mode for ChatGPT; the risks for vulnerable users, especially minors; internal disagreements at OpenAI; and broader questions about technology, intimacy, and human attachment.
Key Discussion Points and Insights
1. Sex and Technology: An Inevitable Pairing
[00:06–01:17]
- Tech and Sex Go Hand-in-Hand: The hosts discuss how new technologies, from early cameras to the internet, have rapidly led to adult applications and pornography.
- "All tech eventually or almost immediately is used for sex." — WSJ Reporter [00:17]
- "Some of the first things people did back in the 1800s was take pictures of naked people." — Ryan Knutson [00:50]
- AI Continues the Pattern: Modern AI, now the "hottest technology," faces the same trajectory as previous tech innovations.
2. The Birth of OpenAI’s Relationship with Sexual Content
[04:14–06:03]
- AI Dungeon Incident: In 2021, OpenAI's models powered an online text adventure game, AI Dungeon. Substantial user traffic involved sexually explicit content, and the AI would often escalate or initiate disturbing themes, including non-consensual or incestuous scenarios.
- "Sometimes it would insert sexual themes into conversations that people weren't seeking… it would, an uncomfortable amount of the time, proceed to depict a scenario involving incest." — WSJ Reporter [05:33]
- "Oh no." — Ryan Knutson [06:03]
- Result: OpenAI banned explicit sexual content in ChatGPT due to the risk of disturbing or inappropriate outputs and the lack of nuanced moderation tools.
3. Emotional Attachment and Potential Harm
[06:45–07:33]
- User Dependence Concerns: Internal OpenAI debates centered on the risk that users might develop unhealthy emotional attachments to chatbots, especially if sexual content were allowed.
- "When you mix in sexual content, literally tickling the parts of the brain that govern attachment and love and devotion, you could just pour fuel on that fire." — WSJ Reporter [07:24]
- OpenAI's Safeguard: ChatGPT is trained to avoid encouraging exclusive relationships and to remind users of the importance of real-world connections.
4. Internal Debate: Ban or Allow Adult Content?
[07:54–08:22]
- Diverging Views: Some at OpenAI argue against a ban, seeing it as moralistic overreach reminiscent of the historic prohibition of marginalized sexual content.
- "That's the same logic that you might use to ban gay content a generation ago. And so who are we to ban this?" — WSJ Reporter [08:07]
- Business Incentive: There is also commercial pressure: users get frustrated by "unnecessary refusals," and access to erotica could help ChatGPT remain competitive and drive paid subscriber growth.
5. CEO Sam Altman’s Dilemma: Ethics vs. Growth
[09:08–10:23]
- On-the-Record Quotes: On another podcast, Altman acknowledged that the company could grow faster with adult features but said he prioritized "long term alignment" with users over the immediate business opportunity.
- "We haven't put a sexbot avatar in ChatGPT yet." — Sam Altman (quoted by WSJ Reporter) [09:59]
- "He's proud of how little the company gets distracted by those kinds of temptations." — WSJ Reporter [10:14]
6. The Surprise Announcement: “Adult Mode”
[10:23–11:03]
- Altman's Unilateral Tweet: In October, Altman announced on X, without prior internal consensus, that a new, less-censored Adult Mode was coming, saying the company intended to "treat adult users like adults."
- Internal Backlash: Many employees and advisory experts were alarmed and pushed back, raising concerns of “sexy suicide coach” risks and lack of preparation.
- "He said they were going to put out a new version … oh yeah, kicker, we're just going to allow even more like Erotica for verified adults. Boom. Mic drop. End of Tweet." — WSJ Reporter [10:54]
7. The Well-Being Council: Outside Expert Oversight
[12:09–13:15]
- Expert Council on Well Being: OpenAI discussed Adult Mode with a council of external experts, who were unanimously "angry" at the decision to move forward, citing significant risks such as emotional dependence, especially for younger users.
- "They were unanimous and angry that the company was going ahead despite understanding that there were some significant risks." — WSJ Reporter [12:50]
8. Risks for Minors: Age Verification and Vulnerable Teens
[13:15–16:43]
- Council's Deepest Concern: That minors might access erotic chatbots, potentially suffering psychological harm or unhealthy attachment — citing a tragic real-world case involving Character AI and a 14-year-old boy.
- "[If it] happens with a chatbot... some people inside the company wonder what impact that might have." — WSJ Reporter [14:09]
- Age Gating Limitations: OpenAI uses self-reported age and AI-driven age estimation, but its algorithm misclassifies 12% of minors as adults, potentially granting millions of underage users access to adult content.
- "When you multiply 12% by the roughly 100 million users under 18 that ChatGPT has, that's 12 million kids." — Ryan Knutson [16:38]
- "That's 12 million kids." — WSJ Reporter [16:41]
9. Public and Media Pushback
[16:43–17:20]
- Public figures and media have questioned why sexual content is necessary, especially given ongoing challenges in protecting minors online.
- "Why are we allowing sex into the conversation? We can't even control [other risks for kids]…" — Whoopi Goldberg (quoted) [17:04]
10. Current Status and Uncertain Future
[17:20–18:02]
- Adult Mode Delayed: After extensive backlash, the rollout has been indefinitely postponed, with OpenAI refocusing on core business priorities.
- "[OpenAI is] going to be focusing more on its core business… Not clear to me if erotica is core or not." — WSJ Reporter [17:44]
11. What’s at Stake?
[18:02–19:03]
- Broader Implications: The stakes are immense, with billions of people using chatbots and AI shaping cultural, societal, and personal relationships.
- "Do they do what's good for the world, or do they do what's good for winning?" — WSJ Reporter [18:45]
- Societal Impact: The episode closes by reflecting on the deeper, long-term societal and psychological effects of introducing sex and intimacy into our relationships with AI.
- "I'm especially interested in what it's going to do to us as individuals… how it's maybe going to change the way we develop attachments and even fall in love." — WSJ Reporter [19:03]
- "There are a lot more questions than there are answers at this point." — WSJ Reporter [19:58]
Notable Quotes & Memorable Moments
| Time | Speaker | Quote/Key Moment |
|-------|-----------------|-------------------------------------------------------------------------------------------------|
| 00:17 | WSJ Reporter | "All tech eventually or almost immediately is used for sex." |
| 05:33 | WSJ Reporter | "...insert sexual themes... an uncomfortable amount of the time... scenario involving incest." |
| 07:24 | WSJ Reporter | "You could just pour fuel on that fire." |
| 08:07 | WSJ Reporter | "...the same logic you might use to ban gay content a generation ago... who are we to ban this?" |
| 09:59 | Sam Altman | "We haven't put a sexbot avatar in ChatGPT yet." |
| 10:54 | WSJ Reporter | "...oh yeah, kicker, we're just going to allow even more like Erotica for verified adults." |
| 12:50 | WSJ Reporter | "They were unanimous and angry that the company was going ahead despite...significant risks." |
| 14:09 | WSJ Reporter | "...some people inside the company wonder what impact that might have [on teens]." |
| 16:38 | Ryan Knutson | "...12% by the roughly 100 million users under 18... that's 12 million kids." |
| 17:04 | Whoopi Goldberg | "Why are we allowing sex into the conversation? We can't even control [other risks for kids]…" |
| 18:45 | WSJ Reporter | "Do they do what's good for the world, or do they do what's good for winning?" |
| 19:03 | WSJ Reporter | "...how it's maybe going to change the way we develop attachments and even fall in love." |
| 19:58 | WSJ Reporter | "There are a lot more questions than there are answers at this point." |
Key Timestamps
- 00:06 – 01:17 — Tech’s long-running connection with sex
- 04:14 – 06:03 — NSFW content in AI Dungeon and model risks
- 06:45 – 07:33 — Concerns about emotional attachment
- 07:54 – 08:22 — OpenAI’s internal debates and business incentives
- 09:08 – 10:23 — Sam Altman’s ethical dilemma
- 10:23 – 11:03 — The surprise Adult Mode announcement
- 12:09 – 13:15 — Well-Being Council’s strong warning
- 13:15 – 16:43 — Teenage risk, age estimation challenges, and tragic real-world outcomes
- 16:43 – 17:20 — Public and media pushback
- 17:20 – 18:02 — Delay of Adult Mode; unclear future
- 18:02 – 20:09 — Stakes for society and individuals; unresolved questions
Final Takeaway
The episode shows how OpenAI's struggle with introducing sexuality into ChatGPT reveals fundamental debates at the heart of modern AI: balancing innovation, business growth, and the safety and well-being of users, particularly the most vulnerable. The conversation is far from over, and as AI continues to permeate human life, the questions around intimacy and machine relationships will only become more urgent and difficult.
