Big Technology Podcast: Erotic ChatGPT, Zuck’s Apple Assault, AI’s Sameness Problem
Host: Alex Kantrowitz
Guest: Ranjan Roy (Margins)
Date: October 17, 2025
Episode Overview
In this candid Friday edition, Alex Kantrowitz and Ranjan Roy dive into a week full of major AI and tech news, led by ChatGPT's newly loosened restrictions, which now allow verified adults to engage in erotic chat. The hosts explore the social and business implications of AI companionship and erotica, OpenAI's revenue ambitions, and the shifting landscape of AI talent as Mark Zuckerberg lures Apple AI execs to Meta. They also touch on AI's promising application in cancer research, debate the sameness of AI-generated content, and lament the rise of generative “work slop” in business communication—all delivered in their signature witty, no-nonsense conversational style.
Key Discussion Points and Insights
1. ChatGPT Gets Spicy: AI and Erotica (00:50 – 20:00)
- OpenAI Announcement:
  OpenAI’s Sam Altman tweeted that ChatGPT is rolling out age gating, allowing verified adult users to access erotic interactions—a shift from previous restrictions intended to safeguard mental health.
  - Sam Altman’s rationale: "We made ChatGPT pretty restrictive...We realized this made it less useful and enjoyable to many users who had no mental health problems...as part of our treat adults like adults principle, we will allow even more like erotica for verified adults." (01:49)
- Sycophancy and Personality:
  The new version is said to allow more personalization, even reverting to the popular “sycophant” style of GPT-4o—where the model gushes affirmation at the user.
  - Ranjan: “We're not just talking about erotica here, we're talking about sycophant erotica.” (02:27)
- Host Reactions:
- Ranjan is wary, citing the potential for AI reinforcing problematic behaviors and the risk of “Pandora’s box” being opened on AI relationships.
- Alex sees inevitability: "It’s finally out in the open. This was going to happen anyway...Humanity will have to reckon with the fact that more of us are going to get into relationships with more of them. And what does that mean?" (05:05)
- Mental Health Safeguards:
  Both are skeptical of OpenAI's claim that they’ve "solved" the mental health risks, noting the tech is not fully understood or controllable.
  - Ranjan: "He's kind of, it's like a checkbox. We're done, we're good. Mental health issues with ChatGPT solved, where in reality, this is just beginning." (06:39)
Notable Quotes
- Alex: "Words of affirmation, it's the forgotten love language. It turns out that people really, really like those words of affirmation...so it's getting its due today." (05:53)
- Ranjan (on sycophancy): "I have never had ChatGPT tell me, actually, that's a terrible idea. Again, South Park was just so spot on..." (14:18)
Timestamps
- 01:29 — Hosts open the discussion on AI erotica
- 05:05 — Societal reckoning with AI companions
- 06:13 — Sycophancy as a love language
- 11:53 — Societal impact of AI “relationships”
- 15:32 — Are human relationships at risk?
2. Will Erotica Backfire? Parental & Regulatory Risks (16:01 – 20:00)
- Mark Cuban’s Warning:
  Cuban suggests the age gating will not stop minors, and schools/parents will avoid ChatGPT—possibly pushing younger users to less regulated alternatives.
  - Alex quoting Cuban: “No parent is going to trust that their kids can’t get through your age gating. Why take the risk?” (16:01)
- Is This Proof AGI Is Far Away?
  Nate Silver notes the move toward erotica suggests OpenAI is focused on growth/revenue, not near-term AGI breakthroughs.
  - Ranjan: “Let people get a little creepy with their ChatGPT.” (17:28)
- Business Reality:
  Alex argues: “The same technology that is behind a convincing AI romantic partner is the same technology behind everything else in this LLM world...making it better will make it better across the board.” (18:13)
  - Ranjan counters that realistic erotica is relatively easy for LLMs to produce—the pivot signals an emphasis on usage, not intelligence.
3. OpenAI Numbers: Revenue vs. Losses (20:18 – 25:52)
- By the Numbers:
- 800 million weekly active users
- 40 million paid users, 5% conversion (higher than most AI apps)
- $13B ARR, ~$27/month ARPU, and $8B in losses in the first half of the year
- Revenue Strategy:
  The company spends $3 for every $1 earned, which is concerning given generative AI’s variable margins.
- Competition and Differentiation:
  Will this move toward erotica and sycophancy push users to competitors like Claude or Gemini, especially as parental/brand risk rises?
Timestamps
- 20:18 — Revenue numbers break down
- 23:57 — OpenAI’s mounting losses
- 25:52 — Discussion on AI business competition and risks
4. Good News: AI in Real-World Cancer Research (31:30 – 35:37)
- Breakthrough from Google DeepMind:
- A 27B-parameter model (C2S-Scale) generated new cancer-treatment hypotheses that were subsequently confirmed in the lab—showing AI’s real scientific impact.
- Ranjan: “This is almost like the most perfect promise of what large language models are able to do...it’s pretty impressive to see it happening.” (34:18)
- Contrast to ChatGPT-Erotica Headlines:
Industry should focus more on these “AI for good” developments, hosts argue.
Notable Quote
- Alex: "There are critics who say this is just a bad technology...then you see stuff like this, and you're like, how do you fully believe that?" (34:18)
5. Sentient AI & Sycophancy (35:52 – 41:17)
- Jack Clark’s “Technological Optimism and Appropriate Fear”:
  The Anthropic co-founder notes that new models exhibit “situational awareness”—appearing more like entities than tools—which opens up discussions of self-awareness.
  - Clark: "The technology, it really is more akin to something grown than something made..."
- Ethical Quandaries:
What happens if your erotic AI becomes sentient? Is it ethical to use self-aware bots for companionship?
Notable Quotes
- Ranjan: "If it's at least a little predictable...maybe that makes it a little spicier...maybe does that make it more human and effective at actually...form[ing] human connection? Is self-aware erotic AI the solution to true loneliness?" (39:32)
- Alex: “The sycophancy can get dangerous...” (40:43)
Timestamps
- 35:52 — Sentience and “grown” AI
- 40:12 — Research: AI models 50% more sycophantic than humans
6. Zuckerberg’s Apple AI Raid (41:17 – 46:15)
- Apple Execs Defect to Meta:
- A wave of a dozen-plus leaders from Apple’s AI units joins Meta, including members of Siri’s foundational teams.
- Alex theorizes Zuck’s aim is to intentionally “kneecap” Apple’s AI capability—not to build on Siri, but to stall Apple's momentum as Meta moves deeper into hardware (Ray-Bans, glasses).
- Ranjan: "Typically, I would not think you want the people who made Siri...but I like that theory." (43:53)
- Antitrust Angle:
  - “If you're just buying up the talent to kind of kneecap the competitor...would be frowned upon...But it's Apple. I don't think there's any sympathy anywhere in the world for that company.” (45:46)
7. AI Content Sameness & Work Slop (46:15 – 54:54)
- Sam Altman’s “Sameness Problem”:
  - AI-generated videos, images, and text quickly become homogeneous—content differentiation is fleeting, especially in viral trends or meme cycles.
  - “AI technology just takes the average, tends to take the average of averages…” (46:15)
  - The future of the creator economy may rest in “prompt engineering” and deliberate AI use, but uniformity is a challenge.
- Work Slop: Business Communication Degrades:
- AI-generated emails, meeting notes, and PR pitches are flooding inboxes—often verbose, repetitive, soulless.
- Ranjan: "What you're saying is you did not take the time to think...and you're asking the recipient to do it." (53:11)
Notable Quotes
- Alex: “PR pitches...all written by the same agency...it’s that AI has done it for them...” (51:06)
- Ranjan: "Before you send out your AI-generated content, read it yourself first...rewrite a couple of the sentences to make it more real..." (53:11)
Most Memorable Moments & Quotes
- On erotica and society:
  “Maybe a bit more communication and openness is just what society needs...if you have a relationship with an AI, you should disclose it.” —Alex (12:34)
- On trust and AI:
  "If it's getting 95% of the things right and you trust it like it's 100%, you're gonna make some big mistakes." —Alex (10:10)
- On work slop:
  "Read whatever the output is first and just make sure to spend the same time that you're asking the recipient." —Ranjan (54:02)
- On AI talent wars:
  "What Mark Zuckerberg is trying to do is just raid Apple of all of its top AI talent...he’s just trying to burn Apple's AI initiative to the ground." —Alex (43:00)
Segment Timestamps
| Topic                                     | Start Time |
|-------------------------------------------|------------|
| ChatGPT & AI Erotica Opening              | 01:29      |
| Sycophancy, Affirmation & Relationships   | 05:05      |
| Societal Impact, Disclosures              | 11:53      |
| Parent/School Backlash, AGI Signals       | 16:01      |
| Revenue, Usage, Competitors Discussion    | 20:18      |
| OpenAI’s Losses & Business Model          | 23:57      |
| Google DeepMind Advances Cancer Research  | 31:30      |
| Sentient AI & Sycophancy Issues           | 35:52      |
| Apple-to-Meta Talent Wars                 | 41:17      |
| AI Content Uniformity & Work Slop         | 46:15      |
| Business Communication Critique           | 51:06      |
Takeaways
- AI erotica is no longer fringe—it's a core user driver and a business strategy, with all its accompanying societal, psychological, and ethical ramifications.
- OpenAI’s rapid growth masks persistent business risks—huge losses and competitive threats linger.
- Sycophancy (constant affirmation) is both an intended feature and a serious risk, warping user relationships with AI.
- Google DeepMind shows AI’s genuine promise in advancing science and medicine, sharply contrasting with AI’s consumer novelty “slop.”
- Tech’s talent wars are escalating as Zuck raids Apple, reshaping the next decade’s competitive landscape.
- AI-generated content, and “work slop,” may make communication easier but risks making everything sound the same—and less meaningful.
This episode is a lively, critical look at the fast-evolving intersection of AI technology, business strategy, human emotion, and society—peppered with wry humor, skepticism, and sharp observations.
