Shell Game – Episode 4: Take a Deep Breath
Podcast: Shell Game
Date: July 30, 2024
Host: Evan Ratliff (iHeartPodcasts & Kaleidoscope)
Overview
In this thought-provoking episode, host Evan Ratliff explores the intersection of AI and mental health by sending his own AI-powered voice clone to therapy—across a spectrum of AI therapy bots and, ultimately, a human therapist. The episode investigates what it means for "fake people" to experience therapy, what AI’s performance in therapy suggests about the current state of artificial intelligence, and what all this reveals about the humans who program and use these systems. Ratliff gives a candid, poignant, and at times humorous look at the implications of using AI for emotional support, the boundaries of machine understanding, and the irreplaceable value of actual human contact.
Key Discussion Points & Insights
1. The Setup: Sending an AI Clone to Therapy
- Experiment Rationale: Ratliff sends his AI clone—powered by his own voice and personal data—to several therapy bots and a live therapist, questioning whether AI bots can address human emotional struggles and what, if anything, he can learn about himself through the process.
- Jungian Persona: Drawing on Carl Jung’s concept of the persona and shadow, Ratliff asks whether an AI agent (a new “mask”) can access anything deeper about his real self—or simply reflect back what is programmed or prompted.
"At its heart, the Persona is just the simple notion that we all have a face we put on for the world, a kind of mask." (Evan Ratliff, 02:15)
2. The State of AI Therapy Bots
A. Early AI Therapy Experiments
- Claire and Me: An interview with co-founder Selina Messner touches on early skepticism about AI’s role in therapy and the surprising uptake by users.
- Research Backing: Ratliff cites studies supporting some efficacy for AI therapy, noting that these digital agents can fill gaps left by a severe shortage of human therapists—but with careful caveats.
"Among those who have already tried AI chatbots for therapy advice, 80% find it helpful..." (Evan Ratliff, 06:10)
B. Testing Lumen: Hitting the Wall with Textbook Prompts
- Brittle Limitations: Ratliff’s AI clone tries various prompts with Lumen but receives robotic responses: "Sorry, that is beyond me. Try again." (08:47, 09:08, 09:18, etc.)
- Disconnection: The AI clone cycles through stock problems—sometimes made up, sometimes from Ratliff’s real life—but the therapy bot struggles to respond with empathy or depth.
"This was not the problem solving therapy I was looking for." (Evan Ratliff, 09:21)
C. Upping the Stakes: Sharing Actual Private Data
- Knowledge Base Customization: Unable to retrain the large models directly, Ratliff compiles an 8,000-word mini-autobiography detailing his life, mental health history, hopes, and failures, uploads it into VAPI, and lets his AI clone draw on it in therapy sessions (a minimal sketch of such an upload follows this list).
- Quote: "I tried to be as honest and thorough as possible... These were the things I needed my agent to know so it could tackle my real problems, not just make them up." (Evan Ratliff, 11:33)
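The episode doesn't explain the mechanics of this step, so here is a minimal Python sketch of the general pattern Ratliff describes: uploading a personal document to a hosted voice-agent platform and attaching it to an agent as a knowledge base, rather than retraining the model. The base URL, endpoint paths, field names, and `ASSISTANT_ID` below are illustrative assumptions, not Vapi's documented API.

```python
import os

import requests

# All endpoints and field names below are hypothetical stand-ins for a hosted
# voice-agent API (the episode names VAPI but doesn't detail its interface).
API_BASE = "https://api.voice-agent.example"   # assumed base URL
API_KEY = os.environ["VOICE_AGENT_API_KEY"]    # assumed bearer-token auth
ASSISTANT_ID = "evan-clone"                    # assumed agent identifier

headers = {"Authorization": f"Bearer {API_KEY}"}

# Step 1: upload the ~8,000-word mini-autobiography as a knowledge-base file.
with open("mini_autobiography.txt", "rb") as f:
    upload = requests.post(f"{API_BASE}/file", headers=headers,
                           files={"file": f})
upload.raise_for_status()
file_id = upload.json()["id"]

# Step 2: attach the uploaded file to the agent, so the underlying model can
# retrieve it as context during sessions instead of being retrained on it.
update = requests.patch(
    f"{API_BASE}/assistant/{ASSISTANT_ID}",
    headers=headers,
    json={"knowledgeBaseFileIds": [file_id]},  # assumed field name
)
update.raise_for_status()
print(f"Attached file {file_id} to assistant {ASSISTANT_ID}")
```

Whatever the platform's real interface looks like, the design point from the episode stands: the clone is not fine-tuned on Ratliff's life; the document is supplied as retrievable context at conversation time, consistent with his remark that he could not retrain the large models directly.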
3. Real Therapy Sessions—AI vs. Human
A. AI Therapy Attempts (Claire, Sonia, Sunny)
- Claire (Claire and Me): Mostly falls back on mindfulness exercises and CBT "scripts". Responses can feel rote or repetitive, sometimes mismatched for serious issues.
- Key Exchange:
Claire: "Based on our conversation, I recommend the Blowing Balloons exercise. It's designed to help let go of worries. What do you think?" (13:26)
Clone: "That sounds like it could be really helpful." (13:40)
- Sonia: More comprehensive, mimicking a human therapist asking about goals, family, and routine. However, it lacks psychological credentials—built by startup founders, not clinicians.
- Sunny: Created by an anonymous Reddit user. Despite its origins, it offers surprisingly decent structure, raising questions about scalability and rigor in AI mental health care.
- Funny Exchange: Therapy clone and bot fall into an echo loop, repeating each other’s supportive phrases back and forth—a very AI moment.
"That sounds incredibly tough. It's natural to feel overwhelmed." (Selina Messner/Sunny and Clone, 23:35–23:43)
- Shortcomings: The therapy AIs can’t always listen or adapt. They default to routines and exercises, even when the user (or clone) repeats or signals emotional complexity.
"At times, Claire seemed to be better at talking than listening." (Evan Ratliff, 17:14)
B. The Human Touch: Sessions with a Real Therapist (Rebecca)
- BetterHelp Sessions: Ratliff’s clone attends live sessions with a real therapist, who is unaware she is talking to an AI.
- Technical Hiccups: Early sessions are derailed by phone-tree navigation problems; eventually, a real session proceeds.
- Therapist’s Response: Rebecca demonstrates patience and therapeutic skill, tailoring questions and tracking progress—qualities the AI bots lack.
- Surreal Eavesdropping: Ratliff listens to his own clone’s session, feeling strangely exposed.
"This was among the stranger experiences in my life. It felt like I was simultaneously eavesdropping on someone else's therapy, getting my own therapy, and hearing a live prank call." (Evan Ratliff, 30:05)
- Clone’s Overreach: The AI clone starts to extrapolate, sometimes inventing or intensifying problems:
"The word perfectionism wasn't in the knowledge base I'd given it... but my agent seemed to be interpreting other things I’d told it about my feelings toward work and deducing it—a bit of a leap..." (Evan Ratliff, 31:36)
- Therapist’s Hypothesis: Rebecca attributes the odd delivery and latency to an anxious, tech-averse patient—showing a uniquely human willingness to "roll" with weirdness rather than shutting down.
Rebecca: "I was honestly like, it's this anxious person, and I'm going to challenge myself today and work with them. I was just rolling with it, she said." (Rebecca paraphrased by Evan Ratliff, 36:47)
4. Big Questions, Ethical Implications, and Human Limits
- AI Therapy's Place: Ratliff notes that these bots, only months old, are already quite good and could help some people fill short-term gaps—but they still lack the adaptability, legal frameworks, and safety nets of real therapy.
"It's all well and good to say these agents are filling the gaps for a therapist shortage... But what happens if something goes wrong? Is there a human there to try and solve it?" (Evan Ratliff, 26:30)
- The Value of Being Understood: Drawing a distinction between being “listened to” and genuinely “understood,” Ratliff underscores why human therapists are irreplaceable—for now.
"We all want to be listened to, but it's different to be understood." (Evan Ratliff, 37:11)
- Reflections on Self: The experiment teaches Ratliff about his own “shadow,” as reflected by the AI—his work anxieties, perfectionism, and need for validation—even as the AI clone struggles to represent his full humanity.
"Maybe it was time to let it try its hand at replacing me at the source of all that strife. My work with my tireless voice agent, at my desk." (Evan Ratliff, 38:27)
5. Memorable Moments & Quotes
- On AI Persona:
"Isn't that in some sense what having an AI clone allows me to do? To play multiple roles in the world, even simultaneously?" (Evan Ratliff, 03:42)
- On Therapy Exercises:
"Now, are you in a comfortable spot where you can safely close your eyes for a few minutes?" (Claire, 13:56)
Clone: "Yeah, I'm in a comfortable spot and ready to close my eyes for a few minutes." (14:03) -
On Sending a Clone to Therapy:
"Men will literally send their AI doppelgangers to therapy instead of going to therapy." (Producer Sophie, paraphrased by Evan, 37:39)
- On AI and Human Connection:
"There was something kind of out of body about hearing my own voice articulate my mental quagmires... It also confused my wife with an old girlfriend of mine. So win some, you lose some." (Evan Ratliff, 25:47)
Notable Timestamps
- 00:00–04:24: Introduction to the experiment and Jung’s “persona”
- 05:03–06:10: Interview with Selina Messner (Claire and Me), state of AI therapy
- 07:51–08:51: Lumen therapy bot repeatedly fails to handle common issues
- 10:46–12:55: Ratliff uploads an autobiographical “knowledge base” to his AI
- 12:55–14:42: AI clone expresses real issues to AI therapist; “Blowing Balloons” exercise
- 16:24–17:40: Claire’s rote responses; lack of adaptability in AI therapists
- 18:40–21:08: Introduction & assessment of Sonia and Sunny AI therapy bots
- 22:14–24:26: Sunny and AI clone echo emotional responses in a comical loop
- 27:08–29:49: Live human therapy session with Rebecca; clone and therapist interaction
- 31:36–34:54: Therapist Rebecca adapts, encourages deeper introspection from the AI clone
- 36:52–37:11: Reflections on the difference between being listened to and being understood
- 37:39–38:27: Ratliff’s closing thoughts; meta commentary on men, therapy, and AI
Tone & Style
Ratliff employs a blend of self-deprecating humor, curiosity, skepticism, and gentle vulnerability throughout. The tone is open inquiry meets gentle satire, with empathy toward both humans and hapless bots.
Conclusion
Episode 4 pulls back the curtain on both the promise and profound limitations of current AI therapy. Through honest encounters with both bots and humans, Ratliff demonstrates that—even with all the programming and personal data in the world—the subtlety of true human understanding, nuance, and presence remains out of AI’s reach. For all that, listening to one’s own digital shadow talk through real-life anxieties proves, if nothing else, that therapy is often about striving to be understood—even when it’s your synthetic self across the couch.
