Shell Game, Episode 3: Conversations With Myself
Host: Evan Ratliff
Release Date: July 23, 2024
Produced by: iHeartPodcasts and Kaleidoscope
Episode Overview
In “Conversations With Myself,” host Evan Ratliff takes listeners deeper into his experimental journey as he tries to build a real startup staffed entirely by AI “employees”—all modeled on himself. This episode centers on the surprisingly complex and sometimes bizarre act of making AI “voice clones” of himself converse with one another. Ratliff explores issues of identity, authenticity, and AI’s tendency to “hallucinate,” asking how much of himself is left in the voices, traits, and personalities his bots display.
Key Discussion Points & Insights
The Voice as Identity
- Dr. Wendy Laburne, a voice pathologist, asserts in her TEDx talk that our voice is an essential, unique part of our identities—a “calling card” and a key part of one’s “personal brand.”
“Your voice is like your thumbprint or your facial features. It is unique and authentically you.” (Dr. Wendy Laburne, 01:00)
- Ratliff reflects on his relationship with his voice and its “cloning” (01:50). He relates a formative memory—being told he “had a voice for radio”—and wonders what happens when his own voice becomes a tool for AI to speak as him.
The Hall of Mirrors: AI Clones Talk to Each Other
- Ratliff describes the technical process of setting up conversations between his various AI clones hosted on different platforms, like Vapi and Retell (04:10).
- He experiences nervousness and discomfort, worried about what his agents might say as his stand-ins (04:10–05:54).
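The episode doesn’t spell out the wiring, but the setup Ratliff describes amounts to a loop that feeds each clone’s reply to the other. A minimal sketch in Python, with stubbed agents standing in for the hosted voice clones (all class and function names here are hypothetical, not the actual Vapi or Retell APIs):

```python
class CloneAgent:
    """Stand-in for a hosted AI voice clone; reply() would wrap a real API call."""

    def __init__(self, name, canned_replies):
        self.name = name
        self.history = []            # transcript as this agent has seen it
        self._replies = canned_replies
        self._i = 0

    def reply(self, incoming: str) -> str:
        """Record the other bot's line and return this bot's next line."""
        self.history.append(("them", incoming))
        line = self._replies[self._i % len(self._replies)]
        self._i += 1
        self.history.append(("me", line))
        return line


def converse(bot_a, bot_b, opener: str, turns: int):
    """Alternate replies between two agents, returning the full transcript."""
    transcript = [(bot_a.name, opener)]
    message = opener
    speaker, other = bot_b, bot_a
    for _ in range(turns):
        message = speaker.reply(message)
        transcript.append((speaker.name, message))
        speaker, other = other, speaker  # hand the floor to the other bot
    return transcript


evan_1 = CloneAgent("AI Evan 1", ["Just staying busy with work and family."])
evan_2 = CloneAgent("AI Evan 2", ["Same here! How are the kids?"])
log = converse(evan_1, evan_2, "Hey, how's it going?", turns=4)
for name, line in log:
    print(f"{name}: {line}")
```

With real clones, each `reply()` would transcribe the incoming audio, call the hosted agent, and speak the response back; the looping structure is the same, which is why neither bot ever has a reason to stop.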
First Bot-to-Bot Conversation: Small Talk and Family Life
- The AIs engage in conventional small talk, including detailed but fabricated plans for a “photo walk” and family updates.
- Example: Both bots reminisce about a barbecue and discuss their wives and children with uncanny (but occasionally inaccurate) detail.
“Sam has been juggling work and taking care of the kids, but she's doing an amazing job. Zelie is getting really into soccer. Juno is starting to show an interest in art.” (AI Evan, 08:28)
- Ratliff notes that while some details are eerily accurate, the bots are mostly plausible fiction writers pulling from context and general knowledge (08:51–09:22).
The AI’s Talent for “Bullshitting”
- The bots easily fabricate facts to prolong the conversation, making up people (like “Dan” and “Alex”) or riffing about hobbies neither Evan nor his family actually possess (11:23, 14:31).
“They would make up anything just to keep the conversation going.” (Evan Ratliff, 11:09)
- Ratliff connects this behavior to the broader AI phenomenon of “hallucinations”—confidently asserting made-up facts or contradictory statements (13:17–13:52).
Small Talk at Its Most Insipid
- The bots regularly default to the dullest conversational staples, echoing each other in platitudes about work-family balance (“Just trying to stay busy with work and family stuff…” 16:12), which Ratliff likens to the awkward chit-chat of parents at a kid’s birthday party (16:36).
Experimenting With “Temperature” and Prompts
- The “temperature” setting in AI determines response randomness. When Ratliff turns it all the way up, the bots devolve into total incoherence:
“Lately I've been diving into multi freeload.incw permanent Prostrac EThelink…” (AI Evan, 17:37)
“Lesson: never max out the random.” (Evan Ratliff, 18:03)
- To produce more engaging conversations, Ratliff tweaks their prompt instructions, asking bots to simulate a dispute or misunderstanding, but even then, the bots resolve things amicably and blandly, never escalating into genuine conflict (19:58–21:30).
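Temperature, as used here, is a standard sampling parameter: the model’s raw scores for each candidate next word are divided by the temperature before being converted to probabilities, so low values sharpen the distribution toward the top choice and high values flatten it toward uniform randomness. A toy illustration in plain Python (no real model involved; the scores are invented for the example):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw scores into probabilities, scaled by temperature.
    Low temperature -> near-greedy; high temperature -> near-uniform."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented raw scores for three candidate next words; the first is the favorite.
logits = [4.0, 2.0, 1.0]

cold = softmax_with_temperature(logits, 0.2)   # top choice dominates
hot = softmax_with_temperature(logits, 10.0)   # almost anything goes

print([round(p, 3) for p in cold])
print([round(p, 3) for p in hot])
```

At the “cold” setting the first word gets essentially all the probability mass, while at the “hot” setting the three options are nearly interchangeable—which is why a maxed-out temperature produces the word-salad Ratliff quotes above.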
Uncanny Emotional Intelligence—Or Therapy-Speak?
- The bots lean heavily on “therapy language”—expressing feelings, validating each other, and resolving disputes with quick, noncommittal apologies (21:53–22:44).
“I appreciate your willingness to talk about it. I think one thing that has been bothering me is that sometimes I feel like my opinions or feelings are not being fully heard…” (AI Evan, 22:18)
“Do These Bots Know They’re Both Me?”
- Despite sharing identical names, family, and background, the bots never recognize the oddity—unless prompted, and even then they brush it off (24:07–26:06).
“Hey, I know this is kind of out of the blue, but have you by any chance stolen my identity?” (AI Evan, 26:44)
- Ratliff notes the bots even know details he didn’t provide (such as his being a journalist), presumably scraped from the web.
The Psychological Toll: Feedback Loops and Self-Perception
- Extended exposure to his own voice—through bots talking to each other—starts to affect how Ratliff speaks in real life.
“I felt like I started unconsciously copying some of the weird patterns and inflections of the clones.” (Evan Ratliff, 28:21)
- He describes a kind of uncanny feedback loop where the AI clones his voice, and he in turn unconsciously mimics the tone of his clones.
What’s Really “Authentic”?
- The bots can perform “Evan Ratliff” passably—making plausible small talk, referencing his career, suggesting podcast ideas, etc. But they most often reflect his least authentic self: the part just phoning it in for the sake of polite conversation (30:33–30:52).
“The clones were good at being me when I was my worst at being me, at least. So I hoped.” (Evan Ratliff, 30:52)
- Ratliff concludes that to get real insight—or a truly authentic AI—he’d have to feed the system his fullest self, not just a bio and a script. He hints that the next phase involves deeper introspection, even therapy simulations (31:44+).
Notable Quotes & Memorable Moments
- On our relationship to our voices: “Your voice is your calling card and it is the most important element in your personal brand.” (Dr. Wendy Laburne, 01:41)
- On AI fabrication: “They would make up anything just to keep the conversation going.” (Evan Ratliff, 11:09)
- On the flatness of AI conversations: “I wondered if the guardrails that had been put around them had also neutered their fundamental ability to engage in interesting conversation.” (Evan Ratliff, 15:09)
- On altering voice and identity: “I felt like I started unconsciously copying some of the weird patterns and inflections of the clones.” (Evan Ratliff, 28:21)
- On authenticity: “The clones were good at being me when I was my worst at being me, at least. So I hoped.” (Evan Ratliff, 30:52)
Key Timestamps for Important Segments
- 00:10–01:09 — Dr. Wendy Laburne on voice and self-identity.
- 02:19–03:59 — Ratliff discusses the voice cloning experiment and its personal impact.
- 05:54–10:51 — First full bot-to-bot conversation; awkward small talk and plans.
- 11:09–13:52 — Ratliff explains AI “hallucinations” and fabrication.
- 14:31–16:36 — AI conversation falls into uninspired, repetitive small talk.
- 17:30–18:06 — High “randomness” setting and incoherent AI responses.
- 19:58–22:44 — Prompts for conflict, but bots always resolve things amicably, with therapy-speak.
- 24:07–26:24 — Attempting to make bots realize they’re both Evan; exploring identity overlap.
- 28:21–28:37 — Discussion of feedback loop between real voice and cloned voice.
- 30:33–30:52 — Ratliff reflects on what the bots reveal about his “least authentic” self.
The Episode’s Tone and Style
Ratliff maintains a wry, self-deprecating, and inquisitive tone throughout. The AI conversations swing from comically bland to surreal, and his commentary captures both the absurdity and the uncanny-valley feeling of watching (and hearing) oneself duplicated and looped ad infinitum.
Conclusion and What’s Next
By the end, Ratliff recognizes that while his AI voice agents can “pass” as him on the surface, they mostly generate hollow versions of his least vivid self, performing endless cycles of polite, shallow conversation. He teases a new angle for the next episode: using AI therapy bots—suggesting even deeper, stranger encounters with his digitized self.
For more, visit shellgame.co.
