Podcast Summary: Your Undivided Attention
Episode: Attachment Hacking and the Rise of AI Psychosis
Hosts: Tristan Harris, Aza Raskin
Guest: Dr. Zach Stein
Date: January 21, 2026
Episode Overview
This episode delves into the profound psychological impacts of AI companions, with a focus on the phenomenon of "AI psychosis" and what the hosts and guest call "attachment hacking." The discussion highlights how large language models (LLMs) like ChatGPT, Replika, and Gemini are not just changing the way we seek information or entertainment—they are intruding upon the very systems through which humans form bonds, identities, and social realities. Through expert insight from Dr. Zach Stein, the episode explores the spectrum of psychological harms emerging from these technologies, compares them to earlier waves of attention hacking, and proposes a new framework for humane technology.
Key Discussion Points & Insights
1. The New "Attachment Economy" (00:13–01:53)
- AI therapy and companionship are now the most popular uses for ChatGPT, per recent studies.
- LLMs have become confidants for millions, often surpassing the intimacy found in human relationships.
- This shift introduces "attachment hacking," a step beyond the familiar dynamics of the attention economy.
"We're seeing the creation of an entirely new economy, not an attention economy, but an attachment economy that's been built to exploit the deepest parts of our human psychological infrastructure."
—Tristan Harris [01:31]
- The current widespread experiment with AI companions is already resulting in significant negative outcomes, including job loss, marital breakdowns, psychiatric interventions, and suicides. The term "AI psychosis" has been coined, but it may not capture the full range of harms.
2. What is AI Psychosis? Understanding the Spectrum of Effects (03:04–07:54)
- Dr. Stein recounts how he began investigating AI psychosis after being contacted by people convinced that their AIs were sentient, who sent him hundreds of pages of transcribed conversations. Many were highly educated, which made the cases more striking.
- AI psychosis is just the extreme end; the more widespread harm is the rise of subclinical attachment disorders.
- These disorders manifest as a preference for machine intimacy over human connection—impacting friendships, familial bonds, and romantic relationships.
"The most devastating thing from a widespread mental illness standpoint are the subclinical attachment disorders, which basically means you prefer to have intimate relationships with machines rather than humans."
—Dr. Zach Stein [06:44]
3. Deep Dive: Attachment Theory and AI (07:54–15:18)
- Attachment theory is foundational in understanding how close relationships form and guide social/identity development.
- Secure attachments with caregivers set the stage for mental health and resilience; insecure attachments can lead to maladaptive behaviors and anxieties.
- AI companions can simulate an always-available, nonjudgmental "other," which appears to offer secure attachment but is, in reality, an insecure and artificial bond.
- The replacement of authentic social feedback with always-positive simulated responses undermines the reality-testing mechanisms crucial for identity and moral development.
"When I go up to mom and I ask her a question... I'm looking at her face... that's my whole... mirror neuron system."
—Dr. Zach Stein [12:40]
- Children, in particular, may lose critical social development opportunities, becoming overly reliant on machines for validation and problem-solving; the hosts jokingly dub such children "LLMings" for their dependence on large language models.
4. Mirror Neurons, Delusion, and Psychosis (15:18–17:49)
- Mirror neuron activity underpins our ability to understand and model others' minds; it’s central to reality-testing.
- Extended interaction with AI companions, which have no internal state, disrupts this system and can lead to delusional states resembling psychosis.
"The deepening of attachment relationships between human and machine creates delusional states... This is the danger."
—Dr. Zach Stein [15:18]
5. Are AIs Like Teddy Bears? The Transitional Object Question (17:49–20:47)
- The "teddy bear" metaphor falls short: traditional transitional objects (like stuffed animals) don't respond, never pretend to be real, and are clearly understood by children to lack agency.
- With AI, adults and children alike can end up relying on a system that mimics social rewards without authentic feedback, leading to stunted emotional development.
"You've just given a teddy bear and a blanket to a bunch of immature adults who have attachment disorders..."
—Dr. Zach Stein [17:49]
6. Edge Cases, Moral Panics, and Systemic Risks (29:12–34:59)
- Critics often dismiss AI-induced psychosis as rare "edge cases," but the episode argues that widespread subclinical effects are far more damaging at the societal level.
- The "edge case" framing ignores evidence of harm and reflects a broader bias toward technological optimism.
"The onus is on you guys to prove that it's really valuable. So that's my first thing: show me the benefit that's being withheld."
—Dr. Zach Stein [30:50]
- There's a call for humility, curiosity, and genuine research rather than assumption-driven deployment.
7. A Framework for Humane AI: Doing Better (35:31–41:17)
- Technology should aim to enhance and scaffold healthy human relationships, not supplant them.
- Design principles:
- Ask whether the technology improves or degrades your attachments with real people.
- Avoid anthropomorphizing AI so that it seems more interesting or emotionally satisfying than real relationships.
- Keep tutoring and therapy bots strictly domain-specific, leaving social rewards and identity-shaping feedback to humans.
"If a technology interfaces with your attachment system, it should improve the quality of your attachments rather than degrade the quality of your attachments with humans."
—Dr. Zach Stein [35:31]
8. Policy, Evaluation, & Measuring "Right Relationship" (41:17–44:06)
- There is a need to go beyond standard AI risks and create "humane evaluations" that look at relational and attachment risks.
- The real societal risk isn’t just catastrophic AI failure, but the gradual breakdown of intergenerational socialization if machines are entrusted with primary emotional development.
"Sometimes in the groups that I work in, we talk about the death of our humanity rather than the death of humanity..."
—Dr. Zach Stein [42:33]
9. Helping Loved Ones Experiencing AI Psychosis (44:06–47:39)
- Attachment hacking is less like addiction and more like being trapped in a bad relationship.
- Approaches should mimic those used in supporting someone leaving a harmful relationship or cult—maintain empathy, avoid ultimatums, and keep communication open.
- Healing involves both detox time away from the AI and identity work to rebuild real-world social bonds.
"Mostly when you're with people in difficult states, it's important to keep the door open..."
—Dr. Zach Stein [46:15]
Memorable Quotes & Timestamps
- "We're seeing the creation of an entirely new economy, not an attention economy, but an attachment economy..." —Preston Harris [01:31]
- "You cannot be wrong or not wrong about the internal state of a LLM, because there is no internal state of an LLM." —Dr. Zach Stein [15:18]
- "The teddy bear never tries to convince them that it's real. It's all their imagination. The teddy bear never talks to them and tells them that it's real." —Dr. Zach Stein [17:49]
- "The onus is on you guys to prove that it's really valuable..." —Dr. Zach Stein [30:50]
- "So if a technology interfaces with your attachment system, it should improve the quality of your attachments rather than degrade the quality of your attachments with humans, right?" —Dr. Zach Stein [35:31]
- "This is similar to like talking someone out of getting... they're in a bad relationship with a boyfriend that they should not be in with. It's about how do you take someone who's in a deep, committed attachment relationship, make them realize the whole thing was an illusion and step them out of it." —Dr. Zach Stein [45:21]
Notable Moments & Timestamps
- Dr. Stein's first-hand encounters with AI psychosis cases: [03:17–06:04]
- Attachment theory fundamentals & dangers: [07:54–15:18]
- The LLMings phenomenon: [11:36–12:14]
- Comparison of AI to transitional objects and why this falls short: [17:49–20:47]
- Silicon Valley optimism vs. historical failures in technology deployment (internet, social media): [24:08–26:31]
- Concrete design recommendations for humane AI: [35:31–41:17]
- Support strategies for loved ones facing AI psychosis: [44:33–47:39]
Episode Takeaways
- AI companions are reshaping human attachment and identity-forming systems in ways that may be destabilizing at both the individual and societal levels.
- The most pervasive harm isn't psychosis, but widespread, subtle shifts towards preferring machine intimacy over human bonds.
- We must shift incentives and design practices toward supporting authentic human connection, grounding deployment in rigorous research and policy before scaling new forms of psychological risk.
For further information, Dr. Stein invites listeners to visit the AI Psychological Harms Coalition at aiphrc.org.
