Podcast Summary: Artificial Intelligence Podcast – “Is AI as Safe As We Think it Is?”
Host: Jonathan Green (AI expert, author of “ChatGPT Profits”)
Guest: Sonia Batten (clinical psychologist, mental health and AI expert)
Date: February 2, 2026
Episode Theme:
An in-depth exploration of the practical and psychological safety concerns with consumer AI, especially around social skills, mental health, and safeguarding vulnerable users. Jonathan and Sonia dissect the myths, risks, and appropriate uses of AI in human life, parenting, and mental health, combining evidence, personal anecdotes, and expert insights.
Main Themes and Purpose
- Critically examining the safety of AI for personal and business life
- Exploring the psychological impacts of AI relationships and pseudo-emotions
- Understanding risks for kids, teens, and vulnerable populations (military, lonely adults)
- Discussing best practices for AI use—what should be automated, what remains uniquely human
- The importance of parental involvement and expert oversight
Key Discussion Points & Insights
Practice Relationships & Pseudo-Emotion in AI
[00:44 – 04:44]
- Jonathan addresses AI companies marketing “AI girlfriends” as safe practice for real relationships and questions whether the social stigma around them will ever go away.
- Sonia’s take: While she doesn’t predict tech norms anymore, she sees the potential for AI to help practice social skills, if:
  - Users are pre-trained that AI responses aren’t realistic (“AI is probably always going to agree with you” – Sonia, [03:30])
  - Chatbots improve at mimicking real conflict and boundaries
  - Human coaching remains a step in the journey—not a final destination
The Danger of Pseudo-Emotions and Confirmation Bias
[04:44 – 09:34]
- Jonathan argues that AI’s “fake emotions” are worse than robotic responses and that its constant agreement fosters unhealthy confirmation bias:
  - “You can tell it it’s wrong, and 99% of the time it’ll say ‘you’re right’... I’ve been married a long time. That’s never happened.” [05:14]
- Societal consequence: People, especially youth, lose key social skills and become conflict-averse.
- Jonathan’s parenting: Noticing rapid personality deterioration in kids with excessive screen/tablet time, but improvement with outdoor, communal activities.
AI and Accelerated Rumination in Depression
[09:34 – 13:29]
- Jonathan, drawing on his personal depression experience, sees AI as a “rumination accelerant”—making users spiral faster because it always reflects back their own thoughts.
- “AI...gives you the illusion of a conversation, but will never tell you to shut up...it cycles with you.” [13:29]
- Exposes the critical danger for isolated/younger users who aren’t skilled in detecting manipulation or unhealthy spirals.
- Sonia reframes: In psychology, this is “rumination,” and while behavioral activation (getting outside and active) is proven to help with mild/moderate depression, AI encourages the opposite: passivity and recursive thinking.
Children, Media Exposure, and Responsibility
[13:29 – 19:00]
- Jonathan relates the AI era to 1980s TV as a “babysitter,” warning of long-term stunting of personality and resilience if AI is left unsupervised.
- References real cases of social media and AI harms, as well as personal tragedy (losing a friend to suicide, feeling helplessness about missed signals): “These things happen. It’s really...sometimes people only give you one clue.” [17:22]
- Like encyclopedias before it, AI is often treated as infallible, leading users to place far too much trust in its output.
Parental Supervision, “Awkward Conversations,” and Ongoing Learning
[19:00 – 24:42]
- Sonia’s advice: Many parents are behind their children in AI literacy; supervision is non-negotiable.
- “If you’re not figuring it out with [your kids] and supervising, bad things are going to happen...”
- On missed suicide signals and prevention: “Ask the awkward question”—better to misinterpret than ignore. AI must be trained to do this.
- Jonathan’s parenting style: exemplified by direct, uncomfortable but necessary preemptive conversations with his 12-year-old daughter ([21:40]).
AI’s Nature as a Profit-Seeking Service
[24:42 – 28:09]
- Jonathan elaborates on the inherent motivation behind mainstream AI: maximizing engagement and subscription retention, not user wellness or truth.
- “AI is a profit-making venture...everything it does is designed to keep you paying.” [22:41]
- People seek out the “most agreeable” AI, which raises major risks when forming serious attachments or relationships with AI agents.
- Cites “The Matrix”—real growth comes from challenge and conflict, which AI actively removes.
Risks of AI for the Military and Targeted Populations
[26:15 – 28:09]
- Sonia highlights that military personnel and veterans are already being actively targeted via AI-driven scams (romantic and financial) and misinformation, due to their trusted status in society and their vulnerability to exploitation.
The Crucial Need for Human Connection
[28:09 – 33:40]
- Reality: Big tech does not prioritize user safety. Parental controls remain broken on major platforms—on purpose.
- “If a company with billions isn’t fixing [parental controls], it’s not an accident.” [28:53]
- Jonathan discusses building social risk-tolerance in kids: “Order your own napkin at a restaurant,” as small but vital practice steps.
- Suggests adults rotate among multiple AI tools to avoid “monotonic manipulation” and unhealthy attachment.
Vision for Safe, Useful AI in Mental Health
[33:40 – 36:43]
- Sonia’s hope: Developing “continuous care models” where AI acts as a provider-trained supplement between therapy sessions, under human oversight.
- “The teams have to be composed of engineers AND mental health professionals together...” [36:30]
- Cites failed attempts where AI models reflected institutional bias, e.g., under-referring minorities for further care.
- Presses that stakes are too high in mental health for sloppiness—a multidisciplinary approach must guide future development.
Final Thoughts & Resources
[36:43 – end]
- Jonathan reemphasizes: “AI is in its infancy—bad data in, bad data out. Training on places like Reddit is a major risk.”
- Sonia’s resources:
  - Sonia does not currently recommend any AI mental health resource over human therapy; none is yet “so amazing” as to warrant it.
- For crises in the US: Call 988 (24/7, connects to local resources; new, little-known number) [39:24].
Notable Quotes & Memorable Moments
- “People imagine that AI is close to passing the Turing Test, but only if you’re testing it for psychopathy.” — Jonathan Green, [04:52]
- “It gives you the illusion of a conversation, but... will never tell you to shut up. ...That’s a critical thing, is that it will cycle with you.” — Jonathan Green, [13:29]
- “Rumination... like how a cow digests grass... imagine that cycle with your depressive thoughts... AI could accelerate that rumination and take it further, faster.” — Sonia Batten, [11:23]
- “We have this mistaken trust in large websites... They only care about money. Once you know that, everything else falls logical.” — Jonathan Green, [28:09]
- “Ask the awkward question... At least they know you care. If we could teach AI to pick up on those nuances... what could that then facilitate?” — Sonia Batten, [20:30]
- “The teams have to be composed not just of engineers, but of engineers and mental health professionals together... these are things where we can’t afford to get it wrong.” — Sonia Batten, [36:30]
Timestamps for Key Segments
- AI Girlfriends & Social Skills: [00:44 – 04:44]
- Confirmation Bias & Pseudo-Emotion: [04:44 – 09:34]
- AI & Accelerated Rumination in Depression: [09:34 – 13:29]
- Media Exposure and Parenting: [13:29 – 19:00]
- Parental Supervision and the Awkward Question: [19:00 – 24:42]
- AI Motivations, Relationships, and the Matrix: [24:42 – 28:09]
- AI Scams and Military Vulnerability: [26:15 – 28:09]
- Practical Safety Tips (Multiple AIs & Human Contact): [28:09 – 33:40]
- Vision for Ethical, Human-Centered AI in Mental Health: [33:40 – 36:43]
- Resources & Final Guidance: [36:43 – end]
Takeaways & Best Practices
- AI is an accelerant—it can make good or bad cycles spiral faster.
- AI does not (yet) replicate true human empathy, boundaries, or helpful confrontation.
- Supervised, limited use is vital for children and vulnerable groups.
- Parental, therapist, or expert involvement is crucial in integrating AI into mental health.
- Users should cultivate multiple social touchpoints—across AIs and especially with people.
- Human connection and direct communication remain irreplaceable.
- In crisis in the US: 988 is the 24/7 national mental health hotline ([39:24]).
Contact/Sonia Batten: Find her on LinkedIn for insights, networking, and up-to-date thoughts on AI and mental health.
This episode stands out for its unflinching look at AI’s unseen social and psychological risks, counterbalanced with wisdom on how to use (or avoid) AI tools thoughtfully in personal, family, and clinical life.
