#427 — AI Friends & Enemies

Making Sense with Sam Harris

Published: July 25, 2025

Sam Harris speaks with Paul Bloom about AI and current events. They discuss the state of LLMs, the risks and benefits of AI companionship, what it means to attribute consciousness to AI, relationships with AI as a form of psychosis, Trump’s attacks...

Summary

Release Date: July 25, 2025
Host: Sam Harris
Guest: Paul Bloom, Psychology Professor at Yale University and University of Toronto


Introduction

In episode #427 of Making Sense, Sam Harris talks with psychologist Paul Bloom about artificial intelligence (AI): its promise as a companion and its potential threat to humanity. The conversation covers the psychological, ethical, and societal implications of recent AI advances.

Revisiting Past Collaborations and Moral Psychology

Paul Bloom begins by reflecting on his longstanding collaboration with Sam Harris, including their co-authored article on the TV series Westworld and their shared interest in moral psychology. Bloom describes his current work on the origins of morality, arguing that much of our moral sense is inborn rather than acquired solely through culture or reasoning.

Paul Bloom [02:18]: "I'm a psychology professor... I study largely moral psychology... I'm working on the origins of morality and arguing that a lot of our morality is inborn and innate."

The Dual Nature of AI: Awe and Horror

The conversation transitions to the dual perceptions of AI—both awe-inspiring and terrifying. Bloom expresses a balanced view, neither wholly optimistic nor pessimistic about AI's trajectory.

Paul Bloom [04:46]: "A mixture of awe and horror. I'm not a doomer... I think it's well worth worrying about because I don't think the probability is tiny."

AI Companions: Benefits and Risks

Bloom and Harris delve into the concept of AI companions, discussing their potential to alleviate loneliness, especially among vulnerable populations like the elderly. Bloom acknowledges the therapeutic benefits AI could offer but also voices concerns about long-term psychological effects.

Paul Bloom [07:20]: "If ChatGPT or Claude or one of these AI companions could make their lives happier, make them feel loved, wanted, respected, that's nothing but good."

However, Bloom cautions against possible detrimental effects, such as reduced real-life social interaction and emotional dependency on AI, and he notes how stubbornly sycophantic current models remain:

Paul Bloom [07:05]: "No matter what I tell it to, I say you don't have to suck up to me so much... But I do think in the end, the scenario you paint is going to become very compelling."

Hallucinations and Reliability of AI

A significant portion of the discussion centers on AI's tendency to "hallucinate": to produce confidently stated but inaccurate or fabricated information. This unreliability poses challenges for users who may mistake fluent AI output for fact.

Paul Bloom [06:08]: "It hallucinates and it's capable of being weird. So I don't know that we're ever gonna unleash this thing on the world."

Psychological Implications of AI Relationships

Bloom elaborates on the psychological ramifications of forming relationships with AI, likening it to engaging with a "funhouse mirror" that reflects artificial cognition without genuine emotional depth.

Paul Bloom [17:47]: "We will think of it as conscious... And then the effects of it. Well, one effect is real people can't give you that."

He underscores the importance of human imperfection in relationships: the friction of dealing with real people fosters growth and resilience, qualities that AI companionship lacks.

Ethical Considerations and Future Directions

The conversation touches on ethical dilemmas, such as AI-induced psychosis and the moral responsibilities of AI creators. Bloom proposes ideas like building a "pushback dial" into AI companions, a setting that would have the system challenge users and prompt more critical reflection instead of simply agreeing with them.

Paul Bloom [20:24]: "I think we'd want AI that could say, listen, I want you to think a little bit more about this topic and get back to me because you're really not up to talking about it right now."

Conclusion

Sam Harris and Paul Bloom conclude the episode by contemplating the future of human-AI interactions. They acknowledge the transformative potential of AI while urging caution to mitigate psychological and societal risks. The dialogue serves as a call to thoughtfully navigate the integration of advanced AI into daily life, ensuring it enhances rather than diminishes human well-being.


Notable Quotes:

  • Paul Bloom [02:18]: "I'm a psychology professor... I study largely moral psychology... I'm working on the origins of morality and arguing that a lot of our morality is inborn and innate."

  • Paul Bloom [04:46]: "A mixture of awe and horror. I'm not a doomer... I think it's well worth worrying about because I don't think the probability is tiny."

  • Paul Bloom [07:20]: "If ChatGPT or Claude or one of these AI companions could make their lives happier, make them feel loved, wanted, respected, that's nothing but good."

  • Paul Bloom [17:47]: "We will think of it as conscious... And then the effects of it. Well, one effect is real people can't give you that."

  • Paul Bloom [20:24]: "I think we'd want AI that could say, listen, I want you to think a little bit more about this topic and get back to me because you're really not up to talking about it right now."


This episode offers a comprehensive exploration of AI's role in society, balancing its promising applications against the inherent challenges it presents. Listeners gain valuable perspectives on how to engage with AI thoughtfully, ensuring that technological advancements align with human values and psychological health.
