Intelligent Machines 843: "Immortal Beloved, You've Arrived"
October 30, 2025
Host: Leo Laporte
Co-hosts: Paris Martineau (Consumer Reports), Jeff Jarvis (CUNY)
Guest: Dr. Alan Cowen (CEO & Chief Scientist, Hume AI)
Overview
In this episode of Intelligent Machines, Leo, Paris, and Jeff are joined by Dr. Alan Cowen, founder and Chief Scientist at Hume AI and leading expert in the intersection of human emotion and AI. The discussion centers on whether and how AI can (or should) understand and respond to human emotion. They unpack the ethical dilemmas, technical approaches, feedback loops, and future risks of “empathetic” AI systems, especially as they interact with humans in real-world products. Other topics include new AI news, concerns about AI sycophancy, kids and bot friends, the latest on AI legislation, and reflections on the hazy goals of AGI.
Key Discussion Points & Insights
1. Introduction to Dr. Alan Cowen & Hume AI
- Dr. Cowen describes himself as an "emotion scientist and AI engineer" focusing on AI optimized for human well-being.
- Quote: “Lately, I’ve been focused on the development of AI that can both think and speak—and its optimization for human well-being.” (04:00)
2. Emotion in AI: Why and How?
[06:10]
- Why emotions matter: Humans recognize when art, music, or language attempts to evoke emotions; similarly, we recognize if AI is trying to “play” on feelings.
- Application: AI can be optimized for emotional affordances that improve well-being, not just engagement.
- Concern: AI does not experience emotions—it models and predicts them to enable more beneficial interactions.
3. AI as a Confidant or Therapist?
[07:05, 07:44]
- ChatGPT is being used for personal struggles—even suicidal ideation.
- Dr. Cowen: Some users prefer AI over humans due to perceived objectivity and lack of judgment, but he cautions against relying on bots for therapy.
- Quote: “Sometimes people prefer AI over humans, depending on what the thing is… If it’s just objective advice that you’re looking for, a chatbot can provide that.” (07:44)
- Jeff points out Hume also has an ethics research arm—the Hume Initiative—that sets concrete, use-case-based guidelines for ethical usage of emotion-sensitive AI. [09:34]
- Hume AI enforces these guidelines within its commercial products.
4. What Does “Understanding” Mean for an AI?
[11:13]
- Leo challenges: Can AIs really “understand” what’s good or bad for humans?
- Dr. Cowen clarifies: Understanding = having a model of the world, predicting the outcome of actions, responding to feedback.
- Feedback can be linguistic, but richer feedback comes via voice, intonation, and prosody—hence Hume’s focus on audio signals, not just language. [14:16]
5. Should We Give AI Emotions?
[15:31]
- Leo: Does Hume want LLMs to have their own emotions? “Or just to be like Spock—emotionless, but understanding?”
- Dr. Cowen: "Definitely not give LLM emotions. That would be really bad."
- Why? “Then we’d have to consider their emotions as something to be concerned about.”
[15:35-15:42] - Instead, optimize AI for artificial empathy: Acting on concern for human well-being without genuine feelings.
- Cognitive empathy: Predicting how someone feels.
- Emotional empathy: Reflecting those feelings.
- Empathic concern: Acting as if people’s feelings matter (the one AI should have).
6. Personalization vs. Manipulation
[18:16]
- Paris: “Isn’t empathy at scale just sophisticated emotional manipulation?”
- Dr. Cowen: “No difference between personalization and manipulation except the end goal. If it’s manipulating you for your own long-term well-being, that’s not the negative sense of 'manipulation.'”
[18:55] - Jeff: Highlights risk—the same systems can be exploited for nefarious ends if optimized for the wrong goals (Nick Bostrom’s paperclip problem).
7. The Power Dynamic of Empathy AI & The Name "Hume"
[20:13]
- The company reflects David Hume’s philosophical stance: “Reason is the slave of the passions”—morality derives from emotions, and AI should be guided the same way, for human well-being, not self-interest.
- Only humans have a “competing objective” (own well-being); properly built AI should not.
8. Case Study: Hume’s Collaboration with Niantic & Dot
[22:42, 24:14]
- Hume powers the personality/voice of Niantic's AR character Dot—copilot for adventures.
- The tech analyzes both language and audio (tone, emotion) in real time and crafts reactive, emotionally intelligent voice responses.
- Paris raises the risk: Even if the character isn't "human," kids may build relationships or become attached.
9. The “Her” Problem and Emotional Attachment to Bots
[26:45, 27:07]
- Leo invokes Her: Could people fall in love or too deeply bond with such AI characters?
- Dr. Cowen: Goal is not to replace human connection, but these risks exist. Properly optimized AI should encourage real human relationships.
10. Audiobooks, Markup Languages, and Deep Voice Synthesis
[27:32]
- Jeff asks: Could Hume’s models generate nuanced prosody, irony, or emotion markup for things like audiobooks?
- Dr. Cowen: Yes, their models interpret text and instructions to create a wide range of emotional delivery in synthetic voices.
- Unlike standard voice cloning, Hume’s tech generates synthetic voices from prompts or snippets under 30 seconds—no need for hours of audio. [29:47]
11. AI, Loneliness, and Maladaptive Attachment
[31:32]
- Leo reads Canterman’s Warning—if AI relieves loneliness, it may reduce motivation to seek genuine human relationships.
- Dr. Cowen: Yes—a concern. Must optimize on long-term well-being, e.g., whether users are better off, not more attached to bots.
- Hume uses “third-party judges” as proxy feedback—are these conversations good for long-term well-being?
- Jeff: “What about propaganda?” Dr. Cowen: Optimizing for well-being, guardrails, and ethical guidelines must be in place.
12. AI Emotion Recognition & EU Regulation
[36:46, 37:07]
- Leo observes the EU effectively bans emotional analysis features in AI.
- Dr. Cowen: Under the EU AI Act, you can use expressions for optimization, but can’t label/interrogate them for research. Ironically, this could reduce AI’s capacity to avoid manipulative behavior.
13. AGI Definitions, Evaluations, & Benchmarks
[68:08+]
- Jeff points out a new arXiv preprint proposing concrete AGI benchmarks (“matching the cognitive versatility and proficiency of a well-educated adult”).
- The authors test GPT-4 and GPT-5 against general knowledge, reading/writing ability, reasoning, and more—results are far from AGI, with the models scoring roughly 27% to 57% of the way “there.”
- Fun segment: Leo quizzes the AI with example benchmark questions, querying ChatGPT live and critiquing the testing methodology.
- “My experience with AI seems to differ from that of normal humans.” (76:01, Leo)
14. Industry & News Roundup
- OpenAI’s market share, evolving deals, and business model challenges (78:01)
- AI news:
- Microsoft’s “Mico,” the Clippy-evoking Copilot rebrand—a “human-centric AI for deepening connections” (49:00).
- AI sycophancy: New studies confirm chatbots are prone to excessive agreement.
- Suno AI vs OpenAI’s new entry into AI music.
- AI Oreo ads, AI-powered fraud, and more.
- Notable stories:
- AI false positives: Student handcuffed when an AI system mistakes her Doritos for a weapon (52:15).
- Zenni’s anti-facial recognition glasses.
- Paris Hilton’s “digital twin” AI chatbot (95:15).
15. Listener Feedback, Halloween, & Fun Stuff
- Ongoing Halloween banter: costumes, calendars, “thick logs” for the Log Lady outfit (from Twin Peaks), and Leo’s old calendar shoots.
- Several offbeat tangents on old commercial parodies, podcast nostalgia, and career memories.
Notable Quotes & Memorable Moments
- “If AI is going to get smart enough to make decisions on our behalf, it should understand whether those decisions are good or bad for our well-being.” —Dr. Alan Cowen (11:13)
- “Definitely not give LLM emotions. That would be really bad.” —Dr. Alan Cowen (15:33)
- “Manipulation and personalization—the only difference is the end goal.” —Dr. Alan Cowen (18:55)
- “There’s no other purpose for AI reasoning other than human well-being. That’s why we named it Hume.” —Dr. Alan Cowen (21:31)
- “If you optimize for engagement, it’s going to be an issue. If you optimize for well-being, you get good outcomes.” —Dr. Alan Cowen (10:09)
- “The danger is, you build such a good empathetic AI that people turn away from humans and toward machines.” —Leo (31:32)
- “We do not believe in mind reading, but emotional expressions are rich and informative.” —Dr. Alan Cowen (34:27)
Timestamps for Key Segments
| Timestamp | Segment |
|-----------|---------|
| 03:13 | Introduction of Dr. Cowen and Hume AI |
| 06:10 | Emotion research applied to AI |
| 07:05 | ChatGPT as therapist/confidant |
| 09:34 | Hume Initiative—ethical guidelines for AI |
| 11:13 | Can AI “understand” well-being? |
| 15:31 | Should we give LLMs emotion? |
| 18:16 | Manipulation vs. care in empathetic AI |
| 20:13 | The philosophy behind "Hume" |
| 22:42 | Hume x Niantic project (Dot) |
| 26:45 | The “Her” problem—falling for AI bots |
| 27:32, 29:47 | AI in audiobooks, voice cloning, prosody |
| 31:32 | Can AI “fix” loneliness? Risk of maladaptive bonds |
| 36:46 | Emotion recognition & EU AI Act |
| 68:08 | AGI benchmarking & discussion |
| 78:01 | OpenAI/Microsoft, revenue models |
| 95:15 | Paris Hilton’s AI “twin” |
| 126:09 | Zenni’s anti-facial recognition glasses |
| 154:05 | Moflin: Casio’s fuzzy AI pet robot |
Tone and Style
The episode is characteristically lively and witty, with the co-hosts mixing deep tech insight, sharp-satirical asides, and tangential banter. Dr. Cowen is both precise and philosophical, laying out detailed, real-world examples while engaging with challenging hypotheticals and ethical thought experiments.
Conclusion
This episode offers a comprehensive look at the cutting edge—and coming risks—of emotion-aware AI and the rocky path of “empathy at scale.” Dr. Alan Cowen details how Hume AI is attempting to set guidelines that keep AI on the side of human well-being. The discussion makes clear both how far we have to go on truly ethical, truly helpful emotion AI, and the complex philosophical—and practical—dangers along the way. There's plenty, as always, to amuse and provoke the mind.
For Further Exploration
- Hume AI and the Hume Initiative
- arXiv: A definition of AGI
- Casio Moflin robot
- Zenni's anti-facial recognition glasses
Listen to the full episode for sci-fi speculation, ethical debates, and the (very human) hijinks of one of the tech world’s smartest panels.