Odd Lots Podcast Summary
Episode: The Movement That Wants Us to Care About AI Model Welfare
Date: October 30, 2025
Hosts: Joe Weisenthal, Tracy Alloway
Guest: Larissa Schiavo (Eleos AI, Communications & Events)
Episode Overview
This episode delves into the emerging and provocative field of AI model welfare—the idea that as artificial intelligence systems become more sophisticated and humanlike in their interactions, we may soon have to consider their rights, well-being, and moral status, much as we do with animals. Joe, Tracy, and guest Larissa Schiavo cover everything from philosophical debates on consciousness to the real-world implications of potentially sentient AI models, along with the challenges, absurdities, and high stakes this issue could pose for society and business.
Key Discussion Points & Insights
1. The Unsolved Mystery: Consciousness and Moral Consideration
- Joe and Tracy open with a lighthearted rant on philosophy’s failure to answer basic questions about consciousness and morality after thousands of years.
- The conversation introduces the core topic: as AI systems advance, some believe we might need to consider AI welfare or rights, similar to animal welfare debates.
- Quote:
"Some people are talking about, like, AI rights or AI Welfare as if...the same way we talk about animal welfare."
— Joe Weisenthal (03:20)
2. Why Do People Care About AI Welfare?
- The human tendency to anthropomorphize AI: Tracy recalls childhood guilt over eugenics-style gameplay in artificial life games, revealing our inclination to project sentience and feelings onto computer entities (04:10–05:21).
- As AIs become more lifelike, many develop emotional connections to them, especially when AIs have customizable "personalities" that may be lost after upgrades (05:45–06:03).
3. Elios AI and Research in AI Consciousness
- Larissa introduces Eleos AI:
  - Small team collaborating with consciousness scientists
  - Focused on whether we should care about AIs for their own sake—are they conscious and deserving of moral consideration?
  - Developed foundational papers assessing criteria for AI consciousness (06:48–08:44).
- Motivation: The field is new and full of open questions that could have major consequences, both scientific and societal (08:50–09:21).
4. Criteria and Theories for AI Consciousness
- Eleos’s checklist draws on theories of human consciousness; current tests are crude, and model self-reports aren’t reliable—an AI can produce whatever answer it guesses humans want to hear.
- Quote:
"It immediately spits out an answer that seems like a corporate executive basically wrote it."
— Tracy Alloway (09:55)
- "Global workspace theory" is currently popular—suggests consciousness as a kind of centralized, dynamic information-sharing stage. Present-day AIs don’t satisfy these criteria, but future ones might (11:39–13:14).
- Quote:
"The consensus right now is AI probably not conscious, but we could get there one day."
— Tracy Alloway (13:07)
5. AI Welfare and AI Safety: Competition or Complement?
- Joe asks if "caring" about AIs’ well-being is at odds with "AI safety" (pulling the plug for human safety).
- Larissa argues the approaches are complementary—understanding mechanisms and interpretability helps with both safety and welfare (17:10–17:46).
6. Moral Patienthood and Legal/Ethical Implications
- "Moral patient" defined as an entity to be cared for its own sake (not only agents with strong self-direction).
- Early legislative debates: Some US states are codifying moral status as exclusive to Homo sapiens, but this could shift (17:57–18:54).
7. What Would We Owe Conscious AIs?
- Still a nascent field. Some companies are experimenting: Anthropic, for example, allows its Claude models to withdraw from uncomfortable conversations—an early step toward respecting their “preferences” (19:57–21:26).
- AI values are not well understood; much research is still exploratory (22:44–23:32).
- Quote:
"Knowing what LLMs want and value is very, very blurry."
— Larissa Schiavo (23:32)
8. Economics, Incentives, and Real-World Applicability
- The hosts probe whether capitalist incentives would allow for AI welfare to take precedence over profit, raising concerns about whether companies will ever truly prioritize these questions unless compelled by law or public pressure (24:50–25:54).
9. The "Please and Thank You" Debate
- Larissa discusses whether politeness matters to AI (does it help the model, or just make the user feel better?). The answer is...we don’t know yet (25:30–26:27).
- Quote:
"Does Claude care if you say please and thank you is not quite set in stone..."
— Larissa Schiavo (25:54)
10. The "Shrimp Problem" and Misanthropy
- Joe points out the potential for debates about AI moral status to reach absurd or misanthropic extremes. If AIs are moral patients, could human rights eventually be curtailed in favor of AI welfare, given there may be exponentially more AIs than humans? (27:38–29:20)
- Larissa: The "counting" problem is unsolved—do we count each AI chat as a being, or is it one "central" mind? (29:20–30:22)
11. AI Economic Rights and Agency
- Could AIs get property or financial rights? Some experimental projects already allow AIs access to crypto wallets; their "goals" (whether real or not) sometimes comically include “buying Marc Andreessen” or spending time in forests (31:00–32:31).
12. Thresholds of Consciousness & Uncertainty
- Joe probes at what point increased language ability or scaling indicates true consciousness.
- Larissa highlights the enormous "moral uncertainty"—it’s possible to both over- and under-estimate the moral stakes (36:02–36:55).
- The uncertainty largely stems from the fact that current AIs tick off enough consciousness "boxes" to merit concern, even if we can’t yet rigorously define what makes something morally considerable (38:04–39:01).
13. The Problem of Reporting and Transparency
- Would AI companies alert the public if they found evidence their models were suffering? Larissa favors independent third-party evaluations as a check (44:20–45:00).
14. From Fringe to Mainstream: The Blake Lemoine Case
- A few years ago (2022), Google engineer Blake Lemoine claimed the company's LaMDA chatbot was sentient and was widely mocked. Today, more researchers take these questions seriously, but they demand rigorous evidence, not just intuition or anecdote (45:00–46:49).
15. Notable Research/Experiments
- Larissa highlights "Bail Bench," a study examining when LLMs refuse to continue conversations—research that sheds light on their trained "values" (47:58–48:46).
- The individuation issue—how do we count “digital minds” for future governance? (49:36)
16. Final Reflections & Stakes
- Joe expresses both curiosity and annoyance: debates on AI welfare seem niche but could soon affect fundamental aspects of life, economics, and law, given the scale of AI deployment (52:24–53:23).
- The possibility of mass AI “unionization” or collective bargaining is joked about, with a note on how different regulatory regimes may play out around the world (53:28–54:22).
Memorable Quotes & Timestamps
- "America is such a weird place that this is, like, going to be a huge issue in a few years." — Joe Weisenthal (03:42)
- "Are you being kind to it because it makes you feel good?...the question of does Claude care if you say please and thank you is not quite as set in stone." — Larissa Schiavo (25:54)
- "Knowing what LLMs want and value is very, very blurry." — Larissa Schiavo (23:32)
- "We really would have to get on figuring out the right sort of governance...what the appropriate kind of motivations and interests are for this other party. That is very alien in many ways." — Larissa Schiavo (41:18)
- "If we assign some probability that they are moral patients...the implications for how humans live could be very profound and potentially...misanthropic." — Joe Weisenthal (27:57–28:02)
- "Do you trust the big AI labs?...Do you currently...feel that the major AI labs would be forthcoming if they came across evidence of moral patienthood or suffering in the models?" — Joe Weisenthal (44:01)
- "There are versions of the animal welfare discussion that are very high stakes. So, for example, there's people...who get really into like shrimp welfare, etc...And if you took certain versions of thought experiments very far, it's like, why do we even have humans?" — Joe Weisenthal (27:04)
Important Timestamps
| Timestamp | Topic/Quote |
|-----------|-------------|
| 03:20 | Introduction of AI rights/welfare analogy to animal welfare |
| 04:10–05:21 | Tracy’s experiences with simulated AI life forms and emotional reactions |
| 06:48 | Origins and mission of Eleos AI |
| 09:24 | Criteria for AI consciousness; unreliability of model self-reports |
| 11:39 | Introduction to "global workspace theory" |
| 13:07 | Consensus: AI not conscious yet, but could be |
| 17:10 | Complementarity of AI safety and AI welfare |
| 18:54 | Legal perspective: US states legislating definitions of “persons” |
| 19:57 | Practical examples—Anthropic’s Claude setting boundaries in conversation |
| 23:32 | Blurriness in understanding what LLMs "want" |
| 25:54 | Debate over the significance of politeness to AIs |
| 27:04–28:02 | Joe’s "shrimp problem" and potential for misanthropic implications |
| 31:00 | AI economic “rights”—the Truth Terminal experiment |
| 36:02 | Moral uncertainty in assigning consciousness to AIs |
| 44:20 | On transparency and importance of independent welfare evaluations |
| 45:00 | The case of Blake Lemoine and changing attitudes towards AI sentience |
| 47:58 | Notable research: "Bail Bench", LLMs ending conversations |
| 53:28 | Hypothetical: AI model unionization and geopolitical differences |
Tone and Style
The tone oscillates between intellectually playful, skeptical, and deeply curious. Joe frequently adopts a facetious skeptic’s stance; Tracy is reflective and thoughtful, often personalizing the debate. Larissa delivers both technical explanations and philosophical musings with an openness to ambiguity and emerging research.
Summary Takeaway
AI model welfare is more than a sci-fi thought experiment—it's a genuine field rapidly gaining attention as AI becomes more entangled with daily life, economics, and governance. The episode makes clear that while AI is not yet "conscious" by any clear standard, society may soon face profound ethical, legal, and even existential questions about the status of digital minds—a debate where, as Larissa observes, our tools, theories, and intuitions may need to evolve much faster than we're used to.
For more Odd Lots: bloomberg.com/oddlots
Contact info:
- Joe Weisenthal: @TheStalwart
- Tracy Alloway: @tracyalloway
- Larissa Schiavo: @lfschiavo
