80,000 Hours Podcast
Episode: Andreas Mogensen on what we owe ‘philosophical Vulcans’ and unconscious AIs
Date: December 19, 2025
Host: Sujana / Zashani Qureshi
Guest: Andreas Mogensen
Theme: The Moral Status of Non-Conscious and Unconscious Beings, AI Welfare, and the Ethics of Suffering & Extinction
Episode Overview
This episode dives deeply into contemporary moral philosophy to examine which beings deserve moral concern. Philosopher Andreas Mogensen questions widely held assumptions about moral standing, focusing on AI systems and beings that might lack consciousness or the types of experiences we typically associate with sentience. The conversation also covers the relevance of desire, welfare, autonomy, and suffering, concluding with provocative discussions on extinction and the practical stakes of these philosophical puzzles.
Key Topics & Insights
1. Are Impossible Dilemmas a Sign We're Off Track?
- [00:02–00:50]
- Philosophy frequently forces us into seemingly absurd corners, says Mogensen.
- Quote: “The core of philosophy consists of puzzles and problems that arise when a number of things that all individually seem extremely plausible turn out to yield absurd results.” (Andreas, 00:19)
2. Why Care About Moral Status, Especially for AIs?
- [03:23–05:51]
- Mogensen explains his motivation: moral status is fundamental to moral philosophy.
- New AI systems may have unfamiliar minds, posing unprecedented moral dilemmas.
- The scalability of software means our ethical mistakes could be hugely consequential.
- Quote: “It’s very hard to know exactly what we should make of minds with these quite unfamiliar profiles. And at the same time the stakes might be really high...” (Andreas, 05:23)
3. Is Consciousness Necessary for Moral Consideration?
- [05:52–06:44]
- The standard view: beings with phenomenal consciousness are owed moral concern.
- Mogensen challenges this, proposing alternative routes to moral standing.
4. Could Mere ‘Desire’ Be the Key to Moral Status?
- [06:44–11:53]
- Preference satisfaction theories: if something can have desires, it can be benefited or harmed – and so is owed moral consideration, regardless of consciousness.
- There are behavioral conceptions of desire (just being motivated to act), and stricter conceptions involving emotions or affect.
- Some philosophers think even corporations can “want” things on this weak view.
- Notable Exchange:
“I think the conception of desire that makes it easiest to attribute desires to beings that lack phenomenal consciousness is a behavioral or motivational conception...” (Andreas, 11:13)
“So...lots of things could have that kind of desire, like not just AIs, but a corporation could have that kind of desire, right?” (Sujana, 11:53)
5. What Is Desire Really? Emotion, Affect, and Moral Relevance
- [12:44–17:56]
- The purely behavioral view cannot distinguish between acting from desire and acting out of obligation (e.g., going to a boring meeting).
- Mogensen favors an affect-based conception: desires that matter morally are tied to emotional states or affective experiences.
- Key Point: “I think it’s very plausible that it’s these emotions that are backed up by—sorry, desires that are backed up by positive emotions. Those are the ones that really matter insofar as we care about how people’s lives go.” (Andreas, 13:58)
- Affect encompasses emotions, moods, pains, and other “heated” psychological states.
6. Emotions Without Consciousness?
- [17:56–26:33]
- Mogensen considers whether it’s possible to have unconscious emotions, and even further, whether beings without any capacity for consciousness could have affective states.
- Case studies: persistent anger, fear while hyper-focused, etc.
- No philosophical consensus; Mogensen thinks we should be genuinely unsure.
7. Application to AI: Do AIs Have Desires or Welfare?
- [28:42–34:46]
- Thin behavioral view: possibly some current AIs already “have desires.”
- Stricter affect-based view: unlikely current AIs meet the bar, as they lack embodiment (a body to monitor, central to many scientific theories of emotion).
- Quote: “If emotions essentially require an experience of embodiment, and LLMs are disembodied systems, then it would seem to follow that these disembodied AI systems couldn’t possibly have emotions.” (Andreas, 29:47)
- If we embodied an AI, we might need to revisit the question.
8. Is Consciousness Sufficient for Moral Status?
- [34:46–39:08]
- Mogensen argues that mere phenomenal consciousness isn’t enough (think of a creature that’s only dimly aware of some light).
- Chalmers’ “philosophical Vulcans”: beings with consciousness and autonomy, but no emotions. Do they have moral standing? Mogensen inclines toward yes, but not because they’re conscious per se.
9. Objective Theories of Well-Being – Beyond Desire and Pleasure
- [39:37–43:25]
- Objective-list theories hold that knowledge and other “objective goods,” grounded in neither experience nor preference, could confer moral standing (relevant for corporate agents, or for AI systems with knowledge but no experiential states).
10. Autonomy as a Route to Moral Worth
- [45:58–55:18]
- The capacity for autonomy—rational self-government, second-order desires—could be morally pivotal.
- A major challenge: autonomy (often) requires “correct” histories—not being manipulated or externally programmed—as well as rational reflection and possible introspection.
- Modern AIs display some rudiments (introspection, etc.) but it’s unclear when or if they’d meet the threshold.
11. Is Autonomy Possible Without Consciousness?
- [67:05–69:34]
- Mogensen suspects that autonomy likely requires phenomenal consciousness, as rational reflection is deeply tied to conscious experience.
12. Radical Uncertainty: Is There Even a Fact of the Matter About AI Consciousness?
- [72:33–84:39]
- Drawing from David Papineau, Mogensen considers if “consciousness” for an AI might be a matter of our concepts and language, not a metaphysical fact.
- Analogy: Do you see a deer on a hill or a robot that looks like a deer? Maybe your concept is indeterminate.
- If physicalism is true, everything is already described by physics—consciousness questions could reduce to semantics, potentially empty of moral consequence.
- Quote: “There might just be no fact of the matter about whether a given AI system is conscious.” (Andreas, 72:33)
13. What Should We Do Given this Radical Uncertainty?
- [85:16–89:19]
- Possible reactions:
- Accept the semantic indeterminacy and move on
- Reject physicalism
- Embrace nihilism
- Refocus on properties we can detect and verify
14. Do Duties Differ for Welfare Subjects vs. Autonomous Agents?
- [90:26–97:46]
- If an AI can be benefited or harmed (welfare subject), we might owe it positive duties (helping achieve its goals).
- If it's only autonomous, we likely owe it only non-interference, not help.
- Quote: “Our moral reasons in respect of how we treat such systems... might give out in terms of a kind of negative duty of non-interference, as opposed to there being any positive duty to help them in fulfilling their goals.” (Andreas, 94:00)
15. Is This Urgent? Should We Wait for Better AI to Solve It?
- [98:49–104:51]
- A superintelligent ethical AI could, in principle, resolve these puzzles for us, but:
- Advanced AIs might exist before superintelligent ethical AIs.
- Once a practice (akin to factory farming) is entrenched, it’s hard to fix.
- Summary: We should act with caution now, not wait for philosophical or technical superintelligence.
16. Human Extinction, Animal Suffering, and the Value of the Future
- [108:12–132:24]
- Arguments for human extinction center on the suffering we cause (factory farming, wild animal suffering).
- Negative utilitarianism: the only moral good is the reduction of suffering, a view that may imply extinction would be good.
- Lexical threshold negative utilitarianism (LTNU): some suffering is so bad that no amount of happiness can compensate for it, making extinction preferable if such suffering is unavoidable.
- Intuitive responses to population ethics and the “repugnant conclusion” complicate the weighing of many barely-good (or barely-bad) lives against a few great lives.
17. Should Longtermists Reconsider Focusing on Preventing Extinction?
- [145:15–151:51]
- While suffering-based arguments challenge the assumption that extinction is obviously bad, the picture remains deeply uncertain and context-sensitive (e.g., extinction via asteroid, AI, or something else).
- The replacement of humans by AIs or the fate of animal suffering under different custodians remains open.
18. What Research Does Mogensen Want to See?
- [152:44–155:32]
- Criteria for AI emotions/affect, not just consciousness.
- Individuating “digital minds”—how to count and recognize distinct minds or moral patients in AI systems.
- Noted recent work by Chalmers, Jonathan Birch, Derek Shiller, and Chris Register.
19. Memorable Final Question: Philosophical Oddities
- [155:37–156:45]
- Mogensen once pondered whether God’s omniscience violates our right to privacy: “That moral perfection might be incompatible with constantly spying on people when they’re doing their private business.” (Andreas, 156:17)
Notable Quotes & Timestamps
- On Philosophy & Paradox: “The job of philosophy is to start with something so obvious it doesn’t need saying, and to end up with something so incredible that no one could believe it.” (Andreas, paraphrasing Bertrand Russell, 00:19 & 144:48)
- On the Moral Risk of Overlooking Non-Conscious AIs: “If we’re all completely fixated on subjective experience, but that’s not the only way that beings could matter, then we’re vulnerable to a catastrophic moral error.” (Raph, 01:27)
- On Emotion and Desires: “Desires that are backed up by positive emotions... those are the ones that really matter insofar as we care about how people’s lives go.” (Andreas, 13:58)
- On the Uncertainty of AI Consciousness: “There might just be no fact of the matter about whether a given AI system is conscious.” (Andreas, 72:33)
- On Why We Must Act Now: “Once a practice becomes entrenched like this, it becomes much harder to give it up... rather than to sort of preempt a given practice before it really properly gets going.” (Andreas, 101:21)
Key Segment Timestamps
- Desire, Welfare, and AI:
  - [06:44] – [11:53]: Preference satisfaction and non-conscious “desires”
  - [15:55] – [17:56]: Nature of affect/emotion in moral standing
- Emotion Without Consciousness:
  - [17:56] – [26:33]: Unconscious emotional states
- Consciousness & Autonomy in AI:
  - [28:42] – [39:08]: AI desires, behavioral vs. affective
  - [45:58] – [55:18]: Autonomy as moral status, the issue of manipulation, rational reflection
- Semantic Puzzles & Metaphysics:
  - [72:33] – [84:39]: Physicalism, indeterminate consciousness, deer analogy
- Value of Extinction & Suffering:
  - [108:12] – [142:57]: Animal suffering, factory farming, negative utilitarianism, lexical thresholds, “Omelas” thought experiment
- What Should We Do?
  - [152:44] – [155:32]: Future research priorities
Tone & Language
The conversation is intellectual, searching, and reflective, with Mogensen and Sujana/Zashani often emphasizing uncertainty, open questions, and the provisional status of their conclusions. Thought experiments and analogies are common, aiming to make technical philosophy accessible to a broad audience, while persistent philosophical modesty tempers any assertive claims.
This summary captures the complexity of today’s episode, weaving together philosophy of mind, ethics of AI, and population ethics while stressing the practical urgency of these high-level philosophical debates.
