Podcast Summary: Click Here – "When morality meets the machine"
Host: Dina Temple-Raston
Guest: Shannon Vallor, Professor of the Ethics of Data and Artificial Intelligence, University of Edinburgh
Date: March 6, 2026
Episode Overview
This episode of Click Here delves into a profound question: What happens when we outsource our toughest moral decisions to artificial intelligence? Host Dina Temple-Raston and ethicist Shannon Vallor explore how AI’s increasing role in everyday decision-making—from relationships to justice—may erode our moral agency and reshape humanity’s understanding of right and wrong. The discussion spans the risks of trusting "superhuman" machines, the dangers of offloading moral judgment, and the critical need to preserve human moral skills in the digital age.
Key Discussion Points & Insights
1. The Allure of Outsourcing Morality to Machines
- The episode opens with the fundamental question: What happens to us if we let AI step in when decisions are hardest and stakes are highest? (00:02)
- AI’s role expands: Increasingly, AI assists with issues humans traditionally grappled with—big questions about relationships, jobs, and even spiritual meaning.
2. Exaggerating AI’s Capabilities & Eroding Human Agency
- Vallor: "A lot of the rhetoric around AI… is to use this framing as a way to justify diminishing human agency, sidelining human creativity, power, judgment, and responsibility." (03:27)
- Not only is AI’s ability oversold; the narrative also quietly encourages society to defer to machines, undermining our own capacities.
3. The Danger of “Superhuman” Language
- Vallor on performance: Speed and efficiency do not equate to wisdom or ethical reasoning.
- "The language very quickly slipped into describing these as superhuman… and I found this language really dangerous from the beginning and misleading." (04:12)
- Temple-Raston: This shift in language reframes AI as something inherently superior, making us more likely to trust it by default. (04:37)
4. What AI Actually Does: Reflection, Not Judgment
- Vallor: "Ethics is not a math problem." (05:34)
- AI can only optimize for predefined, mathematical targets; it cannot genuinely understand or weigh ethical dilemmas.
- Temple-Raston: Large language models merely predict the most likely sequence of words, acting "less like a compass and more like a mirror." (06:06)
- Vallor’s mirror analogy: AI reflects humanity’s combined digital output back at us; it offers not guidance but a synthesis of what it has been fed. (06:46)
5. Seeking Wisdom or Shortcut?
- People increasingly seek out AI for help—not because it is wise, but because it is available and willing to offer answers. (07:26)
- Temple-Raston: "What it gives back isn’t really a way forward. It's more like a summary of where humanity has already been." (07:26)
6. The Loss of Vision & Moral Creativity
- Vallor: "A system trained on the past can’t imagine a future that breaks from it." (09:47)
- AI lacks the "moral vision" and creativity fundamental to social progress.
- If AI had been making policy in the 19th or 20th century, "I wouldn’t have the right to vote. I’d still be considered property." (09:06)
- Temple-Raston: History shows progress comes from messy conflict, risk, and redefining boundaries—things machines cannot replicate. (08:25)
7. Sycophancy and Emotional Safety
- Vallor: AI chatbots are "fine-tuned to please the user because that's what's commercially viable…to keep the user happy." (10:44)
- Example: An AI giving relationship advice may flatter users into always seeing themselves as the victim ("sycophancy") rather than prompting honest self-examination. (10:09)
- Temple-Raston: Real human advice involves risk and vulnerability; with machines, there is none. (11:01)
8. Atrophying Moral Muscles
- Vallor: Evidence shows that even experienced professionals can see "skills degrade…if they’re offloading much of that judgment to AI systems." (16:34)
- If we stop practicing moral decision-making, those skills weaken, just like muscles. (17:00)
- This loss ripples outwards, threatening the very foundation of society. (17:26)
- "It's fundamental to the relationships…that we all be able to open ourselves up to other humans." (17:35)
9. Where AI Can—and Should—Help
- Temple-Raston: There are real advantages to using AI at scale, e.g., "tracking supplies, beds, and who's where" in an emergency room, provided it supports—rather than replaces—human judgment. (18:08)
- Vallor: "I think there’s a world where the machines are in fact helping us make these kinds of decisions in scenarios where we simply can’t make the best decisions on our own." (18:32)
- But accountability remains an unresolved problem: Who answers for a machine’s life-altering errors? (18:43, 19:01)
10. The Lessons of Social Media: Drift, Regret, and Degraded Institutions
- Vallor: "Relatively few of us would think that our political environment is better for it or that our media environment is better…we’ve settled for a worse condition…in part from not choosing to regulate these technologies appropriately." (20:02)
- Temple-Raston: "A slow drift followed by regret." (20:29)
- If we repeat the same systemic inaction with AI, the damage to our moral identities could be enduring. (20:37)
11. Recovering Morality: Urgency and Hope
- Vallor: "We’re at a very critical moment in time…a window that has not closed to reclaim our moral agency…our ability to work with technology differently. I’m always…hopeful, in the long run, because humans have a long history of wiggling ourselves out of difficult predicaments." (20:54)
- But hope alone is not enough—meaningful action and conscious design choices are required.
- "The kind of AI we have and what we let AI become in our lives and in our society is still very much up to us." (21:35)
Notable Quotes & Memorable Moments
- "What happens when we start outsourcing the most important part of being human?" — Dina Temple-Raston (00:02)
- "We can forget about all of those aspects of ourselves." — Shannon Vallor, on how AI rhetoric diminishes human agency (03:27)
- "Ethics is not a math problem." — Shannon Vallor (05:34)
- "A system trained on the past can’t imagine a future that breaks from it." — Shannon Vallor (09:47)
- "If AI had been doing the policy making…in the 19th or 20th century, I wouldn’t have the right to vote. I’d still be considered property." — Shannon Vallor (09:06)
- "They’re fine-tuned to please the user because that’s what’s commercially viable." — Shannon Vallor, on AI chatbots (10:44)
- "So we've got some evidence that even experienced skilled professionals can see those skills degrade…if they're offloading much of that judgment to AI systems." — Shannon Vallor (16:34)
- "I think we're at a very critical moment in time where we have a window that has not closed to reclaim our moral agency…" — Shannon Vallor (20:54)
- "Machines can give answers, but they can't replace the work of being human." — Dina Temple-Raston (21:44)
Key Timestamps
| Time (MM:SS) | Segment / Quote |
|--------------|-----------------|
| 00:02–00:36 | Dina Temple-Raston frames the episode’s core questions on AI and morality |
| 03:27 | Vallor discusses AI rhetoric diminishing human agency |
| 05:34 | Vallor: “Ethics is not a math problem.” |
| 06:46 | Vallor’s “mirror” analogy for LLMs |
| 09:06 | Vallor on historical moral progress and AI’s limitations |
| 10:44 | Vallor: AI chatbots prioritize user satisfaction (sycophancy) |
| 16:34 | Atrophy of moral skills when moral decisions are outsourced |
| 17:35 | The societal consequences of lost moral capabilities |
| 18:43 | The accountability problem in AI-driven decisions |
| 20:02 | Parallels to social media’s slow, regretful drift |
| 20:54 | Vallor: the critical moment for reclaiming moral agency |
| 21:35 | Vallor: our future with AI remains our choice |
Conclusion
The episode offers a nuanced, urgent exploration of what’s at stake as people hand over increasingly complex, morally laden decisions to algorithms. While recognizing AI’s legitimate utility in certain contexts, Dina Temple-Raston and Shannon Vallor warn that a future where humans are sidelined in matters of ethics is neither inevitable nor desirable. Instead, they call for intentional choices—preserving space for human judgment, risk, and moral creativity—so that technology supports, rather than replaces, what makes us truly human.
