Philosophy Bites - Walter Sinnott-Armstrong on AI and Morality
Podcast Hosts: David Edmonds, Nigel Warburton
Guest: Walter Sinnott-Armstrong (Duke University)
Date: June 14, 2024
Overview
In this episode, David Edmonds and Nigel Warburton engage philosopher Walter Sinnott-Armstrong in a rich discussion about the challenge of embedding human morality into artificial intelligence systems, especially in life-and-death situations such as medical triage. Sinnott-Armstrong explains why neither a single moral framework nor the mere aggregation of human judgments provides a satisfactory solution. He advocates for a hybrid approach that blends principled reasoning with empirical data, using concrete examples from medical ethics such as kidney allocation. The conversation addresses major philosophical hurdles, technical complications, practical implementation, and the broader implications for AI ethics across different domains.
Key Discussion Points & Insights
1. Defining Artificial Intelligence
- Machine Learning as the Essence of AI
- [00:56] Sinnott-Armstrong: “Artificial intelligence should be defined very broadly. It occurs whenever a machine learns something because learning involves intelligence.”
- Intelligence in AI is fundamentally tied to the system’s ability to learn and improve its methods for achieving set goals.
- Learning and Goal-Seeking
- [01:26]: The crucial components are setting goals and learning improved methods to reach them.
2. Can We Program Human Morality Into Machines?
- Problems With "Top-Down" Approaches
- [01:39] Edmonds: “If I’m a utilitarian, I just program it to maximize happiness... If I’m a Kantian, I program Kantian ethics.”
- [01:55] Sinnott-Armstrong: “The Kantians won’t like the utilitarians and the utilitarians won’t like the Kantians. Who made you the dictator who got to tell everybody else which moral system should be built into the machine?”
- Embedding any one ethical theory sidelines widely held alternative perspectives.
- "Bottom-Up" Aggregation and Its Dangers
- [02:22]: Collecting massive data from the internet or people's behaviors reflects actual human moral judgments, but also absorbs biases and prejudices.
- [03:04] Sinnott-Armstrong: “Sure, humans would say that, but that just shows the weakness of humans.”
- Not only does this import societal flaws (racism, sexism), but it provides no clear rationale for decisions—yielding a “black box”.
3. The Hybrid Solution: Principles + People’s Values
- Combining Approaches for Conflicting Cases
- [03:59] Sinnott-Armstrong: “Both, and make them work together. We call it a hybrid.”
- A pragmatic method: solicit community views on what features matter (e.g., waiting time, dependents, prognosis) and balance these against some overarching moral principles.
- Case Study: Allocating a Scarce Kidney
- [04:23] Sinnott-Armstrong: “Which features of the patient matter to which of the two patients should get the kidney?”
- Community values (age, dependents, waiting time, criminal record) are weighed in specific scenarios, and the system refines its models by testing public agreement on conflicting hypothetical cases.
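The refinement loop described above — asking the public which of two hypothetical patients should receive the kidney, then adjusting how much each feature matters — can be sketched as learning feature weights from pairwise comparisons. The following is a minimal, hypothetical illustration: the feature names, the tiny survey dataset, and the logistic update rule are all invented for the sketch and are not the actual system discussed in the episode.

```python
# Hypothetical sketch: learn how much each patient feature matters
# from pairwise "who should get the kidney?" survey answers.
# Feature names and data are invented for illustration.

import math

FEATURES = ["years_waiting", "num_dependents", "expected_benefit"]

# Each survey answer: (features of patient A, features of patient B,
# 1 if respondents favored A, 0 if they favored B).
answers = [
    ([5, 0, 0.6], [1, 2, 0.6], 1),   # longer wait favored
    ([2, 3, 0.5], [2, 0, 0.5], 1),   # more dependents favored
    ([1, 1, 0.9], [1, 1, 0.3], 1),   # better prognosis favored
    ([0, 0, 0.4], [6, 2, 0.8], 0),   # B better on every feature
]

def score(weights, features):
    return sum(w * x for w, x in zip(weights, features))

def learn_weights(answers, steps=2000, lr=0.1):
    """Logistic regression on feature differences: a simple model of
    which patient the surveyed public would prefer."""
    w = [0.0] * len(FEATURES)
    for _ in range(steps):
        for a, b, chose_a in answers:
            diff = [xa - xb for xa, xb in zip(a, b)]
            p_a = 1.0 / (1.0 + math.exp(-score(w, diff)))  # P(A preferred)
            err = chose_a - p_a
            w = [wi + lr * err * di for wi, di in zip(w, diff)]
    return w

weights = learn_weights(answers)
for name, w in zip(FEATURES, weights):
    print(f"{name}: {w:+.2f}")
```

Testing the learned weights against new conflicting hypotheticals, and re-surveying where the model and the public disagree, is how such a system could iteratively refine its picture of community values.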
4. Consensus, Local Variation, and Ethical Limits
- Resolving Disagreement and Majority Rule
- [05:44] Edmonds: What if I don't agree with the consensus in my region?
- [06:05] Sinnott-Armstrong: “In these medical situations, I think you should go with...what the local values are...It’s fine if one hospital has slightly different values than the other...But there are limits.”
- The system can accommodate local, culturally sensitive rules with red lines against clear injustices (like racism).
- Where To Draw the Boundaries
- [06:49] Sinnott-Armstrong: “I’d have to give you all the arguments for why racism is bad ... When we do our surveys, we find that over 95% agree that race should not matter.”
- Broad agreement on key ethical boundaries (race, gender) permits variation elsewhere.
5. Expertise Versus Popular Opinion
- Should Doctors’ Views Count More?
- [07:43] Edmonds: Should expertise be weighted?
- [08:07] Sinnott-Armstrong: “If people are not aware of those facts and they would make different judgments if they knew those facts, then you want to favor the judgments of the people who know the facts.”
- However, if both the lay public and the experts are well-informed, their opinions should be weighted equally.
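Sinnott-Armstrong's weighting rule — favor the judgments of those who know the relevant facts, but weight everyone equally once all are informed — can be illustrated with a toy calculation. The survey data and the specific weight value below are invented for the sketch.

```python
# Hypothetical sketch: weight survey judgments by whether the
# respondent knew the medically relevant facts.
# Data and the weighting factor are invented for illustration.

responses = [
    {"favors_a": True,  "knows_facts": True},
    {"favors_a": True,  "knows_facts": True},
    {"favors_a": False, "knows_facts": False},
    {"favors_a": False, "knows_facts": False},
    {"favors_a": False, "knows_facts": False},
]

def weighted_support_for_a(responses, informed_weight=3.0):
    """Informed judgments count more; if every respondent is informed,
    this reduces to a simple equal-weight majority."""
    total = weighted = 0.0
    for r in responses:
        w = informed_weight if r["knows_facts"] else 1.0
        total += w
        if r["favors_a"]:
            weighted += w
    return weighted / total

print(f"weighted support for A: {weighted_support_for_a(responses):.2f}")
```

In this toy data, a raw majority favors patient B, but the informed minority's judgment prevails once knowledge of the facts is weighted in — matching the intuition that informed judgments should count for more.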
6. Practical Implementation in Medicine
- How the AI System Assists Rather Than Dictates
- [09:26] Sinnott-Armstrong: The AI can serve as a check or confirmation for doctors, potentially highlighting overlooked factors or prompting further discussion if its recommendation differs.
- [10:08] Edmonds draws a parallel to GPS reliance—will we end up deferring to the AI automatically?
- [10:35] Sinnott-Armstrong: “What they’re going to ask is why? Why is the AI picking patient A instead of patient B?...the AI will be able to say because of this feature and this feature, so it can give the doctor a reason.”
- Explainability and Transparency
- Unlike black-box models, the hybrid system is designed to report why it recommends one option over another, thus supporting (rather than replacing) expert judgment.
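The kind of reason-giving described here — "because of this feature and this feature" — falls naturally out of a feature-weighted model, since each feature's contribution to the final score can be reported separately. The following toy recommender shows the idea; the weights and patient values are invented for illustration.

```python
# Hypothetical sketch: a feature-weighted recommender that explains
# its choice by reporting each feature's contribution to the score.
# Weights and feature values are invented for illustration.

WEIGHTS = {"years_waiting": 0.4, "num_dependents": 0.3, "expected_benefit": 2.0}

def explain_choice(patient_a, patient_b):
    # Per-feature contribution to the score margin (A minus B).
    contributions = {
        f: WEIGHTS[f] * (patient_a[f] - patient_b[f]) for f in WEIGHTS
    }
    total = sum(contributions.values())
    pick = "A" if total > 0 else "B"
    lines = [f"Recommend patient {pick} (score margin {total:+.2f}):"]
    # Report the largest contributions first, as reasons for the doctor.
    for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        if c != 0:
            lines.append(f"  {feature} favors {'A' if c > 0 else 'B'} by {abs(c):.2f}")
    return "\n".join(lines)

a = {"years_waiting": 6, "num_dependents": 0, "expected_benefit": 0.5}
b = {"years_waiting": 1, "num_dependents": 2, "expected_benefit": 0.6}
print(explain_choice(a, b))
```

Because every recommendation decomposes into named feature contributions, a doctor can inspect, question, or override the reasoning rather than defer to an opaque verdict.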
7. Future Prospects for Moral AI
- Current and Near-Future Status
- [11:57] Sinnott-Armstrong: “AI is already used in some kidney transplant centers, but not for moral purposes...We’re talking 10 years in the future, but 10 years from now, I wouldn’t be surprised.”
- The hybrid solution is a work-in-progress with potential for deployment a decade hence.
- Applications Beyond Healthcare
- [12:50]: Expansion to dementia care, hiring practices (addressing fairness, gender, and race), and even the military.
- “They’re all interested in using this kind of technology not because they think it’s going to work, but because they think it’s worth trying.”
Notable Quotes & Memorable Moments
- On the impossibility of imposing one morality
- “Who made you the dictator who got to tell everybody else which moral system should be built into the machine?” – Walter Sinnott-Armstrong, [01:55]
- On the limitations of crowdsourcing morality
- “Sure, humans would say that, but that just shows the weakness of humans.” – Sinnott-Armstrong, [03:04]
- On the core advantage of hybrid systems
- “You need some pre principles, but you don’t want to impose your own principles. So you ask people, what are the features that really matter to you in a moral situation?” – Sinnott-Armstrong, [03:59]
- Why expertise sometimes trumps lay opinion
- “If people are not aware of those facts and they would make different judgments if they knew those facts, then you want to favor the judgments of the people who know the facts.” – Sinnott-Armstrong, [08:07]
- On the real role of AI in ethical choices
- “This system doesn’t necessarily dictate the final answer, but it can be helpful in the process.” – Sinnott-Armstrong, [09:26]
- On the future of AI ethics
- “We’re talking 10 years in the future, but 10 years from now, I wouldn’t be surprised.” – Sinnott-Armstrong, [11:57]
Timestamps for Key Segments
- Introduction to the problem [00:19–01:39]
- Defining AI [00:43–01:26]
- Limits of "top-down" ethical programming [01:39–01:55]
- Pitfalls of "bottom-up" learning [02:22–03:36]
- Hybrid solution in practice [03:59–05:44]
- Handling disagreement and local variation [05:44–07:13]
- Role of expertise [07:43–09:12]
- How AI actually helps in moral cases [09:12–10:35]
- Explainability and transparency [10:35–11:43]
- Real-world deployment and future prospects [11:43–12:42]
- Other domains for AI morality [12:50–13:36]
Conclusion
Walter Sinnott-Armstrong articulates the depth and complexity of “teaching” morality to AI. He rejects both pure principle and naïve aggregation in favor of a hybrid solution grounded in pluralism, transparency, and ongoing public engagement. The conversation offers a candid, nuanced perspective on the promise and peril of moral AI, and sketches a roadmap for ethically responsible innovation in medicine—and potentially far beyond.
