Decoder with Nilay Patel: "The Surprising Case for AI Judges"
Date: February 12, 2026
Guest: Bridget McCormack, President & CEO, American Arbitration Association; Former Chief Justice, Michigan Supreme Court
Overview
In this thought-provoking episode, Nilay Patel dives deep with Bridget McCormack into the use of artificial intelligence in legal dispute resolution. Starting with the fraught state of trust in American judicial systems, the conversation weaves a path from the practicalities and politics of running public courts to the transformative—and possibly perilous—potential of large language models (LLMs) as arbiters of justice. Bridget unpacks her experience as chief justice and her current pioneering work with the American Arbitration Association (AAA), which has launched an AI-powered, documents-only arbitration platform for construction disputes. The discussion tackles fairness, transparency, bias, and how AI might both reinforce and redefine our collective sense of justice.
Key Discussion Points & Insights
1. The (Im)Perfection of the Human Legal System
[05:50] – [12:54]
- Administrative Burden in Public Courts:
  Bridget describes her former role as akin to being a "CEO of public dispute resolution," working with limited funding and navigating "separately elected judges with different budgets" in a decentralized and inconsistent system.
  - "Your funding isn't based on how well you do... you have to walk over to the legislature and convince some brand new representative... that online dispute resolution is really going to increase access to justice." (McCormack, [06:35])
- Legal System Isn't Deterministic:
  Nilay challenges the notion that courts are just input-output "computers," with Bridget responding that, while more predictability would reduce disputes, "humans are... flawed. It therefore isn't always predictable." (McCormack, [10:49])
  - Certain cases require interpretation and discretion (e.g., new statutes or ambiguous laws).
- Accessibility and Wrongful Decisions:
  92% of Americans can't afford legal help, and most state (not federal) cases involve unrepresented parties.
  - "If you look at the rate of reversals by appellate courts... they're getting a lot wrong." (McCormack, [13:18])
  - "The wrongful convictions tell us that in 3–5% of cases there was an error made... If you're landing planes, not a great number." (McCormack, [15:01])
2. Trust and Perception in Arbitration vs. Public Courts
[15:53] – [19:44]
- Declining Trust in Institutions:
  Bridget observes that people are "locked out" of formal justice primarily by cost and complexity, leading to declining confidence.
  - "Imagine if we said, if you want to drive on the highway, you can do that, but you have to hire a driver... We would never accept that, but we accept that most Americans are locked out of their formal justice system." (McCormack, [17:22])
- Perception of Arbitration as Biased:
  Despite perceptions that arbitration favors big businesses, data shows that individuals are actually more likely to be heard and receive awards through arbitration than through courts.
3. The Case for AI in Dispute Resolution
[25:18] – [41:33]
- Making Parties Feel Heard:
  Bridget highlights procedural justice research: if people feel "heard" and the ruling is explained to them, they trust the process more, even in defeat.
  - "If the neutral... can explain it to them, they're far more likely to grow trust in institutions." (McCormack, [26:42])
- AI's Unique Value:
  AI, via LLMs, can "listen" endlessly, restate and check parties' claims, and help ensure all issues are addressed.
  - "At the front end of our AI arbitration process... the agents take in the party's complaints... and then go back to the parties and say, here's my understanding... Did I get that right? And the parties get to say... until the parties are satisfied that they have been heard and understood." (McCormack, [29:30])
- How the AI Arbitrator Works:
  - Current narrow application: a website for documents-only construction disputes. Humans remain "in the loop" at critical stages; arbitrators ultimately review and sign off on awards.
  - "The AI arbitrator is... a bunch of different agents... A bunch operate at the front end... then there are a bunch of reasoning agents... then agents that do a draft award... There's a human in the loop throughout." (McCormack, [32:32])
- Technical Build:
  Built with QuantumBlack, McKinsey's AI team, and grounded in a handbook derived from historical cases and expert input. The team is trained to build out further use cases gradually.
4. Limitations, Safeguards, and Appropriate Use Cases
[39:56] – [46:25]
- Current Constraints:
  The AI handles only written documents and testimony, not live witness credibility. A human arbitrator can intervene at any stage.
- Expansion Plans:
  Next up could be supplier disputes, health provider-insurer conflicts, and other document-rich B2B cases.
  - "There's actually an unlimited number of disputes for which this is appropriate. There are also some for which it will never be appropriate." (McCormack, [44:06])
- Ethical Boundaries:
  AI should never arbitrate criminal cases or disputes involving governments, which must remain in public forums.
  - "Cases where the government is accusing you of something... should happen in public courtrooms..." (McCormack, [44:26])
5. Trust, Transparency, & The Hallucination Problem
[50:45] – [55:51]
- AI Hallucinations & Bias:
  Bridget acknowledges the risk, which is why AAA's system is narrowly scoped, human-supervised, and "governed, trained and grounded" in its domain.
  - "We keep a human in the loop... We're going to be very transparent about all of our audits, and that's, I think, really critical to growing trust." (McCormack, [51:51])
- Comparing Human and AI Error:
  Bridget bluntly counters AI skepticism with a story of human fallibility:
  - "I called the guy. I said, judge, you've made this claim about me... But I... wasn't there... His answer was, 'Yes, you were.' ...So if you think that the human beings who get tired, who get hungry, who... have cognitive biases are getting every single thing right, then you have a lot more trust in the public justice system than I think most people do." (McCormack, [54:23])
6. Fairness, Accountability, and Public Oversight
[56:22] – [66:09]
- AI Lacks Human Accountability:
  Nilay points out that unlike judges, "An AI system running on a cloud... does not feel accountable to you in that way."
- Bridget's Response:
  Multiple options empower users to choose; some disputes don't need public courts.
  - "If every small business could plan for the disruptions that befall every small business, we'd have a better world... why wouldn't we have more options?" (McCormack, [58:07])
- Nonprofit Mission as Safeguard:
  The AAA's nonprofit status reduces "client service" bias and enables a focus on broadening access to justice for both sides in a dispute.
- Benchmarks, Audits, and Showing the Work:
  - "You can debias a data set a lot easier than you can debias a human. And so, when you benchmark and do your audits of your AI arbitration system and you show your work, you can either convince the public that it's treating both people fairly or you won't." (McCormack, [65:08])
Notable Quotes & Memorable Moments
- On Determinism in Law:
  "If it were more deterministic, we would have fewer disputes. Right? ...Because it's probabilistic..."
  — Bridget McCormack ([10:49])
- On Access to Justice:
  "92% of Americans can't afford help with their legal problems."
  — Bridget McCormack ([12:54])
- On Feeling Heard:
  "If the neutral deciding the dispute can explain it... they're far more likely to grow trust in institutions..."
  — Bridget McCormack ([26:42])
- On Human Error in Courts:
  "The wrongful convictions tell us that in 3–5% of cases there was an error made. ...If you're landing planes, not a great number."
  — Bridget McCormack ([15:01])
- On the Unique Value of AI in Arbitration:
  "The agents go to work again until the parties are satisfied that they have been heard and understood. I mean, maybe we could do that in courts, but we would have to spend a whole lot more money."
  — Bridget McCormack ([29:30])
- On AI Limitations and Scope:
  "We were not prepared to have witness testimony evaluated by agents... that might come one day. But we're not there today."
  — Bridget McCormack ([35:25])
- Skepticism About Human Justice:
  "If you think that the human beings who get tired, who get hungry, who... have cognitive biases are getting every single thing right, then you have a lot more trust in the public justice system than I think most people do."
  — Bridget McCormack ([54:23])
- On the Future of AI Legal Decisions:
  "I think... in some number of years, we will think it's amazing that we let humans drive cars... And I think we'll also think it was probably crazy that we thought a human being had to oversee... disputes..."
  — Bridget McCormack ([60:20])
Timestamps for Key Segments
| Segment                                | Timestamps  |
|----------------------------------------|-------------|
| Introducing AI in Arbitration          | 01:36–03:56 |
| Explaining Difference: Courts vs AAA   | 05:50–08:56 |
| Should Law Be Deterministic?           | 10:49–12:37 |
| Accessibility, Human Error, Trust      | 13:18–15:53 |
| Fairness & Feeling Heard in Law        | 25:18–29:30 |
| The AI Arbitrator: How It Works        | 32:17–36:45 |
| AI's Technical & Practical Build       | 36:49–39:56 |
| Hallucinations & Human Error           | 50:45–55:51 |
| Fundamental Limits: AI vs Human Courts | 56:35–59:43 |
| Who AI Arbitration Is For              | 61:22–66:09 |
| Long-term Vision and Disruption Pace   | 71:31–74:27 |
Tone & Language
- Conversational, candid, sometimes wry (especially from Nilay)
- Bridget is detail-oriented but accessible, blending optimism with realism about AI's limitations
- Both speakers are willing to challenge each other's assumptions and respectfully push into uncomfortable territory
- The discussion maintains a focus on practical implications and real human experiences
Final Thoughts
This episode offers a rare, transparent look at both the promise and potential pitfalls of AI-driven dispute resolution. Bridget McCormack is passionate about using technology to widen access to justice, nuanced about the dangers of ungoverned automation, and hopeful that—if done right—AI could actually deepen trust in legal systems by making outcomes more transparent, fair, and accessible. The conversation doesn’t shy away from uncomfortable edges: AI bias, issues of accountability, and the fact that, for most people, even the flawed promise of arbitration might be their only shot at having a legal dispute heard at all.
Recommended for:
Anyone interested in the future of law, AI ethics, institutional trust, or how real-world automation is reshaping access to justice for everyone—from giant corporations to ordinary people signing yet another inscrutable contract.
