The Lawfare Podcast: Why AI Won't Revolutionize Law (At Least Not Yet)
Host: Alan Rosenstein, Lawfare Institute
Guests: Arvind Narayanan (Princeton University), Justin Kurl (Harvard Law School)
Date: February 12, 2026
Episode Overview
This episode dives deep into the recent Lawfare research report by Arvind Narayanan, Justin Kurl, and Sayash Kapoor. Despite the hype around AI's potential to overhaul legal practice, the guests argue that structural features of the profession — such as adversarial dynamics, regulatory barriers, and the necessity of human judgment — will prevent truly transformative cost savings or radical change in the near term. The episode frames AI as a "normal" (rather than exceptional) technology, one that requires substantive organizational adaptation to be impactful.
Key Discussion Points & Insights
1. AI as "Normal Technology"
(03:41–06:50)
- Four-Stage Framework: Arvind outlines a model inspired by historical technological adoption (e.g., electricity) with four stages:
- Improvements in capabilities.
- Translation into domain-specific products.
- Worker adoption and adaptation.
- Organizational, legal, and normative changes.
- Transformative, But Slow: "People systematically overestimate what can be done in the short term... but underestimate what's going to happen in the long term." (Alan Rosenstein, 05:49)
- Role of Agency: AI’s impacts aren’t inevitable; reform and institutional change are critical for broad benefits.
2. Why Law Is Expensive — Before AI
(10:34–14:44)
- Credence Good: It's hard for clients (even lawyers) to evaluate service quality — market mechanisms don't work as they do with TVs or cars.
- Relative vs. Absolute Value: Service value is often zero-sum, depending on the other party’s moves (especially in litigation).
- Professional Regulation: Licensing laws (Unauthorized Practice of Law or UPL) restrict who can provide legal advice, making innovation difficult.
- “Anytime you apply legal knowledge to specific facts, you might be engaged in the practice of law. If you do so without permission... that's actually a felony.” (Justin Kurl, 12:50)
- Vague Boundaries: “It's an almost Kafkaesque situation... there is such a thing as the unauthorized practice of law... but no one will actually tell you what constitutes [it.]” (Alan Rosenstein, 13:25)
3. AI Capabilities vs. Legal Impact
(16:13–18:58)
- Not Skeptics of Capabilities: Paper concedes AI can already do many discrete legal tasks and may soon outperform most lawyers at set pieces (e.g., briefs).
- Feedback Loops in Law: True legal skill requires nuanced judgment—hard to build in data for AI. Unlike math/coding, learning is slow: “That feedback loop is extremely slow... Nevertheless, that's not where our skepticism comes from.” (Arvind Narayanan, 17:32)
4. Bottlenecks Blocking AI’s Transformative Promise
a. Regulatory Barriers
(18:58–22:24)
- Law Firm Ownership: Only lawyers can own firms, stifling innovative models.
- Administrative Tasks: Small-firm lawyers spend most time on non-billable work. AI can help here when regulations allow.
- Where Regulation Isn’t a Barrier: “You can actually use AI to make it much more efficient... because there aren't those regulatory barriers.” (Justin Kurl, 21:46)
b. Adversarial Dynamics (The "Arms Race")
(27:46–31:58)
- Zero-Sum Productivity: If both litigation sides have AI, they each do much more work, but the outcome is unchanged; costs may rise, not fall.
- "[Lawyers are] now doing 100x [what they used to] in all those relevant domains... but the outcome... is still settling favorably or winning at trial." (Justin Kurl, 28:10)
- Discovery Example: Digitization created more discovery work, not less cost.
- Non-Litigation Law: Adversarial dynamic bleeds into transactional work too, e.g., contract negotiations.
c. Human Involvement as Bottleneck
(33:17–37:59)
- Judgment Can't Be Removed: Even if AI can do much, there's constitutional and moral weight in decisions (e.g., sentencing) that society expects humans to bear.
- “If you're making a decision about whether someone has 10 years in prison... I would want a human being to be involved in that.” (Justin Kurl, 36:38)
- Parallel Tracks: For low-stakes matters, maybe AI-enabled arbitration; for high-stakes or "agency" decisions, keep humans in the loop.
5. Reform Proposals & Their Tensions
(41:08–48:48)
- Reform UPL Rules: Liberalize who can provide legal services; e.g., regulatory sandboxes (Utah, Illinois), regulatory markets (Gillian Hadfield’s proposal).
- Consumer Protection Tension: Current UPL rules are ineffective at ensuring quality and mainly restrict access.
- “70% of people are losing by default because they didn't actually respond... they couldn't afford a lawyer.” (Justin Kurl, 46:26)
- A Moment for Innovation: “Maybe this moment of upheaval around AI is a time when we can have a lot of innovation around the way we regulate different professions...” (Arvind Narayanan, 47:20)
6. Philosophical and Normative Stakes
(39:46–41:00)
- Human Agency vs. AI: “This is what it means to be in control of our own civilization... Not killer Terminator robots. [It boils down to] moments where we put the course of humanity in the hands of machines... a line we should not cross.” (Arvind Narayanan, 39:46)
Notable Quotes & Memorable Moments
- On AI’s slow burn: “People systematically overestimate what can be done in the short term, but then they tend to underestimate... what's going to happen in the long term.” — Alan Rosenstein (05:49)
- On regulatory uncertainty: “It's an almost Kafkaesque situation... there is such a thing as the unauthorized practice of law... but no one will actually tell you what constitutes [it.]” — Alan Rosenstein (13:25)
- On zero-sum arms races: “If you give both sides access to AI and you're sort of locked in this zero sum process, the amount of work that each side does could essentially just go up... But the outcome clients ultimately care about is... unchanged.” — Justin Kurl (28:10)
- On why humans in the loop matter: “This is what it means to be in control of our own civilization... a world in which we leave these kinds of decisions up to AI is not a world I want to live in.” — Arvind Narayanan (39:46, 41:00)
- On the reform imperative: “I view AI as partially a way to fix those problems, but also as a way to sort of push through or motivate the reforms that we've needed for a long time that aren't actually about AI.” — Justin Kurl (44:11)
Timestamps for Important Segments
- AI as Normal Technology: 03:41–06:50
- Why Law is Expensive (Pre-AI): 10:34–14:44
- Capability Skepticism?: 16:13–18:58
- Regulatory Barriers: 18:58–22:24
- Adversarial/Arms Race Dynamics: 27:46–31:58
- Human Judgment Bottleneck: 33:17–39:46
- Normative Stakes: Human Agency: 39:46–41:00
- Should We Welcome or Resist AI in Law?: 41:08–44:57
- Reform Proposals and Regulatory Innovation: 44:57–48:48
Conclusion
The episode offers a sobering, nuanced take: AI’s raw cognitive power will not sweep away the deep structural features that define legal practice. Real gains require changes at organizational, regulatory, and philosophical levels — and much of the drama may play out in adjusting the rules of the legal “game” rather than replacing its human players. Both reform and caution are needed, with continual focus on access to justice, consumer protection, and societal consensus on the human role in decision-making.
