The Lawfare Podcast: Scaling Laws - The Ivory Tower and AI
Live from the Institute for Humane Studies' Technology, Liberalism, and Abundance Conference
Date: October 1, 2025
Participants:
- Host: Kevin Frazier (AI Innovation and Law Fellow, Texas Law; Senior Editor at Lawfare)
- Guests:
- Neil Chilson (Head of AI Policy, Abundance Institute; former Chief Technologist, FTC)
- Gus Hurwitz (Senior Fellow and CTIC Academic Director, Penn Carey Law School; Director of Law and Economics Programs at ICLE)
Overview
This lively, in-person episode brings together legal academics and policy experts to dissect the fraught intersection of academia (“the ivory tower”), institutional incentives, and AI policy in the United States. The discussion zeroes in on why AI policy feels so muddled, the roles of different actors from industry to academia in shaping perception and regulation, and how interdisciplinary challenges and academic structural issues hinder smarter policy responses.
Key Discussion Points & Insights
1. Why Is AI Policy So Muddled?
(Start: 03:49)
- Salience Through Design: Neil Chilson argues that much of the confusion around AI policy stems from how technologies like ChatGPT, with its humanlike interface, captured the public imagination and triggered old sci-fi fears.
  "If ChatGPT wasn't ChatGPT... if it was, like, Biology LLM or something, nobody would be talking about it the same way." — Chilson (04:22)
- The 'AI' Label Problem: Policymakers struggle because "AI" covers too broad a swath of technologies, making regulation difficult to conceptualize. Chilson suggests reframing:
  "If instead of calling it artificial intelligence they just called it advanced computing... we're just talking about regulating computers. How do I drill down on that?" — Chilson (05:27)
- Historical Comparison: Gus Hurwitz notes we didn't have "transistor policy" or "steam engine policy," so why "AI policy"? Social and legislative responses typically follow crises or major events rather than arriving as preemptive frameworks.
  "When Bell Labs was developing the transistor, did we have transistor policy?... Why in the world should we have AI policy?" — Hurwitz (07:07)
2. The Role of Fear and Media Echo Chambers
(12:42)
- Recycled Policy Debates: Many AI debates are the same as those around past technologies (privacy, misinformation, etc.) with new "AI" labels.
  "So many of these debates are just rehashes of all tech policy in the past... reframed in the context of AI." — Chilson (12:52)
- Uniqueness of the "Doomer" Contingent: Chilson sees the "AI could destroy the world" narrative as unique, especially because it's often pushed by industry insiders, unlike past tech scares.
  "I can't remember a technology... where there was a heavy contingent, a very vocal contingent, one that got a lot of press, who was saying that this technology will kill humanity." — Chilson (13:29)
- Notably, CEOs like Sam Altman publicly called for regulation, which "did not happen with the Internet. Congress didn't even know what the Internet was for a really long time. That is unusual." (14:19)
- Public Choice Dynamics: Hurwitz underscores that industry calls for regulation are often self-serving, aiming to shape the market in incumbents' favor and limit upstart competitors.
  "Please regulate me. I'm already the established player... but also make it harder for others who aren't already established to become competitors." — Hurwitz (15:57)
3. Academia’s Structural and Incentive Problems
(18:35)
- Self-Sorting in Policy Influence: Chilson notes that academics with technical backgrounds who engage in policy often do so because they are predisposed to see technology as a problem needing governance, not because they are technology optimists.
  "If you have a technical background and you believe in markets... you're not coming to D.C. asking for things." — Chilson (19:38)
- Selection Bias in Legal Academia: Drawing on his own path, Hurwitz observes that law students interested in technology tend to view tech as a problem for law to solve, not an opportunity. He also questions the efficiency of teaching law students about engineering.
  "You go to law school because you see problems and you want to use the law to fix them." — Hurwitz (24:02)
  "The benefits of teaching engineers a little bit about law and policy significantly outweigh the benefits of teaching law students just a little bit about technology." — Hurwitz (25:30)
- Law vs. Engineering Mindsets: Both guests agree that many engineers now interested in policy view law as "code": deterministic and debuggable, which leads to naive assumptions. Chilson's "AI legislative red-teaming" exercises help expose this gap.
  "They think of law and policy as essentially... an engine or maybe even a computer that you can debug." — Chilson (27:13)
  "You can sort of see the scales fall from their eyes. They're like, oh wait, you mean like somebody might try to misuse this law." — Chilson (28:16)
- Incentivizing Interdisciplinary Scholarship: Frazier recounts the lack of incentives or recognition for interdisciplinary work in law schools.
  "There's no incentive for a lot of junior scholars... to write interdisciplinary scholarship, and to write scholarship in a way that people will actually read it." — Frazier (29:44)
4. Fixing the Talent Pipeline and Academic Incentives
(32:57)
- Hard to Change from Within: Hurwitz explains that most universities incentivize traditional disciplinary work, making it hard for interdisciplinary policy and tech experts to get hired or promoted.
  "If I were to want to get a tenure track position in an engineering program... they wouldn't know what to do with me." — Hurwitz (33:20)
- Architectural and Social Barriers: Even the physical distance between departments on campus discourages interdisciplinary collaboration.
  "If you have to walk eight minutes in the Texas sun to go to the CS department and you're wearing a suit... you're not making that walk." — Frazier (37:27)
  "One of [Bell Labs'] really simple but powerful innovations was a really long hallway... [forcing] people to interact in an organic, stochastic way." — Hurwitz (38:23)
- Models for Change: Hurwitz suggests embedding actual law faculty in engineering departments to teach policy, raising each community's sophistication about the other's domain.
5. What Are We Getting Wrong in AI Policy?
(39:29)
- Treating AI as One Thing: Chilson argues it is a mistake to regulate "AI" as a single technology rather than as a general-purpose tool with diverse applications, most of whose problems existing law already addresses.
  "We can have a regime that regulates it as a single technology, whereas it's a general purpose technology that will be applied in every single field." — Chilson (39:34)
- Change Is Coming for Academia, Too: Both see higher education facing disruption from AI, with outside pressure potentially forcing real change even in "the ivory tower."
  "When you increase that competitive market, the institutions... can change." — Chilson (41:35)
- Optimism for Institutional Reform: Hurwitz cites "disruptive innovation" theory: change may come not from the top universities but from smaller, nimbler competitors introducing new interdisciplinary models.
  "If a smaller university starts a progress studies department... that becomes a model that other universities will replicate." — Hurwitz (42:38)
6. Final Words: Experimentation, Optimism, and a Call to Action
(43:18+ end)
- Chilson urges people to actually use AI; since the bar for experimenting is so low, it is "useful to just try it."
  "People should use it... The barriers to try this stuff out... are so low." — Chilson (43:31)
- Hurwitz ends on optimism despite policy inertia, highlighting the need for positive, "abundance"-focused framing in technology and policy debates.
  "I'm in a moment of optimism. I think that we are in a moment of possibility, change, dare I say it, potential future abundance." — Hurwitz (43:51)
- Frazier directs students to push their schools on interdisciplinarity, evaluation metrics, and adaptive education models.
Notable Quotes & Moments
| Timestamp | Quote | Speaker |
|-----------|---------------------------------------------------------------------------------------------------------------------|----------------|
| 04:22 | “If ChatGPT wasn’t ChatGPT... if it was, like, Biology LLM or something, nobody would be talking about it.” | Chilson |
| 07:07 | "Why in the world should we have AI policy?" | Hurwitz |
| 13:29 | "I can't remember a technology... where there was... a very vocal contingent... saying that this technology will kill humanity." | Chilson |
| 15:57 | "Please regulate me. I'm already the established player... but also make it harder for others who aren't already..." | Hurwitz |
| 24:02 | "You go to law school because you see problems and you want to use the law to fix them." | Hurwitz |
| 25:30 | "The benefits of teaching engineers a little bit about law and policy... outweigh... teaching law students tech." | Hurwitz |
| 28:16 | "Oh wait, you mean like somebody might try to misuse this law." | Chilson |
| 39:34 | “We can have a regime that regulates it as a single technology, whereas it’s a general purpose technology...” | Chilson |
| 43:31 | "People should use it... The barriers to try this stuff out... are so low." | Chilson |
| 43:51 | “I'm in a moment of optimism. I think that we are in a moment of possibility, change, dare I say it, abundance.” | Hurwitz |
Timestamps for Important Segments
- 03:49 — The uniqueness of ChatGPT’s rise and public policy confusion
- 07:02 — Why default to "AI policy," and are we overreacting?
- 12:42 — The shifting landscape: media, doom narratives, and policy actors
- 18:35 — Academics, incentives, and ivory tower introspection
- 22:57 — Talent pipeline failures in government and academia
- 29:44 — The challenge of interdisciplinary scholarship in law
- 33:06 — Academic incentives and the nuts-and-bolts of institutional resistance to change
- 39:29 — What’s fundamentally mistaken in the current AI policy discourse?
- 41:53 — Prospects for institutional reform and competitive disruption
- 43:18 — Final practical and philosophical takeaways
Takeaways
- Much of current AI policy debate recycles old tech policy issues, repackaged under new “AI” labeling, often intensified by media and industry narratives about existential risks.
- There is a dangerous mismatch between the technical understanding available in government and law and the way academics are incentivized to engage with tech policy; selection biases and departmental barriers impede more productive frameworks.
- Actual interdisciplinary collaboration (especially teaching law and policy to engineers) is more effective than the reverse, but academic and institutional incentives, even down to campus architecture, get in the way.
- Positive change is possible, especially from smaller, more nimble institutions or outside forces—but current incentive structures in the legal academy are poorly aligned with real-world technology needs.
Closing Thoughts
The panel ultimately issues a call to experiment, be optimistic, and demand more of educational institutions—especially in interdisciplinary teaching, research incentives, and openness to disruption both from within and beyond traditional academia.
