The Lawfare Podcast: Scaling Laws – How AI Can Transform Local Criminal Justice
Guest: Francis Shen, Professor of Law & Director of the Shen Neurolaw Lab
Host: Alan Rozenshtein, Associate Professor of Law, University of Minnesota
Date: January 16, 2026
Episode Overview
This episode of the Scaling Laws series, produced by Lawfare and the University of Texas School of Law, examines how artificial intelligence (AI) can transform local criminal justice. Host Alan Rozenshtein interviews Francis Shen, law professor, director of the Shen Neurolaw Lab, and candidate for Hennepin County Attorney. Their conversation spans the promise and pitfalls of AI in criminal justice, from AI's potential to improve investigation and sentencing, to community trust, administrative challenges, and Shen's experience running for office with AI as a central campaign plank.
Key Discussion Points and Insights
1. Francis Shen’s Background in Neurolaw and AI
- Interdisciplinary Origins:
- Shen’s academic journey started by seeking better ways to understand trauma survivors, leading him from law and social sciences to neuroscience.
- He emphasizes the need to go “beyond statistics and policy” to the “why” of behavior, seeking insights from neuroscience.
"All the things I cared about…why did a criminal do what they did?...all of those things are brain related and in fact, really deeply brain related." (Francis Shen, 08:34)
- Shift Toward AI:
- Shen became interested in how AI could process the massive complexity of human behavior.
- AI tools in focus: Initially, brain-machine interfaces and robotics; currently, large language models (LLMs) and predictive algorithms.
"The subset of AI that fascinates me the most...is the AI that is trying to either augment or replace or modify our human information processing." (Francis Shen, 09:04)
2. Evolution of AI in Criminal Justice
- AI in the Early 2010s vs. Now:
- Early research and teaching on law & AI covered machine learning and automation, but LLMs (like ChatGPT) have since transformed accessibility and use.
- The definition of AI is broad and socially constructed:
“Is the spell check on your Microsoft suite of things AI or not? … It’s information processing that humans could do, but that...we should have the machine either do it entirely or help us do it.” (Francis Shen, 13:53)
3. Practical AI Applications in Local Criminal Justice
- Context—What is the Hennepin County Attorney’s Role?
- Local prosecutors (e.g., Hennepin County Attorney) handle most felony prosecutions—a “mass scale production system.”
“Over 10,000 cases a year are coming through that county…and 97% of cases have what’s called a plea deal.” (Francis Shen, 19:39)
- Investigation & Prevention:
- AI-enabled cameras (e.g., Acusensus) analyze video to detect distracted drivers, scale up enforcement, and improve safety.
- AI can review vast amounts of surveillance footage that humans cannot.
“They were able to detect 10,000 violations in one month alone...I think it’s reduced distracted driving and increase in lives saved...” (Francis Shen, 28:04)
- Prevention through prediction: The ideal use of AI is not just to catch bad actors, but to anticipate and prevent crime ("prevention through prediction is better than conviction").
- Adjudication & Sentencing:
- AI can individualize sentencing, leveraging more comprehensive data to match interventions with specific defendants, analogous to “precision medicine.”
“We could...individualize. We just kind of put people in buckets. I think the processing speed of AI...would allow us to really help these humans in the system.” (Francis Shen, 17:39)
- Probation, Parole, and Recidivism:
- Existing risk assessment tools (e.g., COMPAS) historically focus on predicting negative outcomes (recidivism), but Shen argues these algorithms could instead predict positive outcomes or match rehabilitation strategies.
“There’s no reason that you couldn’t instead have a system that was asking, how likely is it that this person is going to thrive...what’s the best to pair here?” (Francis Shen, 44:23)
4. Ethical, Social & Data Concerns
- "Big Brother" Worries:
- Community fears center around over-surveillance and discriminatory bias in data and outcomes.
- Shen draws a distinction between supportive, transparent uses of data and Orwellian overreach.
"It’s not Big Brother...That is a helping hand that’s paying attention to someone." (Francis Shen, 26:33)
- Garbage In, Garbage Out:
- The necessity of high-quality data—bad data leads to biased, potentially unjust outcomes.
- Focus on measuring real-world, positive outcomes, not just recidivism.
- Data Transparency & Trust:
- Full transparency is mandatory, especially regarding any private-sector tool; proprietary “black box” algorithms (like COMPAS) are a non-starter.
"It’s got to be with outside partners, but only partners who are willing to be fully transparent about their work." (Francis Shen, 51:00)
5. Administrative and Political Realities
- Incremental, Trust-Building Approach:
- Avoid "move fast and break things" Silicon Valley ethos; instead, pilot narrow, contained AI projects and build capacity and trust internally.
"You pick one or two pilot projects that are really narrowly tailored and where you think you have some decent data." (Francis Shen, 53:23)
- Workforce Adaptation:
- Prosecutors, judges, and defense attorneys will need to adapt roles and skills to operate alongside new technologies.
- Budget & Capacity:
- No influx of new funds; AI must demonstrate efficiency within existing constraints.
- Community Engagement:
- Convincing voters and stakeholders is a process of listening, explaining, and moving past AI’s “scary headline” reputation:
"The first thing I do a lot of is just listening to concerns because people have heard of AI and most of what they’ve heard is not great...But then I begin to give examples..." (Francis Shen, 56:26)
Notable Quotes & Memorable Moments
- Every Story Is a Brain Story:
"Every story is a not fully understood and sometimes poorly understood brain story...The most complicated thing in the universe." — Francis Shen (07:59)
- On Modern AI in Practice:
"Almost everyone who comes through criminal justice system in the United States is going back out into community...what intervention should happen while they are in the system?...AI can help optimize what’s best for each individual." — Francis Shen (15:10)
- On the Definition of AI:
"AI is anything a machine can’t do. And once a machine can do it, we just stop calling it AI." — Alan Rozenshtein (13:43)
- On Algorithmic Fairness:
“It is possible that...fairness may not mean that everyone gets the same outcome...I really like the idea of individualizing because I think it’s better if you have enough information for everyone.” — Francis Shen (45:29)
- On Running for Office on an AI Platform:
"I’m the only person talking about AI and criminal justice...The first thing I do a lot of is just listening to concerns...We are not going to look up in five years...and say, oh yeah, I’m glad that AI wave passed." — Francis Shen (56:21, 58:33)
Important Timestamps
- 04:38 – Shen’s interdisciplinary entry into neurolaw and pursuit of answers on trauma through neuroscience
- 10:44 – How definitions and practical uses of AI in law have changed since 2010
- 19:39 – The structure of the Hennepin County Attorney’s office and criminal justice pipeline
- 22:16 – Low clearance rates and specific uses of AI-enabled surveillance in local crime investigation
- 28:04 – Example of AI’s impact: AI cameras catching 10,000 distracted drivers in a month
- 36:28 – The human-in-the-loop dilemma: when and why humans should supervise or override AI recommendations
- 44:12 – Bias, fairness, and the “accuracy / equity” tension in criminal justice algorithms
- 49:51 – Administrative and technical challenges of building AI capacity in local government
- 53:23 – The importance of starting small with transparent pilot projects
- 56:20 – Shen’s reflections on running for office with AI at the center of his campaign
Conclusion
Francis Shen and Alan Rozenshtein’s discussion navigates the complex, high-stakes promise of AI in criminal justice at the local level. Shen advocates for careful, transparent application of AI to improve efficiency, equity, and public safety—while listening to community fears, building trust step by step, and ensuring all interventions remain human-centered.
Further Listening/Reading
For more on the intersection of law, policy, and AI, and for ad-free episodes, visit lawfaremedia.org.
