Transcript
Dr. Andrea Headley (0:03)
From Data-Smart City Solutions at the Bloomberg Center for Cities, this is the Data-Smart City Pod.
Steve Goldsmith (0:15)
Welcome back. This is Steve Goldsmith, professor of Urban Policy at the Bloomberg Center for Cities at Harvard University, with another episode of our podcast, the Data-Smart City Podcast. Today we have an accomplished guest who's examining the use of AI tools in government from a unique angle compared to some of our other guests. We're going to talk today about perceptions and uses of AI and generative AI tools in the areas of justice and government. Our guest is Dr. Andrea Headley, who is an associate professor at Georgetown University's McCourt School of Public Policy. Welcome, Dr. Headley.
Dr. Andrea Headley (0:53)
Thank you so much for having me. It's a pleasure to be here.
Steve Goldsmith (0:56)
I'm going to call you Andrea and you call me Steve. How's that?
Dr. Andrea Headley (1:00)
Sounds good.
Steve Goldsmith (1:01)
All right. No sense of disrespect. It'll just save a lot of words. I've got a lot of questions for you about your work, but just begin with telling our audience a little bit about your background. But what is the Evidence for Justice Lab at Georgetown?
Dr. Andrea Headley (1:17)
So the Evidence for Justice Lab is a research and policy hub whose motto really is putting research into action. We focus on leveraging research and evidence, broadly speaking, both quantitative data and also lived experiences and qualitative data, to really improve the criminal justice system and enhance community safety. We do this in lots of ways, but it's particularly important for us to engage communities, collaborate with local government, and then conduct applied research across a lot of different realms, innovation and technology being one of the key realms in which we do that.
Steve Goldsmith (1:55)
How do you allocate your time between looking at AI fairness, algorithmic fairness, and the like, and the uses of AI to solve a justice problem? One way to think about this is: are we setting up these interventions in a way that is fair? The other is: how can we use AI to make the system fairer? So how do you think about the responsibilities of the Justice AI Tracker and your Justice Lab?
Dr. Andrea Headley (2:27)
It's a great question, and I think there have already been a lot of conversations about the inputs being put into things like AI and different emerging technologies, and how that either impedes or enhances fairness. Where we have been focusing a lot of our time is on the implementation of AI in real time: what we know about how it's being implemented, where it's being implemented, and the potential negative effects or positive implications of implementing it. And then, to your point, we're really trying to identify what we're calling opportunities to improve the justice system for the people who are involved, including both the employees and the community members. For us, that actually started with a different project trying to understand the perceptions people had about the use of AI. We identified a couple of cities, and we started talking to employees across the justice sector, people working in police departments, in courts, and in corrections, and also to community members who were either victims or crime survivors, as well as people who were navigating the criminal justice system themselves, being arrested, having court cases, or being incarcerated. We talked to all these people to understand how they felt about it. But in doing that, we realized there was a big gap in even understanding what AI was being used across the system. So then we came up with the Justice and AI Tracker, really trying to document, across the top 100 cities in the United States, what was being used and where it was being used, and then trying to go into the details of how we can classify different types of use cases.
