Transcript
A (0:02)
Welcome to Humanitarian Frontiers in AI, the podcast series where innovation meets impact. In each episode, we dive deep into how artificial intelligence is reshaping the future of humanitarian work. From enhancing crisis response to making aid delivery smarter and more effective, AI is opening new doors in the way we support communities in need. In this series, hosts Chris Hoffman and Nassim Motelabi bring you thought leaders from academia and the tech industry to discuss not only the vast opportunities AI offers, but also the ethical considerations and risks we all must navigate. Join them on this journey as they explore AI's potential to transform lives and address humanity's most pressing challenges.
B (0:48)
All right, well, welcome, everyone. It's nice to have you here today. This is such a distinguished panel. Unfortunately, Nassim can't be here with us. She is gallivanting. No, that's not true. She is at work in the Dominican Republic and she's going to try and pop in, hopefully sometime throughout the call. But if she doesn't make it, she's given me full autonomy to ask whatever I like this time, because she's usually in control of the questions. So we'll see how this goes. It's Chris Hoffman back here, episode three of the Future of AI and Humanitarian Action. And we've got, like I said, a distinguished panel here. So let's just jump in. I want to ask the first question. I got challenged the other day by a university professor here in the Netherlands, and he said, I don't think we should be talking about the ethics of AI use. We should be talking about the human component of this instead. So how do we as humans apply our ethics in the way that we design the tools that AI is, or AI becomes, or that AI facilitates in terms of action? He was saying, you know, everybody's talking about ethics in AI, and he said, I don't think that's what it should be. So, Emily, I want to start with you. Where does this ethics question start? Where does it find its genesis? And then where do you think it goes after that?
C (2:03)
Thanks for the question, Chris. So I'm actually going to say something a little provocative, which is, I actually think AI ethics is passé. What I think we should be talking about is responsible AI. Because if we look over the last several years, and this gets at your question about the genesis of AI ethics, I think suddenly people were like, oh, wait, this thing that people talked about as being objective and neutral actually isn't objective and neutral, and we need to start thinking ethically about what the appropriate use of this is in society. And that was an amazing conversation. It gave us a lot of interesting things to work with and think about. But what I'm really excited about right now is that I feel like the international community is coalescing around responsible AI, because ethics are only good if they're actually operationalized. And I'm really excited because I feel like we're at a point where we're starting to talk about implementing some of the things that the conversation around AI ethics drives us towards. So how can we think about responsible AI, trustworthy, secure, safe AI? How can we think through risk, harm, risk mitigation? All of those force people to think a lot more, to this professor's point, about human action. I think we often forget that actually we're in charge. The way that tech products get produced isn't unique to AI necessarily, but there are all sorts of ways in which human decisions get embodied into the tech product that gets out there, and we forget that it is those decisions. So when we see a racist or sexist app, or an AI system, I should say, we can usually follow that back to decisions that were made by people at companies or at organizations. And so I get really excited about responsible AI, because I think that is really the mechanism by which we can make good on ethical AI. Mala, I'd love to hear what others think.
