Transcript
A (0:01)
Welcome to High Impact Growth, a podcast from Dimagi for people committed to creating a world where everyone has access to the services they need to thrive. We bring you candid conversations with leaders across global health and development about raising the bar on what's possible with technology and human creativity. I'm Amy Vaccaro, Senior Director of Marketing at Dimagi, and your co-host, along with Jonathan Jackson, Dimagi's CEO and co-founder. Today, the hype around AI is inescapable. But as new foundation models race into the development sector, there's a danger that goes beyond just inefficiency. How do we ensure that AI deployed with the best intentions to serve vulnerable communities doesn't ultimately create new risks or reinforce existing inequalities? We're tackling this head-on today with our guest, Genevieve Smith. She's the founding director of the Responsible AI Initiative at UC Berkeley's AI Research Lab. Genevieve spent a decade in international development and reveals why simply calling a project "AI for good" is not enough. She breaks down the five critical lenses, from fairness to transparency, that program managers must use to review their work. Plus, we explore innovative models like data cooperatives, which give communities governance rights over their own data to ensure models truly work for them. Tune in to understand the flawed model of letting businesses self-regulate, and learn practical strategies you can use starting Monday morning to build trust and ensure your technology serves everyone equally. All right, Genevieve, welcome to the podcast. Great to have you here.
B (1:36)
Great to be here. Thanks for having me.
C (1:38)
Thanks for coming on. We're really excited to have you.
A (1:40)
So you are the founding director of the Responsible AI Initiative at the UC Berkeley Artificial Intelligence Research Lab, among many other things. And you also have a deep background, from what I've seen, working with organizations like the UN Foundation and USAID. We'd love to start by just hearing a bit of your journey. How did you move from the world of international development into this specific niche of responsible AI?
B (2:07)
Yeah, absolutely. So the first decade of my career, really, as you said, Amy, was in international development. Early on, I worked with the UN Foundation for several years on their gender strategy for access to clean cooking technologies. I did my master's in development practice at Berkeley, and afterwards I served as a researcher at UN Women on women's economic empowerment and digital inclusion. Then I lived in D.C. for a couple of years, working at the International Center for Research on Women, conducting research on inclusive technology and economic empowerment more broadly. Around 2019, I returned to UC Berkeley and became the research director for the Center for Equity, Gender and Leadership at the business school at Haas. That's where I really started to dive into research on AI bias. The research agenda that I set out for the organization was very much focused on uncovering and examining bias within AI technologies, which at that time was still underexplored. A lot of the research around bias in AI systems tended to be focused on the West, on Europe and the U.S., and wasn't really global in nature, despite the proliferation of AI technologies at the global level, including across many low- and middle-income countries. What I was starting to see was that applications of AI in low- and middle-income countries were often categorized as AI for good, but really lacked critical evaluation of some of the unintended consequences that we know these tools can have, consequences documented by research that was occurring, oftentimes, in the West. And so I went back to school to get my PhD, doing my doctoral work at Oxford studying AI systems and their social implications, specifically looking at machine learning-based credit assessment tools that are used to enhance financial inclusion of people in low- and middle-income countries.
I also founded the Responsible AI Initiative at the UC Berkeley AI Research Lab, as I saw there tended to be a lack of multidisciplinary research around these topics. This type of research in the responsible AI space, covering bias but also other kinds of responsibility risks and issues, really benefits from bringing together different disciplinary backgrounds to understand the problem. So the founding of that initiative was really about bringing together researchers from across different disciplines to conduct research on these critical topics related to responsible AI, and doing so from a global lens. We did a lot of partnerships with different organizations, including tech companies, around different responsible AI topics, and the initiative has really taken off at the Berkeley AI Research Lab, or BAIR, on campus. I also serve now as professional faculty at Haas, where I teach responsible AI to undergrads, graduate students, and executives, thinking about how we can really leverage this technology in ways that get ahead of potential risks and can ultimately embed and foster trust that supports human flourishing more broadly. And I'm also doing a fellowship at Stanford focused on gender and AI.
