Podcast Summary: Humanitarian Frontiers
Episode: "Ethics and Responsibility from 30,000 Feet"
Host: Chris Hoffman
Date: January 20, 2025
Main Theme and Purpose
This episode of "Humanitarian Frontiers" examines the nuanced challenges of ethics and responsibility when deploying AI in humanitarian contexts. Chris Hoffman convenes a panel of experts: Emily (AI literacy and social impact educator), Mala (AI safety and tech-for-development specialist), and Suzy (researcher on AI in the Global South). The discussion centers on moving beyond "AI ethics" toward operationalizing responsible, inclusive AI, and ranges from funding gaps, global bias, and transformative infrastructure to AI literacy, participatory development, and the hard questions of accountability.
Key Discussion Points and Insights
1. Redefining Ethics: From “AI Ethics” to Operational Responsibility
- Responsible AI vs. AI Ethics
- Emily argues that “AI ethics is passé,” contending that “responsible AI” is where the field must focus now, as only ethics that are operationalized have impact (02:03).
- She emphasizes, “We forget that actually we're in charge… the way tech products get produced isn't unique to AI necessarily, but there are all sorts of ways in which human decisions get embodied into the tech” (03:23).
- Beyond Compliance to Social Good
- Mala distinguishes "responsible AI," which is usually about compliance (avoiding legal pitfalls), from "AI for social good," which actively seeks to promote positive rights such as public health, education, and gender equity (04:30).
- She underscores the sector’s challenge: “These are very complicated things to do, and they're also quite expensive… one challenge is we might be putting something that's inherently problematic and introducing biases that we can educate humans not to have” (05:51).
2. Bias, Inclusion, and Local Participation in AI Development
- Research from the Global South
- Suzy presents findings from CARE International/Accenture’s research on Global South perspectives (09:36), highlighting:
- Risks of exacerbating marginalization due to digital divides
- The need for participatory and inclusive “AI built with, not for”
- Tension between AI for efficiency and AI for inclusivity and safety
- Importance of ongoing, meaningful participation, not tokenism
- Quote: “If we talk more about inclusive AI and participatory AI, then it’s not so much we’re building for, but we’re building with and together” (09:02).
- Token Participation and “Participation Washing”
- Emily cautions against superficial inclusion: “Doing one focus group with eight people and then saying we had a participatory design process—that is not actually participatory AI. …We need to set the bar really high” (20:41).
3. Accountability: Who Takes Responsibility?
- Complexity of Responsibility
- Chris raises the tough question of accountability when AI does harm (13:57): “When we deploy something in a health context, who ends up being responsible for that?”
- Mala highlights that troubleshooting AI failures in humanitarian contexts is "really, really complicated," especially because the foundation models underlying most deployments are rarely built or controlled by implementing organizations (14:59).
- Donor Funding and Systemic Gaps
- Emily points to systemic funding limitations: “There’s such a trend to just fund the pilots… we are not funding and donors don’t seem to be aware of how much work actually has to go into making something that is fit for purpose” (18:02).
- “AI is anything but plug-and-play, especially in social impact. …We need to reconceptualize how we approach these as builds” (19:06).
4. The Necessity of AI Literacy
- Upskilling Across Organizations
- Emily: “There is so much misunderstanding around what AI is and how it operates… We need to change the culture of being able to say, ‘hey, wait, let’s slow down here’” (25:53).
- She emphasizes the need for non-coders to participate: “There is such a big role for non-coder individuals to play on an AI build, but they have to have the AI literacy in order to understand how…” (28:37).
5. Pilots, Infrastructure, and a Call to “Slow Down”
- Cautious Progress and Strategic Foundations
- Mala and Emily favor slowing the pace—not a total halt, but a shift in focus to infrastructure building. “Don’t do it until you’re ready… I never advocate for deploying a kind of technology that you fundamentally do not understand” (31:22, reinforced at 54:05).
- Emily recommends focusing “the next couple of years on building the infrastructure for the future”—not just models, but localized data, strong governance, and broad literacy (38:18).
- Danger of FOMO (Fear of Missing Out)
- Suzy: "NGOs and humanitarian, big humanitarian aid agencies are suffering from massive FOMO and there's huge kind of panic to get on the bandwagon… That is being kind of more and more understood now" (22:53).
6. Global Power Dynamics and the Future Vision of AI
- Which Model Will Prevail?
- Mala notes differing global approaches: "Right now we're very dominated by Silicon Valley… But in Singapore and India… and sub-Saharan Africa [community-driven AI approaches are emerging]."
- She warns that the humanitarian/development sector may not "shape these forces because they are just so large and complicated" (36:28).
7. Practical Pathways for Humanitarian Organizations
- Four Critical Pathways for Amplifying Civil Society Inclusion (Suzy, 47:01):
- AI Literacy at all levels—community, government, organizations, tech companies
- Participation: “Meaningful localization” and decolonizing not only aid, but AI as well
- Advocacy: Amplify Global South voices on desired AI outcomes & risks
- Digital infrastructure and data governance — privacy, consent, sovereignty
Notable Quotes & Memorable Moments
- Emily [02:03]: "I actually think AI ethics is passé... I'm really excited because...we're starting to talk about implementing some of the things that the conversation around AI ethics drives us towards."
- Mala [04:30]: "What they were really focusing on was the idea of negative rights...But a lot of what we focus on in international development...is related to economic, social, and cultural rights, which are positive rights."
- Suzy [09:02]: "If we talk more about inclusive AI...it's not so much we're building for, but we're building with and together."
- Chris [13:57]: "If disease spreads in a disaster because somebody built the toilets wrong, who are they going to sue? ...Now we're looking at something that's even more intricate with technology."
- Emily [20:41]: "Doing one focus group with eight people and then saying we had a participatory design process—that is not actually participatory AI."
- Mala [31:22]: "I never advocate for deploying a kind of technology that you fundamentally do not understand."
- Chris [51:53]: "What we're talking about today is really running an F1 team. I mean, it still has four wheels, but otherwise there's really nothing that's similar to the way that we've been doing work for the last 70 years."
- Emily [54:11]: "Default AI won't get us there, but responsible, inclusive AI, I believe, can."
- Suzy [54:58]: "Do slow down massively unless you already have your kind of ethical and governance frameworks in place...And just to remember we are thinking and talking so much about decolonization of aid. AI cannot be left out of that conversation."
Timestamps for Key Segments
- [02:03] – Emily on why “AI ethics” isn’t enough
- [04:30] – Mala distinguishes responsible AI from AI for social good
- [09:36] – Suzy introduces CARE International’s research and its implications
- [13:57] – Chris poses the tough accountability question
- [20:41] – Emily on real vs. token participation
- [31:22] – Mala: “Don’t deploy what you don’t understand”
- [38:18] – Emily: Invest in infrastructure and AI literacy
- [47:01] – Suzy’s four critical pathways for civil society inclusion
- [54:05–54:58] – Each guest summarizes their “big ask” for humanitarian leaders
Flow & Takeaways
The conversation stays frank, practical, and at times urgent. The panelists agree that piecemeal pilot projects and “default” AI risk reinforcing existing inequalities and failing to address core social impact goals. Instead, they call for an era of deliberate, inclusive, well-resourced, and participatory AI—insisting that meaningful change requires broad infrastructure-building, upskilling, and sector-specific standards rooted in human rights, not just compliance.
The bottom line:
AI in humanitarian work is not plug-and-play. Excellence and safety demand slowing down, building capacity, sharing power, and insisting on ethical frameworks you can actually operationalize.
Actionable Recommendations (“Big Asks” – [54:05])
- Mala: “Don’t do it until you’re ready. Simply enough.”
- Emily: “Slow down… Default AI is not going to lessen inequality. Responsible, inclusive AI—well funded and carefully thought out—might.”
- Suzy: “Slow down massively unless you already have your kind of ethical and governance frameworks in place and have thought through how to operationalize them. AI cannot be left out of the decolonization conversation.”
This summary distills thoughtful, candid commentary on the urgent need for a new paradigm of ethics, responsibility, and inclusion in humanitarian AI. The panelists challenge NGOs and the sector at large to move past hype and FOMO, develop AI that is truly fit for purpose, and put community voices and global justice at the center of every technological advance.
