Podcast Summary: Humanitarian Frontiers
Episode: Strategic Implications of AI in Humanitarian Work
Host: Chris Hoffman, Co-host Nassim Motalebi
Date: December 19, 2024
Panelists:
- Nick Thompson (CEO, The Atlantic)
- Hovig Etyemezian (UNHCR)
- Michael Chave (Former Microsoft, Humanitarian AI Consultant)
Episode Overview
This inaugural episode of "Humanitarian Frontiers in AI" presents a robust and nuanced exploration of how artificial intelligence (AI) is changing the humanitarian sector. Moderator Chris Hoffman, co-host Nassim Motalebi, and a distinguished panel discuss strategic, ethical, and operational questions about AI for aid organizations—balancing optimism about AI’s promise with concern for practical risks, implementation pitfalls, and the need for community inclusion.
Key Themes & Discussion Points
1. The Turning Point for AI in Society
[04:22] Nick Thompson’s Perspective:
- The critical acceleration of AI started around 2015-2016 with large advances in cognitive tasks; the launch of ChatGPT in December 2022 was a watershed moment, representing “about 10 years of progress in one year.”
- AI now routinely exceeds human capabilities in some domains, but rapid development risks outpacing society's preparedness to manage it.
“We have these incredible machines, [...] that, for better or worse, mimic human intelligence in many domains, and that are accelerating rapidly.” – Nick Thompson [05:20]
2. Adoption & Hesitancy in the Humanitarian Sector
[07:32] Chris Hoffman, Michael Chave & Hovig Etyemezian:
- Internal tools vs. beneficiary-facing AI: Humanitarian organizations are more comfortable using AI for internal analytics than for direct beneficiary engagement, citing higher stakes and ethical concerns.
- Healthy caution is warranted: The sector is rightly skeptical, especially given AI’s potential for error, but must also “lean in” to new opportunities, maximizing benefit while minimizing harm.
“The healthy skepticism I think I have seen primarily [...] is good. But it’s also, I think it’s important that the sector at large finds a way to lean in on the potential of what AI can provide.” – Michael Chave [07:51]
3. Do No Harm & Community Involvement
[09:55 & 14:13] Hovig Etyemezian:
- Emphasized the need to adapt the ‘do no harm’ principle for digital tools and AI, especially ensuring affected populations are treated not as test subjects but as co-designers.
- Shared an example: a refugee-led organization in France using AI to expand banking access, showing the value of community-driven innovation.
“How involved are the people we serve in this process? [...] It’s important distinction to make especially when we’re talking about the use of AI and technologies to improve humanitarian work and refugee space facing solutions.” – Hovig Etyemezian [11:07]
4. Cost of Error & When to Involve Humans
[14:13-19:36] Michael Chave, Nick Thompson & Hovig Etyemezian:
- Not all AI failures have equal consequences: getting a movie recommendation wrong is minor, but AI errors in landmine detection or refugee status determination can be catastrophic.
- Always keep a “human in the loop” for high-consequence decisions; focus AI use where it augments rather than replaces critical human judgment.
“If there is significant negative consequence of acting on an AI recommendation without human interaction, then that’s where you need to keep humans in the loop.” – Michael Chave [16:21]
5. Testing Laboratory Dilemma & Needs Assessments
[19:36-21:41] Nassim Motalebi:
- The humanitarian space often serves as a test bed for unproven technologies, without clear evidence of value to affected communities.
- Organizations tend to favor AI deployments that increase internal efficiency rather than directly improving beneficiary outcomes.
“The adoption of AI in the humanitarian space [...] is flourishing [...] because we know what we want as organizations, which is efficiency, which is targeting.” – Nassim Motalebi [20:30]
6. Risk of Being Cautious vs. Need for Innovation
[21:41-22:53] Nick Thompson:
- Encouraged the sector to be proactive and innovative with AI, citing the risk of leaving “powerful tools in the hands of everybody else, including [...] entities causing the humanitarian problems.”
7. AI, Disinformation, and the Need for Expertise
[23:57] Nick Thompson:
- AI-powered bots can be transformative for information access but pose risks of misinformation if not expertly managed.
- Emphasized the necessity of training, verification, and error-checking for humanitarian chatbots.
“They absolutely hallucinate and they sometimes give you false information [...] so the trick would be having people from the humanitarian world who are well versed and can, for example, make sure that there is a ...bot that checks for common hallucinations or errors.” – Nick Thompson [24:08]
8. Patience & Process in Solution Design
[25:29] Hovig Etyemezian:
- Cited the importance of rigorous project design, stakeholder inclusion, and patient iteration, rather than simply rolling out technology for technology’s sake.
- UNHCR’s innovation funds only accept ~2% of proposals after in-depth vetting.
9. Job Displacement, Efficiency, and Unintended Consequences
[29:08-37:59] Panel Discussion:
- Greater efficiency from AI does not always mean fewer jobs; in some cases, AI enables resource scaling, resulting in increased demand and more work.
- The expansion of translation capabilities, for example, made it possible for UNHCR to receive thousands more applications—and resulted in more human workload, not less.
- However, panelists cautioned that the rationale for an intervention (cost savings vs. better service) will shape its outcomes.
“I agree that the verdict is not out yet, but I think [...] if communities are involved in the design of the solutions, then you have a different game.” – Hovig Etyemezian [33:15]
10. Lingering Limitations: Localization & Resource Needs
[37:59] Michael Chave:
- Generative AI works well in a handful of languages; for many local languages, accuracy falls off, deepening global inequities.
- There is a substantial need for local capacity building, technical infrastructure, and for focusing resources on evaluation and sustainable deployment.
“Once you get, let’s say, past 10 languages, the quality, the accuracy, the value of modern AI really starts to drop. [...] So that has the risk of further deepening inequities across the world.” – Michael Chave [37:59]
11. Private Sector Partnerships & Capacity Building
[40:06-45:02] Nassim Motalebi, Hovig Etyemezian, Michael Chave:
- Costs of building and evaluating AI remain high, but the proliferation of open-source models (e.g., Hugging Face, Llama) could reduce expenses and increase access for humanitarian organizations.
- Private sector collaboration works best when companies share expertise and adapt to humanitarian needs, rather than simply pushing products.
“There is a space for us to partner up with private sector companies who are willing to work with us on the challenges we have identified and work together on the solution.” – Hovig Etyemezian [42:00]
“I will always learn something [...] about the technology that we create. Learning about its limitations, [...] that we can then bring back to the drawing board, bring back to the product teams.” – Michael Chave [43:30]
Notable Quotes & Memorable Moments
- On Caution: "It’s imperfect, it will always be imperfect." – Michael Chave [07:40]
- On Community Involvement: "We continue making mistakes on testing technologies on people as opposed to finding a challenge with people and use those technologies with them, not on people." – Hovig Etyemezian [11:00]
- On AI Democratization: "I wish for AI to actually equalize the knowledge space." – Nassim Motalebi [47:05]
- On Quick Wins: "Translation between two people who speak rare languages." – Nick Thompson [47:24]
- On Frustration: "My frustration with our sector is the fear of missing out driving decision making..." – Hovig Etyemezian [47:49]
Timestamps for Important Segments
- [04:22] – Turning point for AI: historical context and recent breakthroughs (Nick Thompson)
- [09:55] – Do no harm in a digital world & community-centered design (Hovig Etyemezian)
- [14:13] – Cost of error and “human in the loop” practices (Michael Chave)
- [19:36] – Humanitarian sector as a test bed for tech & importance of needs assessment (Nassim Motalebi)
- [21:41] – Innovation call: using AI to solve meaningful sector challenges (Nick Thompson)
- [23:57] – Risks of misinformation in AI-powered tools (Nick Thompson)
- [25:29] – Example of failed tech intervention: prosthetic 3D printers in camps (Hovig Etyemezian)
- [29:08] – Efficiency vs. job displacement debate (Full panel)
- [37:59] – Language equity & AI localization concerns (Michael Chave)
- [40:06] – Costs, open-source models, and future access (Nick Thompson)
- [45:02] – Strategic recommendations: risk, design, and pragmatic adoption (Chris Hoffman)
- [47:05-48:14] – Rapid fire wishes, wins, frustrations, and next steps (Panel round)
Takeaways & Strategic Guidelines
- Balance innovation with ethical, community-centered design.
- Prioritize inclusion: AI interventions must be co-created with affected communities, not imposed upon them.
- Risk management is critical: Not just for technical error, but for broader social, economic, and linguistic impacts.
- Leverage partnership: Private sector expertise is invaluable, but incentives and expectations must be aligned and adapted to humanitarian contexts.
- Build capacity deliberately: Open source and localization efforts are essential for equitable benefit from AI.
- Strategy comes before technology: Deploy for real problems, not just because AI is available.
- Be patient and rigorous: Don’t rush—successful interventions require time, evaluation, and iteration.
Conclusion
This episode sets an ambitious yet grounded foundation for exploring AI’s role in humanitarian work—highlighting crucial considerations for designing ethical, community-driven, and sustainable AI strategies while interrogating both the potential and the pitfalls of the technology. The discussion emphasizes that AI’s impact in the sector will be shaped as much by careful process, inclusion, and adaptability as by technical capabilities.
Panel’s Rapid Fire Wrap-Up
- Nassim Motalebi: "I wish for AI to actually equalize the knowledge space." [47:05]
- Nick Thompson: Quick win—AI-powered rare language translation [47:24]
- Hovig Etyemezian: Frustration—FOMO driving decisions over real needs [47:49]
- Michael Chave: Get started—talk to people internally about AI; awareness leads to better use [48:14]
