Humanitarian Frontiers: 3Ps—Policy, Product, Pragmatism: You Only Know What You Know
Podcast: Humanitarian Frontiers
Episode Theme: Exploring How AI and Edge Tech are Transforming Global Aid
Host: Chris Hoffman (with co-host Nassim Motelabi)
Date: April 30, 2025
Featured Guests:
- Sabrina Shi (Former AI Policy Manager, Responsible AI Institute)
- Hadassah Drewkarch (Former Director of Policy, Responsible AI Institute)
- Gayatri Jal (Director of Consumer Innovations, Dimagi)
- Jigyasa Grover (AI/ML Lead; experience with LinkedIn, Google, and more)
Episode Overview
This episode of "Humanitarian Frontiers" brings together practitioners and thinkers to examine the three Ps—Policy, Product, Pragmatism—at the heart of AI’s deployment in humanitarian spaces. The discussion zeroes in on how rapidly evolving AI affects vulnerable communities, the shifting ethics of responsible tech, the true meaning of "human in the loop", and the critical trade-offs between speed, cost, and contextualization. Throughout, the panel interrogates both the opportunities and the limitations of using AI-driven tools to serve those most in need.
Key Discussion Points & Insights
1. The Velocity of AI Innovation and Humanitarian Impact
- Chris introduces the urgency: the pace of AI evolution makes it hard for nonprofits and humanitarian actors to keep up.
- Jigyasa (03:13): Emphasizes the life-altering consequences of decisions made (or delegated) in crises. The challenge is building trust in AI where human feedback loops are not feasible due to time constraints.
"In those situations, ...the delegation of all of those decisions and those vulnerabilities to AI can affect and impact lives of people. ...How do you build trust?" (E, 03:13)
2. AI Guardrails: Between Hype and Grey Zones
- Chris references industry giants (e.g., Palantir) framing AI as universally applicable—even in high-stakes scenarios (missile targeting, humanitarian logistics).
- Hadassah (04:38): Describes the double-edged nature of AI, stressing how hard it is to define "no-go" zones and how context-specific guardrails must be.
"I see AI very much as a double-edged sword. ...It's the grey zone in between that is oftentimes very hard to really kind of identify." (04:38)
3. Human-in-the-Loop: Definitions and Depth
- Nassim questions the overuse (and possible dilution) of “human-in-the-loop”.
- Sabrina (06:37): Explains that "human in the loop" means different things depending on context, system autonomy, and the pace of decisions. In high-autonomy settings, checking every decision isn't feasible; systems may instead need periodic or flag-based human intervention (see the sketch at the end of this section).
"When we say human in the loop, we're talking about accountability and checks in the process." (G, 08:15)
- Sabrina argues a nuanced approach is key: distinguishing "AI as support" from "AI as decider."
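To make "flag-based" intervention concrete, here is a minimal sketch of what such a checkpoint might look like in code. It is an illustration only: the threshold, labels, and routing targets are assumptions, and a real deployment would tune them per decision type and risk level.

```python
from dataclasses import dataclass

# Illustrative threshold, not a recommendation; in practice it would be
# tuned per deployment and per the stakes of the decision.
REVIEW_THRESHOLD = 0.85

@dataclass
class ModelDecision:
    case_id: str
    label: str         # e.g., "eligible" / "not_eligible"
    confidence: float  # model-reported score in [0, 1]

def route(decision: ModelDecision) -> str:
    """Flag low-confidence outputs for human review instead of
    auto-applying them: one concrete reading of flag-based
    human-in-the-loop checks in a high-autonomy pipeline."""
    if decision.confidence < REVIEW_THRESHOLD:
        return "human_review"  # a person stays accountable for hard cases
    return "auto_apply"        # routine cases proceed without manual checks

# A borderline case gets flagged rather than silently decided by the model.
print(route(ModelDecision("case-001", "eligible", 0.62)))  # -> human_review
```

The sketch mirrors Sabrina's framing: the human is positioned as an accountability check in the process, not a rubber stamp on every output.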
4. Voice of Affected Populations: From Passive Recipients to Partners
- Nassim (09:22): Calls for a sector-specific conversation on integrating affected populations not just as users but as contributors/input in the AI lifecycle.
- Gayatri (10:48): Acknowledges inherent inequities. Describes fast prototyping and user feedback (even via simple forms/chatbots) as ways to gather insight early—but notes creation often begins detached from affected communities’ perspectives.
"The process ...is inherently inequitable because I am coming up with the idea. ...Who am I to come up with the idea? I am not part of the affected population." (D, 10:54)
5. What Is Responsible AI in Humanitarian Practice?
- Chris (12:24): Poses the ethical question: is deploying first and then seeking feedback truly responsible, or is co-design from the outset the only valid way?
"Are we being unethical by doing that? ...Responsible AI is going to the population first, asking them, and then building something that addresses their needs." (13:14)
- Hadassah (14:20): Context matters. In humanitarian settings, urgency means ideal participatory models are often not feasible, even if desirable.
"When you're looking at the humanitarian context... the dynamics of developing and then ultimately implementing technology is slightly different. ...It makes it very hard to involve affected parties from the outset." (15:06)
- Jigyasa (15:48): Advocates for participatory approaches, seeing affected people as partners rather than mere subjects.
"There can be a lot of transformative shift when we stop seeing a lot of populations as subjects of AI systems and... start recognizing them as partners in the creation of those systems." (15:51)
6. Contextualization vs. Scale: The Technical and Practical Trade-offs
- Sabrina (16:53): Highlights the tension between the AI industry's move toward generalizable systems (deep learning, few-shot learning) and the need for context-specific solutions for marginalized groups.
"When we're talking about humanitarian problems, which are very unique... the idea is let's build a general understanding... But there are problems with that. ...You're trying to apply it to a marginalized or underrepresented population." (17:38)
- Gayatri (19:27): Shares real-world limitations: top AI models underperform in under-resourced languages, and practical workarounds like few-shot prompting don't always bridge the gap (see the sketch below).
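As a rough illustration of the workaround Gayatri describes, the sketch below assembles a few-shot prompt for classifying messages in an under-resourced language. The example utterances, labels, and the commented-out `call_model` function are hypothetical placeholders; the panel's caveat stands, in that even a well-built prompt underperforms when the base model has seen little of the target language.

```python
def build_few_shot_prompt(examples, query):
    """Build a few-shot classification prompt from labeled examples.
    `examples` is a list of (utterance, label) pairs in the target language."""
    lines = ["Classify each message as: health_question, logistics, or other.", ""]
    for utterance, label in examples:
        lines += [f"Message: {utterance}", f"Label: {label}", ""]
    lines += [f"Message: {query}", "Label:"]
    return "\n".join(lines)

# Hypothetical labeled examples in a low-resource language (placeholders).
examples = [
    ("<utterance asking about clinic hours>", "health_question"),
    ("<utterance about a supply delivery>", "logistics"),
]
prompt = build_few_shot_prompt(examples, "<new user message>")
# response = call_model(prompt)  # hypothetical model call; output quality
#                                # still hinges on the model's language coverage
```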
7. Decision-Making Power: Who Defines ‘Responsible’ and ‘Safe’?
- Gayatri (19:27): Raises the pivotal question: who defines safe, accurate, responsible AI? The ideal is democratic, equitable input, but reaching this in practice is elusive.
"It's almost like we're proposing some kind of democratic structure ...that takes everyone's opinions ...to come to some kind of framework ...to ...apply to the AI solutions that we make. ...I agree with the principle... I don't know how we're going to get there." (20:09)
- Jigyasa (20:42): Suggests assembling multidisciplinary, culturally competent teams (including sociologists and local experts) to help contextualize both models and evaluation processes (see the sketch below).
"We need teams that are diverse and they come from different parts of the world... so that along with developing the model, we need context-specific testing regimes." (21:17)
8. Colonial Perspectives and Commodification of AI
- Nassim (22:05): Reflects on AI as the new global commodity, raising questions about whose values drive "responsible AI" and how tech can perpetuate or resist colonial dynamics.
"How much of this drive around commoditization of AI... is fueled by these colonial perspectives?" (22:15)
- Gayatri (25:32): Returns to first principles of communication; if both users and creators struggle to define what AI is, how can needs assessments be valid or useful?
"Do the people, the affected groups ...understand what AI is? Do we fully understand what AI is?" (25:32)
9. The Trilemma: Cost, Speed, and Contextualization
- Chris (25:59 & 28:05): Cost and urgency are at odds with the true contextualization required for responsible deployment. Out-of-the-box solutions are tempting, but real contextualization increases costs, a challenge for cash-strapped NGOs.
"Doing something responsibly, which would be ...appropriatizing these things ...that's very costly. ...If everybody wants something that works globally for them, they've got to understand that that's probably seriously irresponsible..." (27:40)
- Sabrina (28:42): Cost is not just an empirical metric; it reflects how different stakeholders perceive value, risk, and problem framing. In humanitarian settings, the needs and perceptions of the "actual customer" (the end user or affected party) must be prioritized over those of whoever simply holds the budget.
"Cost is less of an empirical metric than we think... the actual customer ...are those who will be affected by the system." (28:42)
- Jigyasa (32:29): Notes that new technical approaches (fine-tuning, open-source models, edge deployment) are making contextualization more feasible, even under budget constraints; see the sketch after this list.
- Gayatri (33:56): Shares the NGO perspective, with practical examples of how deployment, improvisation, and partner costs all figure into "cost" calculations.
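As one concrete instance of the techniques Jigyasa mentions, the sketch below applies parameter-efficient fine-tuning (LoRA) to an open-source model with the Hugging Face `peft` library. The checkpoint name and hyperparameters are placeholders, not recommendations; the point is that adapter-style fine-tuning trains only a tiny fraction of the weights, which is part of what makes contextualization plausible on NGO budgets.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Placeholder open-source checkpoint; substitute a model that fits the
# budget and, ideally, has some coverage of the target language.
BASE = "org/small-open-model"
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# LoRA trains small adapter matrices instead of all model weights, so a
# contextualized variant can be trained on a single modest GPU.
config = LoraConfig(
    r=8,                                  # adapter rank (illustrative)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # module names vary by architecture
    lora_dropout=0.05,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of parameters
```

Because the trained adapters are typically megabytes rather than gigabytes, they can also travel with a compressed base model toward the edge deployments mentioned above.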
10. Scalability vs. Context: Measuring Success in Humanitarian AI
- Nassim (35:56): Humanitarian ROI is not purely about scale; sometimes context or particularity trumps reach or speed. Corporate ROI logic (maximize reach and scale) doesn't always map onto the humanitarian context.
Notable Quotes & Memorable Moments
| Timestamp | Speaker  | Quote |
|-----------|----------|-------|
| 03:13     | Jigyasa  | "How do we build that trust? ...when you don't have a lot of time for human in the loop..." |
| 04:38     | Hadassah | "The grey zone in between is...very hard to really kind of identify..." |
| 08:15     | Sabrina  | "When we say human in the loop, we're talking about accountability and checks in the process." |
| 10:54     | Gayatri  | "Who am I to come up with the idea? I am not part of the affected population..." |
| 13:14     | Chris    | "Responsible AI is going to the population first, asking them, and then building something..." |
| 15:51     | Jigyasa  | "Transformative shift ...when we ...start recognizing them as partners in the creation..." |
| 17:38     | Sabrina  | "Trying to apply [generalized models] to a marginalized or underrepresented population..." |
| 20:09     | Gayatri  | "Who decides what is responsible and who decides what is safe and who decides what is accurate?" |
| 21:17     | Jigyasa  | "We need teams ...from different parts of the world ...alongside these technical developers." |
| 22:15     | Nassim   | "...how much of this drive...is fueled by these colonial perspectives?" |
| 25:32     | Gayatri  | "...do they understand what AI is? Do we fully understand what AI is?" |
| 27:40     | Chris    | "...if everybody wants something that works globally... that's probably seriously irresponsible..." |
| 28:42     | Sabrina  | "...those who will be affected by the system, are the ones who decide. And that's not in a money context." |
| 33:56     | Gayatri  | "...multiple different costs involved. ...our partner organizations are spending money to bring frontline workers ...to try out this chatbot." |
| 39:02     | Jigyasa  | "AI is nothing but an enabler of humanity... if we think of AI as ...a coworker..." |
| 41:19     | Hadassah | "It's people, process, and then tech. ...We need to better understand ...other pieces of the puzzle before we talk about tech." |
Timestamps for Important Segments
- [03:13] Trust & Decision-Making in High-Velocity Crisis Response
- [04:38] Building Guardrails: Defining No-Go Zones & Grey Areas
- [06:37] Human-in-the-Loop: What Does It Actually Mean?
- [10:48] Including Affected Populations: Where Does Their Voice Enter?
- [12:24] Responsible AI: Product-Led vs. Participatory Design
- [15:48] Partners not Subjects: Rethinking Community Engagement
- [16:53] AI Development Trade-offs: Scale, Context, and Marginalized Groups
- [19:27] Equitable Decision-Making & The Challenge of Frameworks
- [22:05] Colonialism & Commoditization of AI
- [25:59] Communicating AI: Bridging Knowledge Gaps with Affected Groups
- [27:40] Cost, Speed, and Contextualization Trilemma
- [28:42] Reframing Cost: Value Beyond Money
- [33:56] Practical Costs and Real-World Challenges
The Future of AI in Humanitarian Work: Panel Hopes and Recommendations
[38:03–43:19] Roundtable: Where Should AI Take Us Next?
- Jigyasa: AI as an "enabler of humanity," amplifying what needs human insight while freeing time for other tasks, though vigilance is required about who controls it and how it is used.
"If we think of AI as not a replacement but as like a coworker...that is where I see AI moving towards." (39:02)
- Sabrina: Envisions a shift in process: putting local champions in charge and applying rapid prototyping cycles not just to the tech itself but to "how" it gets built.
"If there's any space in the world, where there's an opportunity for us to rework the way that we think about building solutions that are meant to actually help people..." (40:41)
- Hadassah: "People, process, then tech": an insistence on not letting AI override the fundamental importance of people and tailored processes.
"We need to better understand where those other puzzle pieces fit in before we talk about that tech piece..." (41:41)
- Gayatri: Echoes the call for local leadership in tech and stresses the need to consider barriers such as marginalized groups' access to technology (e.g., girls' access to mobile phones).
"We can make the most gender intentional tool, but if not that many people can access it ...then that's probably not helpful." (44:03)
Tone and Takeaways
The conversation is open, humble, sometimes self-critical, and leans toward action without shying away from complexity. Speakers balance optimism (“AI as enabler of humanity”) with caution about over-promising, colonial tech narratives, and the risk of bypassing local expertise. There’s a clear push for participatory, context-sensitive, and value-driven development—framing technology as the last piece of the puzzle, not the first.
In short:
- AI offers immense promise for humanitarian aid—but only if it is deployed responsibly, with affected populations as true partners, not just end-users.
- Responsible AI in this context must balance speed, context-awareness, cost, and local voice.
- The community needs new processes, humility around what we know (and don’t), and sustained attention to equity—not just new tools.
