Humanitarian Frontiers in AI – "Aid and Algorithms: Demystifying LLMs"
Podcast: Humanitarian Frontiers
Host: Chris Hoffman
Guests: David Master (AI Specialist, AWS), Scott Turnbull (Founder, Techtavern; Ex-CTO, Data Friendly Space)
Date: February 25, 2025
Episode Overview
This episode plunges deep into the realities and myths surrounding large language models (LLMs) and machine learning (ML) in the humanitarian sector. Host Chris Hoffman guides a panel discussion with David Master and Scott Turnbull, who bring practical insight from both the tech and nonprofit worlds. Their dialogue unpacks the technical, strategic, and ethical dimensions of deploying AI for global aid—moving from foundational explanations to advanced use cases, risk analysis, and the future landscape of humanitarian AI.
Key Discussion Points and Insights
1. Demystifying LLMs: What Are They and How Do They Work?
- LLMs Explained in Simple Terms
- David Master (03:02):
“An LLM is just a model that understands language patterns. It doesn't know anything—it just predicts the next word in a sequence from what it's seen in vast data.”
- LLMs do not “know” facts; they generate plausible-sounding text by learning language patterns, not truths.
- Critical understanding needed: knowing how to prompt and interpret outputs, and recognizing "hallucinations," where models present confident but incorrect information (see the toy sketch below).
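To make "it just predicts the next word from what it's seen" concrete, here is a deliberately tiny, self-contained sketch of next-word prediction from word-pair counts. It is only a toy illustration of the idea: real LLMs operate on subword tokens with transformer networks trained on vastly more data, and the miniature corpus below is invented for this example.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "vast training data" (invented for this illustration).
corpus = (
    "the team delivered food to the camp . "
    "the team delivered water to the camp . "
    "the team delivered supplies to the clinic ."
).split()

# Count which word tends to follow which word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed continuation of `word`."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"  # no pattern seen, so the model has nothing to offer
    return candidates.most_common(1)[0][0]

print(predict_next("delivered"))  # 'food' (each continuation seen once; ties keep first-seen order)
print(predict_next("the"))        # 'team' (the most frequent continuation in the toy corpus)
```

The toy also hints at why hallucinations happen: the model reproduces whatever patterns its data contains, with no notion of whether the continuation is actually true.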
2. Machine Learning & Humanitarian Applications
- How ML Finds Patterns
- Scott Turnbull (04:41):
“It’s like an infinite monkey scenario… but you select the monkeys doing the best work to solve a problem more efficiently.”
- NGOs wanting to predict crises (e.g., civil unrest) should:
- Engage ML expertise.
- Use past events as training data.
- Build models for similarity and predictive analytics (Bayesian math plays a role); a minimal sketch follows below.
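As a concrete illustration of the "use past events as training data" workflow above, here is a minimal scikit-learn sketch built around a Naive Bayes classifier, one simple place where Bayesian math shows up. The indicators, numbers, and labels are synthetic values invented for this example; a real early-warning model would need carefully chosen features, far more historical data, and domain review.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Synthetic historical "events": each row is
# [food_price_change_%, protest_count, displacement_index],
# and the label says whether unrest followed within 30 days (1) or not (0).
# All values are invented for illustration only.
X_train = np.array([
    [ 2.0,  1, 0.1],
    [ 1.5,  0, 0.0],
    [12.0,  9, 0.7],
    [15.0, 14, 0.9],
    [ 3.0,  2, 0.2],
    [11.0,  8, 0.6],
])
y_train = np.array([0, 0, 1, 1, 0, 1])

model = GaussianNB()  # applies Bayes' theorem with per-feature Gaussian likelihoods
model.fit(X_train, y_train)

# A new situation report, expressed with the same three indicators.
new_situation = np.array([[10.0, 7, 0.5]])
prob_unrest = model.predict_proba(new_situation)[0, 1]
print(f"Estimated probability of unrest: {prob_unrest:.2f}")
```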
3. Organizational Readiness: Data, Infrastructure, and Costs
- Foundations for AI Deployment
- Data strategy is paramount:
- Mindset: The organization's overall approach to data.
- People: Do they have data stewards and trained staff?
- Process: Ability to manage and respond to data needs.
- Technology: Tools for storage, processing, and model selection.
- David Master (07:23):
“We have these pillars of a data strategy: mindset, people, process, and technology.”
- Financial Considerations
- Costs are based on storage, compute, and third-party model access (token usage); a rough cost sketch follows at the end of this section.
- David Master (09:25):
“It’s really a matter of how you’re using it. There are storage costs and compute costs, and you’re charged per token.”
- Donors are interested but need clear cost/impact projections and reassurance regarding safety and efficiency.
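To show how per-token charging turns into a budget line, here is a back-of-the-envelope cost sketch. Every price below is a hypothetical placeholder rather than AWS's or any other vendor's actual rate; substitute your provider's current pricing and your own volume estimates.

```python
# Back-of-the-envelope monthly cost estimate for an LLM-assisted reporting workflow.
# All prices are hypothetical placeholders -- check your provider's current rate card.

PRICE_PER_1K_INPUT_TOKENS = 0.003    # USD, hypothetical
PRICE_PER_1K_OUTPUT_TOKENS = 0.015   # USD, hypothetical
STORAGE_PRICE_PER_GB_MONTH = 0.023   # USD, hypothetical

reports_per_month = 400
avg_input_tokens = 3_000    # report text plus prompt
avg_output_tokens = 800     # generated summary and tags
stored_data_gb = 50

token_cost = reports_per_month * (
    avg_input_tokens / 1_000 * PRICE_PER_1K_INPUT_TOKENS
    + avg_output_tokens / 1_000 * PRICE_PER_1K_OUTPUT_TOKENS
)
storage_cost = stored_data_gb * STORAGE_PRICE_PER_GB_MONTH

print(f"Model usage: ${token_cost:.2f}/month")    # 400 * (0.009 + 0.012) = $8.40
print(f"Storage:     ${storage_cost:.2f}/month")  # 50 * 0.023 = $1.15
```

For donor conversations, the useful point is that the variable cost scales with usage (tokens processed), so projections need realistic volume assumptions rather than a single flat figure.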
4. Risks, Ethics, and Responsible AI
- Risk Aversion and Use Cases
- Humanitarians are justifiably cautious about exposing beneficiaries to AI-driven experiences; however, with frameworks and risk categorization (such as the EU AI Act's risk tiers), high-risk uses can be mitigated rather than avoided outright. An illustrative risk-tier sketch follows at the end of this section.
- David Master (13:14):
“High risk doesn’t mean don’t use it, just understand the implications… bias in data, bias in the model, and mitigate for those concerns.”
- Scott Turnbull (15:20):
“Any risk model is incomplete without the benefit analysis as well. There’s also a risk in not acting, especially in humanitarian contexts.”
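One way to make the risk-categorization point operational is a simple register that maps each use case to a risk tier and the mitigations that go with it. The tier names below follow the EU AI Act's categories; the example use cases, their assignments, and the mitigation lists are illustrative assumptions for discussion, not legal guidance and not something prescribed in the episode.

```python
# Illustrative-only register mapping example use cases to EU AI Act-style risk tiers.
# Tier names follow the EU AI Act (unacceptable / high / limited / minimal);
# the assignments and mitigations below are assumptions, not legal advice.

RISK_REGISTER = {
    "internal report tagging": {
        "tier": "minimal",
        "mitigations": ["spot-check a sample of outputs"],
    },
    "beneficiary-facing chatbot": {
        "tier": "limited",
        "mitigations": ["disclose AI use", "offer a human fallback channel"],
    },
    "eligibility scoring for assistance": {
        "tier": "high",
        "mitigations": [
            "audit training data for bias",
            "test the model for biased outcomes",
            "keep a human in the loop on every decision",
            "document the system and log decisions",
        ],
    },
}

def review(use_case: str) -> None:
    """Print the tier and required mitigations for a registered use case."""
    entry = RISK_REGISTER[use_case]
    print(f"{use_case}: {entry['tier']} risk")
    for step in entry["mitigations"]:
        print(f"  - {step}")

review("eligibility scoring for assistance")
```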
5. Quick Wins & Use Cases for Humanitarian AI
- Data Tagging and Reporting
- Automating the extraction and tagging of qualitative and quantitative data (for example, to keep reporting consistent across NGOs) is a key quick win; a minimal tagging sketch follows at the end of this section.
- Scott Turnbull (22:35):
“AI can be used to extract and automatically generate reports. Consistent reporting across multiple organizations in the field…pure chaos. Leveraging AI makes coordination smoother and faster.”
- Environmental Monitoring
- Using AI to analyze environmental soundscapes (distinguishing chainsaws from birdsong in the Amazon rainforest) and monitor ocean health (identifying Goliath grouper by their acoustic signature) has demonstrated unforeseen secondary benefits.
- David Master (23:49):
“As you have something that you’re going after, it creates all this other data… and lots of other use cases.”
- Predictive Analytics’ Underutilization
- Reluctance stems from reliance on traditional methods; the result is missed opportunities for cross-sector impact and for communicating interconnected risks.
- Scott Turnbull (26:44):
“We’re too focused on predictive analytics for responses, but we’re missing the opportunity to help people understand interconnected impacts globally.”
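One low-effort way to prototype the report-tagging quick win is zero-shot classification with an off-the-shelf model, so no custom training is needed to get started. The sketch below uses Hugging Face's zero-shot-classification pipeline; the sector labels and sample snippet are invented for this example, and the guests do not prescribe this particular stack, so treat it as one possible starting point.

```python
# pip install transformers torch
from transformers import pipeline

# Zero-shot classification: tag free-text report snippets against a fixed label set
# without training a custom model. Labels and example text are illustrative only.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

snippet = (
    "Flooding has cut off the main road; families in the eastern district "
    "report shortages of clean water and rising cases of diarrhoea."
)
sector_tags = ["WASH", "Health", "Food Security", "Shelter", "Logistics", "Protection"]

result = classifier(snippet, candidate_labels=sector_tags, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```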
6. The HR of the Future and Skills Needed
- Changing Roles and Capacity Building
- Shift toward broader expertise: Not just data scientists but domain experts capable of understanding humanitarian needs, standards, and using AI as an enabler.
- David Master (18:45):
“A major role HR of the future plays is organizational knowledge and how it’s delivered to people for the mission.”
- Scott Turnbull (20:13):
“Intelligence is going to be free, but wisdom is at a premium… If you don’t understand the Sphere standards, you need to go.”
- EU rules make AI literacy an organizational responsibility and require staff education.
7. Sustainability, Infrastructure, and Practicality
- Lightweight Models and Riding the AI Wave
- Sustainability means leveraging turnkey cloud services, minimizing custom code, and staying agile as the AI field evolves quickly; a sketch of the small-model-first pattern follows at the end of this section.
- Scott Turnbull (41:50):
“You have to be laser focused, stay as light in code as possible… AI is changing so fast, you’ll bleed out trying to chase the tail… ride along with third-party services wherever you can.”
- David Master (44:05):
“Key piece is lighter models. Query a smaller model first, then escalate as needed—big impact on cost and efficiency.”
- Privacy and Lock-In Concerns
- Main client concerns: data privacy (is the data being used to train models elsewhere?), long-term cost, and vendor lock-in. Workshops on 'responsible AI' are needed to build trust and transparency.
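The "query a smaller model first, then escalate as needed" advice is essentially a model cascade. The sketch below shows the shape of that pattern with stand-in functions: call_small_model and call_large_model are placeholders for whichever cheap and expensive endpoints you actually use, and the confidence threshold is an assumption to tune on your own traffic.

```python
from typing import Tuple

def call_small_model(question: str) -> Tuple[str, float]:
    """Stand-in for a lightweight, cheap model: a lookup of frequent questions.
    Returns (answer, confidence). Replace with a real small-model call."""
    canned = {
        "what are the office hours": ("The office is open 09:00-17:00.", 0.95),
    }
    return canned.get(question.lower().strip("? "), ("", 0.0))

def call_large_model(question: str) -> str:
    """Stand-in for an expensive large-model call. Replace with a real endpoint."""
    return f"[large-model answer to: {question!r}]"

def answer(question: str, confidence_threshold: float = 0.8) -> str:
    """Model cascade: keep the small model's answer when it is confident enough,
    otherwise escalate to the large model (higher cost, better coverage)."""
    small_answer, confidence = call_small_model(question)
    if confidence >= confidence_threshold:
        return small_answer            # cheap path: most routine queries end here
    return call_large_model(question)  # escalate only the hard or unusual cases

print(answer("What are the office hours?"))
print(answer("Summarise the latest assessment report."))
```

The same routing shape applies whether "small" means a distilled model run locally and "large" means a hosted frontier model, or simply two tiers from one provider; the saving comes from most routine queries stopping at the cheap tier.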
8. Future and Long Game: What’s Next for AI in Humanitarianism?
- Innovation at the Edge, Scaling in the Middle
- Frontline field innovation is the source, but needs connection and scaling via industry/cloud partners.
- David Master (52:00):
“Innovation happens at the edge and scales in the middle. It needs people, attention, iteration—and the ability to recognize and scale potential.”
- Bridging Back Office and Field
- Short-term: use AI for admin/ops efficiency, capacity building, and education.
- Long-term: robotics (drones, delivery bots, medical AI) will have a major impact on field operations.
- Scott Turnbull (54:08):
“AI can enhance education dramatically… field operations will lag, but in five or ten years, robotics are going to be huge.”
Notable Quotes & Memorable Moments
- LLMs and Hallucinations: “If you ask too specific a question to too large a model, you get back coherent nonsense. That’s what’s called hallucinations.” – David Master [03:02]
- Machine Learning as Monkeys on Typewriters: “It’s the infinite monkeys on infinite typewriters problem, but we start selecting the monkeys doing the best work.” – Scott Turnbull [04:41]
- Efficiency vs. Complexity: “Intelligence is going to be free, but wisdom is going to be at a premium.” – Scott Turnbull [20:13]; “Scale complexity: This is the first time we can go from bulk service to individualization in humanitarian response.” – Scott Turnbull [32:16]
- Risk of Inaction: “There’s a risk in not acting, especially in humanitarian context. If you could have saved 10% more people, what’s the impact if we fail to act out of fear?” – Scott Turnbull [15:20]
- Innovation at the Edge: “Innovation happens at the edge and scales in the middle… The future I want: creative approaches at the frontline, then scale lessons learned industry-wide.” – David Master [52:00]
- Cloud Services & Building Blocks: “We sell Lego blocks… but how you put them together and architect it is still a large piece of it.” – David Master [37:11]
Important Segment Timestamps
- Intro to Episode and Guests: [00:48]
- LLMs in Simple Language: [03:02]
- Machine Learning for NGOs: [04:41]
- Data Strategy Pillars: [07:23]
- Costing AI Projects: [09:25]
- Donor Attitudes on Funding AI: [10:20]
- Frameworks for Responsible AI & EU Risk Levels: [13:14]
- AI for Beneficiary-Facing Chatbots: [15:12]
- Quick Wins (Tagging, Reporting, Environmental Monitoring): [22:35]
- On Predictive Analytics & Missed Opportunities: [26:44]
- Skills Gap, HR of the Future, Domain Experts: [18:45], [20:13], [37:51]
- Sustainable AI: Light Models, Vendor Lock-in: [41:39], [44:05]
- Privacy Concerns and Responsible Data Use: [47:57]
- Example: Caregiver Form-Filling Bot: [48:59]
- Future of AI – Education, Robotics, and Field Impact: [52:00], [54:08]
Takeaways for Humanitarian Practitioners
- LLMs and ML are not magic boxes—they demand understanding, critical engagement, and responsible usage.
- Effective AI deployment starts with data—strategy, stewardship, and culture matter as much as tech tools.
- Use cases that automate reporting, surface qualitative insights, and enable smarter field coordination are ripe for impact right now.
- Embracing risk with robust mitigation beats paralysis-by-analysis; the risk of inaction can outweigh the risk of trying.
- Long-term, robotics and advanced prediction could redefine the humanitarian landscape—but success rests on capacity-building, collaboration, and scaling grassroots innovation.
Listen for rich anecdotes, actionable insight, and a pragmatic yet optimistic tone throughout. Perfect for professionals seeking to bridge tech and field realities—without hype, but with hope.
