Humanitarian Frontiers Podcast: "AI Regulations: Trickling up, Pouring Down, or Nowhere to Be Seen?"
Date: April 14, 2025
Host: Chris Hoffman, with co-host Nassim Motalebi
Guests: Gabrielle Tran (Institute for Security and Technology) & Richard Maura (Tech Worker Community Africa)
Episode Overview
This episode takes a critical look at emerging AI regulations—where they exist, where they don’t, and what this means for the humanitarian sector globally. Host Chris Hoffman, co-host Nassim Motalebi, and expert guests Gabrielle Tran and Richard Maura discuss the realities behind model training, hidden labor, diverging regulatory approaches, ethical grey areas, and how all this impacts humanitarian organizations’ responsibilities and risks when deploying AI.
The conversation moves between technical foundations, global policy gaps, discussions of power and consent, supply chain ethics, and concrete suggestions for NGO practitioners navigating this ever-shifting landscape.
Key Discussion Points & Insights
1. What Actually Happens Behind the Scenes of LLMs?
(02:44 - 05:54)
- Richard Maura demystifies model training by explaining the critical but often unseen role of human labor in AI moderation:
- “AI does not exist without data. This data cannot work on itself... The need to employ individuals who can sit behind the machine and train this data…cannot be underestimated.” (02:44 - 04:53, Richard)
- Moderators remove harmful or sensitive content; this is why, for instance, a chatbot like ChatGPT "refuses" certain requests: humans have deliberately trained those guardrails. (A minimal code sketch of this gating pattern follows at the end of this section.)
- The work of training and moderating LLMs is typically outsourced to low-wage workers in the Global South, raising fundamental ethical questions.
Memorable Quote:
"Moderators are the gatekeeper of the text, so to say. Absolutely." (04:29, Chris & Richard)
2. Ethics or Just a Performance? The Labor Supply Chain & Unacknowledged Workforce
(05:54 - 09:46)
- Gabrielle Tran argues that current AI ethics discourse largely serves providers, not society:
- Focuses on technical dimensions (privacy, fairness, explainability), which benefits companies seeking to sell a responsible image.
- Ignores deeper social and political consequences—including the exploitative labor behind content moderation and data labeling.
- Cites investigative reporting showing that AI supply chains often depend on underpaid, poorly protected workers in developing economies, who are exposed to toxic material while filtering content for models.
- “If the foundation of even ethical AI depends on unethical labor, can you really call that ethical?” (07:24, Gabrielle)
- Richard adds:
- Much training data isn't owned or sourced ethically, compounding risks of intellectual property abuse and consent violations.
3. Policy Landscape: Europe Leads, Others Patchwork, Global South Lags
(09:46 - 13:06, 16:01 - 20:35, 21:34 - 25:50)
- EU AI Act: The first comprehensive AI regulation; it mandates AI-literacy training for staff and sets out risk-based rules and documentation requirements.
- US: No overarching federal law; instead, a mix of voluntary frameworks (the "Blueprint for an AI Bill of Rights," the NIST AI Risk Management Framework) plus patchwork state laws (e.g., biometric privacy in Illinois, automated-hiring rules in New York).
  - "The US...emphasizes not stifling innovation... The EU is more willing to impose hard requirements." (12:23, Gabrielle)
- Global South/Africa:
- Richard explains that, for many African policymakers, AI isn’t a priority amid economic and political challenges.
- Gaps in ethical oversight, lack of investment in regulatory frameworks, and a profit-over-impact mindset hinder meaningful governance.
- “AI in Africa is not a priority. Unfortunately, there are underlying issues... Politicians are not [focused on] what AI is and what impact it has in society.” (22:13, Richard)
- Practical Implications:
- NGOs straddling multiple jurisdictions are left with ambiguity.
- Gabrielle notes potential for global “race to the top,” as major firms may standardize systems to the strictest rules (similar to GDPR’s worldwide influence).
- However, when compliance costs outweigh potential revenue (especially in smaller markets), companies might simply pull services—“Meta pulled the news from Canada in response to Canada’s [regulation]...they pulled the plug.” (23:53, Gabrielle)
4. Who Is Accountable When AI Crosses Borders?
(16:01 - 21:33)
- The EU AI Act requires providers of general-purpose AI models based outside the EU to appoint an "authorised representative" in the EU.
- Humanitarian organizations can (soon) request full documentation and “paper trails” on how models were trained—“Can I have the paper trail of how the model was built?” (18:46, Gabrielle)
- This could improve transparency, but accountability for downstream humanitarian deployments outside the EU (e.g., an EU-based NGO using AI in Kenya) remains murky.
- Chris: “What happens then? Who’s the responsible party for what’s happening in Kenya if you’re a European NGO?” (20:01, Chris)
5. Responsible AI in High-Risk Humanitarian Contexts
(13:06 - 16:01, 26:06 - 29:27, 36:05 - 40:29)
- Nassim raises the stakes:
- Humanitarian work often avoids AI deployments that haven’t been stress-tested, due to concerns for at-risk populations.
- “We don’t see that many AI use cases that are actually tailored for affected populations because we don’t want to risk them. We don’t want to risk deploying an AI solution that could have consequences.” (14:54, Nassim)
- Gabrielle explains regulatory mechanisms for accountability:
- Providers must equip downstream users with documentation and risk guidance.
- Under the EU Act, there will be channels for NGOs to request audits/insights into models they're using.
- Global divergence vs. convergence:
- While regulations diverge, market pressures may push companies to adopt the strictest standards globally (the GDPR precedent).
6. Consent, Data Sovereignty, and Power Imbalances
(29:27 - 33:22)
- Data trusts as an (imperfect) solution: a third party holds and manages data on behalf of beneficiaries, helping navigate permissions and privacy for vulnerable groups. (A minimal sketch of this arrangement follows at the end of this section.)
- However, Chris and Richard highlight huge deficits in digital maturity and meaningful consent:
- Workers in the Global South often accept poor conditions without understanding data rights or consequences, driven by economic desperation.
- “Consent is the last thing individuals, especially the workforce, will be mindful of... They are desperate for work.” (31:15, Richard)
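For readers unfamiliar with the data-trust model mentioned above, here is a minimal Python sketch, with hypothetical Record and DataTrust types, of how a trust mediates access: data is released only for purposes the beneficiary has actually consented to, shifting the enforcement burden from the (possibly desperate) individual to the trustee.

```python
# Minimal data-trust sketch: a third party holds beneficiaries' records
# and releases them only for consented purposes. All names hypothetical.

from dataclasses import dataclass, field

@dataclass
class Record:
    beneficiary_id: str
    data: dict
    consented_purposes: set = field(default_factory=set)

class DataTrust:
    def __init__(self):
        self._records: dict[str, Record] = {}

    def deposit(self, record: Record) -> None:
        self._records[record.beneficiary_id] = record

    def request(self, beneficiary_id: str, purpose: str) -> dict | None:
        """Release data only if the beneficiary consented to this purpose."""
        record = self._records.get(beneficiary_id)
        if record and purpose in record.consented_purposes:
            return record.data
        return None  # the trust, not the requester, enforces the "no"

trust = DataTrust()
trust.deposit(Record("b-001", {"location": "camp-3"}, {"aid-delivery"}))
print(trust.request("b-001", "aid-delivery"))    # {'location': 'camp-3'}
print(trust.request("b-001", "model-training"))  # None: no consent given
```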
7. What Should Humanitarian Organizations Do?
(36:05 - 40:29)
Gabrielle’s Policy Roadmap:
- Start with core ethical principles: fairness, transparency, accountability, human well-being.
- Align policies with the highest available standards (EU, NIST).
- Assign explicit AI governance responsibilities internally—no matter organization size.
- Maintain human accountability for every major AI-driven decision (see the sketch at the end of this roadmap).
- If in doubt, delay deployment and beta-test in controlled environments.
- Invest in AI literacy for practitioners, especially in high-stakes contexts.
- Use simple models where possible, as high-complexity models increase opacity and risk undetected biases.
- Example: a healthcare AI that misread asthma data because of flawed training assumptions and could have endangered patients (38:38, Gabrielle).
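One way to operationalize the "human accountability" item in the roadmap is to make sign-off structural: the model can recommend, but nothing executes without a named reviewer on record. A minimal Python sketch, with hypothetical names throughout:

```python
# Sketch of "a human remains answerable for every major AI-driven
# decision": recommendations cannot execute until a named staff member
# signs off. Decision, model_recommend, etc. are hypothetical names.

from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    recommendation: str             # what the model suggests
    approved_by: str | None = None  # the accountable human, once signed

def model_recommend(subject_id: str) -> Decision:
    """Stand-in for any AI system producing a recommendation."""
    return Decision(subject_id, "prioritize for cash assistance")

def sign_off(decision: Decision, reviewer: str) -> Decision:
    """Record the named human who takes responsibility."""
    decision.approved_by = reviewer
    return decision

def execute(decision: Decision) -> None:
    if decision.approved_by is None:
        raise PermissionError("No accountable human has signed off.")
    print(f"{decision.subject_id}: {decision.recommendation} "
          f"(approved by {decision.approved_by})")

d = sign_off(model_recommend("case-117"), reviewer="j.mwangi")
execute(d)  # runs; an unsigned decision would raise instead
```

This mirrors Gabrielle's test at 36:52: if no human in the organization will put their name on the decision, the system should not be used.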
Notable Quotes & Memorable Moments
- “If the foundation of even ethical AI depends on unethical labor, can you really call that ethical?”
  — Gabrielle Tran (07:24)
- “Some of this data... is not even the owners’... That also raises a whole issue about ethical considerations in AI.”
  — Richard Maura (08:36)
- “Accountability in humanitarian AI use means that humans remain answerable for any decision... If a human in your organization feels they can’t answer for an AI-driven decision, I don’t think you should use it.”
  — Gabrielle Tran (36:52)
- “Consent is the last thing individuals... will be mindful of... They are desperate for work.”
  — Richard Maura (31:15)
- “These corporations will comply with stringent regulations only when the cost of non-compliance outweighs the cost of adaptation… In smaller markets, perhaps in the Global South, where compliance could outweigh potential revenue, they might just opt out entirely.”
  — Gabrielle Tran (23:53)
- “AI, or more complex AI, is not an inevitability if it does not enhance these capabilities and real kind of ethical deliberation. It demands asking not just how to ethically implement AI, but whether certain applications of AI should exist at all. It's inherently social, it's political, and it's a moral judgment rather than a technical one.”
  — Gabrielle Tran (42:42)
Timestamps for Significant Segments
- 02:44 — Richard Maura explains the basics of LLM moderation and training labor.
- 05:54 — Gabrielle Tran discusses how ethics conversations serve providers and expose labor exploitation.
- 09:46 — Chris & Gabrielle on the fragmented global regulatory landscape.
- 16:01 — Accountability and auditability in new EU regulatory frameworks.
- 21:59 — Richard outlines Kenya’s and Africa’s low prioritization of AI policy.
- 23:53 — Gabrielle draws lessons from Meta's withdrawal from Canada.
- 29:27 — Chris and Gabrielle discuss data trusts, digital maturity, and the challenge of consent in humanitarian contexts.
- 36:05 — Gabrielle’s roadmap for humanitarian AI governance and the asthma patient AI case.
- 42:23 — Closing: Gabrielle on the need to ask “whether” some AI use cases should exist.
Closing Messages & Hopes for the Future
- Gabrielle: Emphasizes pursuing fairness, transparency, accountability, and human well-being over technical solutionism, arguing that the debate on AI's ethical use must be grounded in real-world consequences rather than mere performance of ethicality.
- Richard: Calls for stronger UN and international intervention to ensure member states uphold human and labor rights in AI supply chains, especially in contexts vulnerable to exploitation.
- Chris & Nassim: Highlight the ongoing need for investment in AI literacy, deliberative organizational governance, and a culture of "principles first," particularly when the stakes are highest for humanitarian populations.
In Summary
This episode of Humanitarian Frontiers lays bare the high-stakes reality of AI deployment: the gulf between shiny tech promises and practice on the ground, global regulatory gaps, invisible labor, and the practical struggles of humanitarian organizations. The conversation is a must-listen (and a must-read) for practitioners who want to look beneath the surface of AI tools and make responsible, human-centered choices for the most vulnerable in society.
