High-Impact Growth — “Beyond ‘AI for Good’: Building Responsible AI”
Podcast by Dimagi | Air date: January 22, 2026
Host: Amy Vaccaro (A), Senior Director of Marketing at Dimagi
Co-Host: Jonathan Jackson (C), CEO & Co-founder, Dimagi
Guest: Genevieve Smith (B), Founding Director, Responsible AI Initiative, UC Berkeley
Episode Overview
This episode moves beyond the “AI for Good” hype to examine what it truly means to develop and govern responsible AI, especially in global development. Guest Genevieve Smith, an expert in responsible AI, outlines a practical framework for funders, practitioners, and policymakers, emphasizing the five core lenses of responsibility. The discussion explores the limits of viewing AI as an unequivocal social good, the risks of unchecked deployment, and participatory models like data cooperatives to empower vulnerable communities.
Key Discussion Points & Insights
1. Genevieve Smith’s Journey to Responsible AI
- Background:
- 10+ years in international development (UN Foundation, UN Women, International Center for Research on Women) (02:07)
- Focus on gender, economic empowerment, and digital inclusion.
- Shift to Responsible AI:
- Research at Berkeley on AI bias—identifying Western-centric research gaps.
- PhD at Oxford on the social implications of AI for financial inclusion, especially credit assessment tools in LMICs.
- Founded the Responsible AI Initiative at UC Berkeley, aiming for multidisciplinary, globally relevant research.
- Teaches responsible AI to undergraduates, graduates, and executives.
“Applications of AI in low and middle income countries were often categorized as AI for good, but really lacked that critical evaluation around some of these unintended consequences that... tools can have.”
— Genevieve Smith [03:41]
2. Defining Responsible AI
- Core Definition:
- Designing, developing, deploying, and using AI in ways that are safe, fair, and trustworthy (06:22).
- Five Responsibility Lenses:
- Fairness & Bias:
- Does the tool perform differently for various populations? Does it reinforce stereotypes or inequalities?
- Example: LLMs less accurate in LMIC languages; image generators perpetuate gendered stereotypes.
- Privacy:
- How is data collected, secured, consented, and governed?
- Security & Safety:
- How might models be misused by bad actors or fail (e.g., prompt injection, deepfakes, misinformation)?
- Transparency:
- Can users and developers understand model decision-making and training data?
- Using closed models like GPT-5 means accepting limited transparency by default.
- Accountability:
- Are there clear structures if something goes wrong? Who is responsible for redress?
- Broader Considerations:
- Future of Work: Will AI displace or uplift jobs, especially in LMICs?
- Environmental Impact: Data centers and the AI “compute race” drive rising energy use.
- Geopolitics: The race for AI dominance intensifies, impacting global priorities.
“The purpose of using a tool for good is not sufficient because it doesn’t actually mean that any of these other things are solved for.”
— Genevieve Smith [12:48]
3. From Principles to Practice: Operationalizing Responsible AI
- For Product & Program Managers:
- Conduct risk assessments and audits with cross-functional teams.
- Implement adversarial “red teaming,” simulating how models could be exploited or fail.
- Leadership must prioritize responsibility—embed it in performance reviews and key results (OKRs).
- Ask the fundamental question: Is AI really the right tool for this problem? Avoid “AI for AI’s sake.”
“Choosing a model can be really important to assess... needs and potential risks. …But it’s also really important that organizational leaders prioritize responsibility because they're ultimately the ones that set the priorities of the organization.”
— Genevieve Smith [15:21]
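The cross-functional risk assessments and audits described above often start with something very simple: disaggregating a model's outcomes by population and flagging large gaps. This is a minimal sketch of that idea, echoing the episode's credit-approval example; the data, group names, and the 10-point gap threshold are illustrative, not from the episode.

```python
# Minimal sketch of a disaggregated fairness audit: compare a model's
# approval rates across groups and flag large gaps. Illustrative only.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparity_flags(rates, max_gap=0.1):
    """Return group pairs whose approval rates differ by more than max_gap."""
    groups = sorted(rates)
    return [(a, b) for i, a in enumerate(groups) for b in groups[i + 1:]
            if abs(rates[a] - rates[b]) > max_gap]

# Toy audit data: women approved 1 of 3 times, men 2 of 3 times.
decisions = [("women", True), ("women", False), ("women", False),
             ("men", True), ("men", True), ("men", False)]
rates = approval_rates(decisions)
print(disparity_flags(rates))  # -> [('men', 'women')]
```

In practice the same disaggregation applies to any per-group metric (accuracy, loan size, error rate), and flagged gaps are a prompt for investigation rather than an automatic verdict.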
- On AI Hype:
- Beware of "pilotitis"—developing AI pilots with no long-term scaling plan or value proposition, a lesson from earlier digital health waves (19:43).
- Don’t over-prioritize inclusion if it means overlooking equity (23:43).
4. Equity and the Risks of Helicopter Tech
- Past Parallels:
- Clean cooking stoves and digital health interventions failed when “helicoptered” into LMICs without local adaptation.
- AI Language Disparity:
- LLMs privilege Standard American English; other language varieties receive poorer outputs and more stereotyped responses.
- AI in Financial Inclusion:
- Gender-biased credit models: Women get fewer and smaller loans, despite better repayment than men (even after controlling for variables).
- Optimizing for profit often means perpetuating existing biases unless deliberate audits/interventions are performed.
“Even inclusion can actually come at the cost of equity sometimes. And we shouldn’t also think that inclusion is sufficient when it comes to some of the goals that we're trying to achieve as a development community.”
— Genevieve Smith [27:16]
5. Participatory Data Governance & Community Agency
- Participatory Design Spectrum:
- Ranges from minor input (e.g., surveys) to genuine co-creation and ongoing governance.
- Data Cooperatives:
- Example project: Building datasets (like images of people in non-stereotypical roles) structured as co-ops. Contributors have governance rights and share in benefits/revenue.
- Participatory approaches help counteract “extractive” AI and build for true community benefit.
“Responsibility isn’t just about finding the issues... It’s also about imagining technology futures that we want to live in and how do we ensure that they represent people's needs and perspectives and priorities and bring out community voice.”
— Genevieve Smith [44:32]
6. Leadership, Organizational Culture, & Personal AI Use
- Organizational Principles:
- Set clear AI responsibility principles.
- Create safe spaces for experimentation—a “lunch and learn” to share AI applications, successes, and concerns openly.
- Encourage transparency and normalization of responsible AI use, not shaming staff who do or don’t use it.
- Personal Practice:
- Genevieve uses AI for coding, editing, and as a “thinking partner,” but prefers running models locally for privacy (e.g., DeepSeek via Ollama) (33:06).
- Important to stay aware of cultural and privacy limitations of any model.
Notable Quotes & Memorable Moments
- On the deficit of critical evaluation in “AI for Good” deployments:
“Simply calling a project ‘AI for good’ is not enough.” [00:55]
- Five Responsibility Lenses summary:
“Fairness and bias, privacy, security and safety, transparency, and accountability… become important for any AI application, right? And especially as we think about how AI is being deployed in the development sector, oftentimes for vulnerable communities…”
— Genevieve Smith [11:18]
- On practical culture-change:
“Meta recently changed its performance reviews to have a question around how AI is being used in the workplace. …Imagine if there was another question, which is, how are you now using it in consideration to support for responsibility or trustworthiness?”
— Genevieve Smith [30:08]
- On inclusion vs. equity:
“Inclusion can actually come at the cost of equity sometimes.”
— Genevieve Smith [27:16]
- On participatory design:
“We’re creating this new dataset... as a cooperative structure. So that essentially means that the people who contribute their data will have governance rights over how it’s used over time and be able to also have payments go to them if it is approved for use in different cases…”
— Genevieve Smith [43:29]
Timestamps to Key Segments
- Genevieve’s career journey: 02:07–05:46
- Defining Responsible AI & core lenses: 06:22–13:33
- Moving from big principles to everyday practice: 14:56–19:43
- Challenges: Hype, inclusion vs. equity, language bias: 19:43–28:38
- Practical advice for orgs & individuals: 28:38–33:06
- On participatory design and data cooperatives: 41:31–45:38
- Actionable takeaways for funders, policymakers, practitioners: 46:18–48:09
Actionable Takeaways
What You Should Do, Starting Monday
- For Funders:
- Integrate a responsibility lens into application reviews.
- Fund participatory processes, not just products.
- For Policymakers:
- Avoid relying solely on industry self-regulation; recognize that profit motives may not align with equity goals.
- For Practitioners:
- Embed the five responsibility components in every product/design review.
- Use practical frameworks (like the Responsible AI Playbook) to operationalize responsible AI.
“How can we use those five responsibility components as a lens to make products that better serve our communities and have a responsible AI strategy?”
— Genevieve Smith [47:10]
Resources Mentioned
- UC Berkeley Responsible AI Initiative — includes playbooks for responsible AI use.
- Organizations should consider running open models locally (e.g., DeepSeek via Ollama), with technical and contextual caveats.
Closing Thought
Embracing responsible AI is not about slowing innovation, but about building trust and designing technology that truly serves everyone. Move beyond the “for good” label—use critical frameworks and prioritize local contexts, equity, and community voice.
