Nonprofit Lowdown #353: "AI Can't Be Ignored" with Nate Wong
Date: September 1, 2025
Host: Rhea Wong
Guest: Nate Wong, Partner at Bridgespan, Co-author of "AI Cannot be Ignored"
Episode Overview
This episode explores the pivotal role artificial intelligence (AI) is coming to play in the nonprofit and social sectors. Rhea Wong sits down with Nate Wong to discuss his recent white paper, “AI Cannot be Ignored,” and why nonprofits must not only acknowledge AI but actively shape its integration into the sector. Together, they tackle common fears, discuss resources and practical use cases, and reflect on the deeper human implications of adopting AI.
Key Discussion Points & Insights
1. Nate Wong's Journey to AI & Social Impact ([01:22]–[03:50])
- Nate started in engineering due to family expectations, but his passion evolved into “public interest technology”: using tech for societal good.
- He felt more compelled by the strategic, upstream conversations around technology’s impact: not just the how, but the should.
- At Bridgespan, Nate refocused on aligning technology adoption with nonprofit missions, not just chasing trends.
“What are the societal implications of technology, good, bad and ugly? And where do I want my role to be within that?” — Nate Wong [02:03]
2. AI as a Societal Turning Point ([03:50]–[06:54])
- Rhea compares AI adoption today to the dawn of the internet.
- Nate introduces the “Worlds” framework:
- World 1: the current institutional reality; risk-averse and slow to adapt.
- World 2: a more authoritarian, rigid future.
- World 3: an uncharted, reimagined future, which AI could open up.
- Nonprofits often operate from World 1, stuck in caution and risk aversion; by ignoring AI, they risk extinction or irrelevance.
“I feel like AI is in that world 3…Oftentimes nonprofits are entering and they're entirely risk averse.” — Nate Wong [04:46]
3. The Risk of Exclusion: Why Nonprofits Must Engage with AI ([06:54]–[09:14])
- Major risks if nonprofits abstain:
- Losing influence over how AI is shaped and used
- Failing to supply data and human values to AI models
- Falling behind in impact and efficiency
“If you're not [participating], then you're actually not supplying these LLMs…data that could actually be useful.” — Nate Wong [07:37]
- Efficiency argument: while common, using AI only for efficiency is too limiting; AI can help reimagine what the sector can be.
4. AI Adoption when Under-Resourced ([09:14]–[13:34])
- Many nonprofits feel too overwhelmed to start with AI.
- Shadow AI is already happening: staff experimenting with tools without leadership’s approval (“70% of people are actually using unapproved AI” [10:31]).
- Recommendations:
- Encourage safe, iterative experimentation.
- Start small; use free tools in mission-aligned, low-risk ways (e.g., drafting donor letters).
- Build champions within your team; let tech-forward staff lead.
“Just getting your hands on it in very micro ways…can be really exploratory.” — Nate Wong [11:32]
5. Investing in Tech: Making the Case to Funders ([13:34]–[17:03])
- The narrative that donors won’t invest in tech is shifting, but there’s still a lack of tech understanding on the funder side.
- Tech, like computers and Zoom, is essential “infrastructure,” not just a line item.
- Nate’s rule of thumb: Multiply your tech budget estimate by five to include true cost (maintenance, talent, etc.).
- It’s not only up to nonprofits to advocate—donors must also change behaviors and expectations.
“AI is the Trojan horse for a lot larger conversations about what technology can be invested in.” — Nate Wong [14:51]
6. Avoiding the 'AI Sludge' and Maintaining Quality ([17:03]–[20:15])
- Rhea: Using AI for efficiency alone risks flooding the field with low-quality content ("AI sludge").
- Human oversight is essential—AI should draft, but humans must refine and personalize.
- AI should enable better, more thoughtful work, not just faster outputs.
“We talk about saving time, but then what do you do with that newfound time?...It isn't just getting a task done faster, cheaper, but it is getting it done better as defined by the human.” — Nate Wong [18:50]
7. Let the Robots Robot, Let the Humans Human ([20:15]–[22:37])
- AI should free up staff for deeper human relationships and tasks that require empathy, creativity, and trust.
- Frictionless tech isn't always desirable: “healthy friction” (e.g., human connection, trust-building) is vital for meaningful nonprofit work.
“Let the robots robot, let the humans human.” — Rhea Wong [20:28]
"Healthy friction...builds trust, it builds equity, it helps us slow down." — Nate Wong [21:15]
8. Societal Impact: Technology and Relational Breakdown ([22:37]–[26:52])
- Rhea shares a personal anecdote about shortened attention spans due to tech and the loss of basic conversational skills in younger generations.
- They banter about starting a movement: "Make America Relational Again (MARA)."
- Polarization is heightened by lack of real conversation—technology can facilitate, but also hinder relational depth.
9. Navigating Risks: Bias, Privacy, and Ethics in AI ([26:52]–[31:28])
- AI is not all or nothing—engage with a reflective, iterative mindset.
- Use resources like Fast Forward’s AI Playbook to guide policy and adoption with an eye on bias, privacy, and accountability.
- Match adoption to risk level: start with internal, low-risk uses; think role-by-role for specific applications.
- Experimentation and feedback loops are essential to avoid paralysis and lagging behind.
“It's not an all or nothing play…Never going to be a good time…It's important to experiment, but to do it in a responsible way and in a container where there's feedback loops.” — Nate Wong [28:14]
10. AI in Grantmaking: Opportunities and Cautions ([31:28]–[34:56])
- Some funders are interested in AI for grant sourcing and due diligence, but Rhea and Nate both see risks:
- Could reinforce impersonal grant practices
- Could result in “AI talking to AI” with little human relevance or differentiation
- The value lies in using AI to free staff time for real relationship-building, not to replace it or create more bureaucracy.
11. Case Study: Center for Employment Opportunities ([35:25]–[38:46])
- CEO deployed AI to support case managers working with formerly incarcerated individuals.
- AI transcribes case meetings, suggests follow-ups, and helps surface human bias in documentation (e.g., how notes differ between the first and last client of the day).
- Example shows AI’s power to reduce pre-existing bias and free staff to be present with clients.
“The starting point for bias, I think we almost think about, oh my gosh, AI is creating bias. But what was the normal human bias that we were starting from?” — Nate Wong [36:48]
12. Fun & Personal Use Cases of AI ([38:46]–[41:48])
- Rhea’s favorite: photographing her wardrobe and asking AI to generate travel packing lists.
- Nate’s favorite: using AI for trip planning (scheduling, weather, activities), and experimenting with self-roasting prompts (“Have it roast you, but as your drunken bestie, it’s hilarious.” — Rhea Wong [41:21]).
Notable Quotes
- “AI is not coming to take your job. Someone who knows how to use AI is coming to take your job.” — Rhea Wong [17:07]
- “If you’re not actually learning how to use this new technology, you’re essentially tying your own hands behind your back.” — Rhea Wong [05:32]
- “Efficiency is still a very narrow view of AI…We have yet to imagine what world 3 looks like.” — Nate Wong [08:44]
- “Multiply your [tech budget] number by five. That is probably more in the realm of what it will take.” — Nate Wong [16:40]
- “It’s not AI itself, we’re so fixated on the tool versus the use of the tool and therefore we can't reimagine a good use of it.” — Nate Wong [25:43]
- “Let the robots robot, let the humans human.” — Rhea Wong [20:28]
- “Make America Relational Again.” — Rhea Wong [23:38]
- “AI should be used to free up your time for you to be a human.” — Rhea Wong [20:23]
Timestamps for Key Segments
- [01:22] Nate’s path to social impact and AI
- [03:50] Is AI the next internet? Nonprofit extinction risk
- [04:46] “World 1, 2, 3” framework & risk aversion
- [06:54] What nonprofits risk by not engaging AI
- [09:14] How to start with AI when overwhelmed
- [13:34] Making the investment case to funders and boards
- [17:03] The downside of AI-for-efficiency-only (“AI sludge”)
- [20:15] Enabling human connection, not just automation
- [22:37] Societal shifts: technology, attention spans, and “MARA”
- [26:52] Navigating bias, privacy, and ethics—no perfect time
- [31:28] AI in grantwriting/grantmaking—tools and traps
- [35:25] Case study: using AI in nonprofit case management
- [38:46] Favorite/fun personal AI use cases
Resources Mentioned
- Bridgespan’s white paper: "AI Cannot be Ignored"
- Fast Forward’s AI Playbook and Policy Builder ([link to be in show notes])
- Center for Employment Opportunities (use case)
- Connect with Nate Wong on LinkedIn ([link to be in show notes])
Tone & Style
The conversation balances playful banter with honest, strategic advice. Rhea and Nate are pragmatic and sometimes irreverent, but always focused on practical steps and the human element underlying technology adoption in nonprofits.
For Listeners: Key Takeaways
- AI isn’t a passing fad—nonprofits must at least “stick a toe in” or risk losing influence, efficiency, and relevance.
- Don’t let fear or lack of perfection paralyze your organization as the technology evolves.
- Start small, experiment, and encourage a culture of learning about and from AI tools.
- Advocate for true investment in tech, understanding total cost and not short-changing infrastructure or talent.
- Use AI to enable stronger relationships, not just outputs; be wary of “AI sludge.”
- Seek resources and build thoughtful policies for responsible, ethical AI use.
- Let the robots handle the grunt work—free up your people to deepen what makes nonprofits irreplaceable: human connection.
Final Call to Action:
“Make America Relational Again”—think about where AI frees you up to be more human, and start experimenting, reflecting, and shaping the AI future with your values at the center.
