Podcast Summary: The Analytics Power Hour #292
Episode Title: AI Without Adult Supervision with Aubrey Blanche
Date: March 3, 2026
Host(s): Moe Kiss, Val Kroll
Guest: Aubrey Blanche
Episode Overview
This special International Women's Day episode features a deep dive into the current state of AI, with particular focus on ethical considerations, intentionality, corporate responsibility, and the unique risks and opportunities presented by rapid AI adoption. Aubrey Blanche, an experienced leader at the intersection of tech, ethics, and diversity, joins hosts Moe Kiss and Val Kroll to unpack how organizations and individuals can foster more responsible and principled use of AI, challenging assumptions of inevitability and examining real-world dilemmas for practitioners.
Key Discussion Points & Insights
1. AI as the "Teenager Home Alone" Metaphor
- [02:34] Aubrey: AI's development feels unsupervised—like a teenager left in charge—requiring more intention in its use.
- "Now is not the time to debate whether we should have a teenager, because the teenager is here."
- The inevitability narrative is pushed by those with vested interests, and it is not preordained that AI will deliver only positive outcomes.
- AI's impact is shaped by current incentives, but there is room for agency and course-correction.
2. Flawed Objectives & The Efficiency Trap
- [04:32] Aubrey: Efficiency shouldn’t be the primary goal of AI; focusing solely on speeding up the status quo is shortsighted.
- "Efficiency itself is a bullshit objective."
- The real question: Can AI help us innovate and create broader societal value—not just do what we do now, faster?
- Acknowledges a skills gap between builders and regulators, impeding productive conversations and resulting in missed benefits.
3. Intentional AI Adoption & Co-Intelligence
- [06:52] Aubrey: Advocates re-centering the objective on human flourishing and innovation.
- Highlights co-intelligence, where human and machine collaboration yields more value than replacement or automation alone. (Reference to Oya at Cambridge)
4. Organizational Risk Appetite & Illusions of Safety
- [11:15] Aubrey: Many organizations stick to "safe" uses of AI (e.g., internal productivity tools, not customer-facing innovations).
- This perceived safety is questionable, especially for labor-impacting choices; invisible risks (e.g., environmental, reputational) often go unconsidered.
- “I rarely see corporate leaders considering the risk of Earth's dwindling fresh water supplies when they're thinking about AI adoption.”
5. Frameworks for Ethical AI Governance
- [14:10] Aubrey: Proposes crafting an AI Use Policy rooted in company values, tested with real-world scenarios, and supported by practical training for staff.
- Principle-based guidelines stay relevant despite fast-moving technology, avoiding "laundry lists" of rules that quickly become outdated.
- "[Principle-based frameworks] actually solve some of that problem." [21:59]
6. Where Does Ethical Risk Ownership Sit?
- [18:52] Aubrey: Governance committees are essential—ethicists, risk professionals, customer advocates, and technical experts must all be involved.
- Example: A UK company that lets any employee escalate an AI ethics concern to a dedicated council.
- “If companies had the will…we could do much better.” [20:34]
- Sustaining ethical programs depends on visible executive support, especially when finance incentives push the other way.
7. Empowering Individuals: Personal Responsibility
- [25:16] Aubrey: Every practitioner has a sphere of influence; ethical decisions should be weighed even where no formal guidance is provided.
- Case example: Using AI note-takers—turn off data training, prioritize informed consent, audit product security credentials, and avoid using AI for sensitive information (a minimal sketch of these defaults appears below).
- “For me…the default should be off and people can opt in if they want to. It's a dark pattern.” [25:58]
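A minimal sketch can make the "default off, opt in" point concrete. The Python below is purely illustrative: the settings object and field names (e.g., NoteTakerSettings, allow_model_training_on_transcripts) are hypothetical, not any vendor's API, and simply show what privacy-protective defaults and explicit participant consent might look like in a team's own wrapper around a note-taking tool.

```python
# Hypothetical settings for a team's AI note-taker wrapper; names are illustrative only.
from dataclasses import dataclass


@dataclass
class NoteTakerSettings:
    # Privacy-protective defaults: anything sensitive is opt-in, never opt-out.
    allow_model_training_on_transcripts: bool = False  # off by default, avoiding the dark pattern
    announce_recording_to_participants: bool = True    # informed consent comes first
    retain_transcripts_days: int = 30                  # bounded retention rather than "forever"
    allow_sensitive_topics: bool = False               # keep sensitive material out of the tool


def ask_participant(name: str) -> bool:
    """Stub consent prompt; in practice this is an explicit disclosure in the meeting or invite."""
    print(f"Requesting recording consent from {name}")
    return True


def may_record(settings: NoteTakerSettings, participants: list[str]) -> bool:
    """Only record when the recording is announced and every participant has agreed."""
    if not settings.announce_recording_to_participants:
        return False
    return all(ask_participant(p) for p in participants)


if __name__ == "__main__":
    settings = NoteTakerSettings()
    print("OK to record:", may_record(settings, ["Ana", "Bo"]))
```

The design choice mirrors the quote above: users who want their data used for training can flip the flag on, but no one is enrolled silently.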
8. Data Professionals: Closer to Risk, More Responsibility
- [30:55] Aubrey: Handling sensitive information elevates the ethical bar; however, practices from GDPR and privacy can (and should) inform responsible AI use.
- The ability to ask "what could go wrong?" is crucial, but practitioners must be wary of limited perspectives—lived experience matters in surfacing true risk.
9. The Measurement Challenge & "Phantom Value"
- [35:29] Aubrey: There's a disconnect between perception and reality regarding AI-generated value.
- Cites organizational myths—such as believing AI adoption inherently boosts financial returns—often unsupported by empirical evidence.
- Illustrates with examples (e.g., Klarna rehiring staff after failed AI-driven layoffs).
10. Analysts' Unique Role: Proactive Harm Monitoring
- [41:44] Aubrey: Analysts are uniquely positioned not just to measure outcomes, but to help define and track leading indicators for potential harm.
- “Everything could go wrong all the time…so that's what I would say is like, okay, define the bad…analyst can say…maybe we just do 10% of the objective for two or three months to…measure that…”
- Measurement should encompass both intended and unintended consequences; the approach may be driven by ethical or business imperatives, depending on the organization (see the monitoring sketch below).
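One way to make "define the bad, then measure it" concrete is a small check run against a limited pilot, in the spirit of the "10% of the objective for two or three months" idea above. The Python sketch below is a minimal illustration under assumed indicator names (complaint_rate, escalation_rate, refund_rate_delta) and made-up thresholds; real leading indicators would come from the harms the team itself defines.

```python
# Minimal "define the bad, then monitor for it" sketch for a limited AI rollout.
# Indicator names and thresholds are hypothetical placeholders.
import statistics

# Leading indicators of harm, each with a threshold that should trigger review.
HARM_THRESHOLDS = {
    "complaint_rate": 0.02,      # share of AI-assisted interactions that draw a complaint
    "escalation_rate": 0.10,     # share handed back to a human after the AI failed
    "refund_rate_delta": 0.05,   # increase in refunds versus the non-AI baseline
}


def check_harm_indicators(pilot_values: dict[str, list[float]]) -> list[str]:
    """Average each indicator over the pilot window and flag any that exceed its threshold."""
    breaches = []
    for metric, threshold in HARM_THRESHOLDS.items():
        observed = statistics.mean(pilot_values.get(metric, [0.0]))
        if observed > threshold:
            breaches.append(f"{metric}: observed {observed:.3f} exceeds threshold {threshold:.3f}")
    return breaches


if __name__ == "__main__":
    # Example: a 10% pilot group monitored for a few weeks before any wider rollout.
    pilot = {
        "complaint_rate": [0.010, 0.030, 0.025],
        "escalation_rate": [0.080, 0.070, 0.090],
        "refund_rate_delta": [0.020, 0.010, 0.000],
    }
    for breach in check_harm_indicators(pilot):
        print("Review before scaling:", breach)
```

The specific numbers do not matter; the point is that harm is expressed as observable indicators with thresholds agreed before the rollout, so a breach triggers review rather than debate.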
11. Agency, Expertise, and Agentic AI
- [45:34] Aubrey: People too easily cede decision-making to AI, especially when lacking domain expertise.
- “The people who are most likely to give up agency are the least skilled at the thing that AI is doing.”
- AI acceptance is higher among the less skilled, which increases risk; emphasizes the need for AI literacy as a basic competency.
12. AI Literacy as Public Health
- [48:26] Aubrey: Proposes AI safety training akin to public health messaging—simple, core behaviors repeatedly reinforced.
- “We need to think about AI literacy as a public health problem…”
- Corporate leaders have a heightened ethical responsibility to teach safety behaviors, since government regulation cannot keep pace.
13. Intentionality in Product Development
- [52:06] Moe & Aubrey: Planning with AI requires anticipating more uncertainty and ambiguity up front, but many risk management strategies from other domains already exist and can be borrowed.
- “You also have to operate with an understanding of something will go wrong. And I may or may not detect it with AI…” [54:06] - Aubrey
14. Interdisciplinary Value of Expertise
- [56:43] Aubrey: Technical expertise does not equal ethical (or other) expertise; ethical risk requires expanded perspectives and skills, not just more tech.
Notable Quotes & Memorable Moments
- On ethical frameworks: “[Principle-based frameworks] actually solve some of that problem… the underlying function of the technology changing actually doesn't change that as a governance structure…” — Aubrey [21:59]
- On efficiency vs. innovation: “Efficiency itself is a bullshit objective.” — Aubrey [04:32]
- On personal advocacy: “Imagine if each of us did one slightly more responsible thing every week. That's actually fundamental systems change…” — Aubrey [51:20]
- On analysts’ role in measurement: “Analysts…are actually able to translate this idea of harm as a theoretical thing into a set of monitoring procedures that would actually tell you if something's going wrong.” — Aubrey [41:44]
- On who should govern AI risks: “You need folks who are actual risk management professionals… an ethicist… someone to represent the customer… and some technical folks…” — Aubrey [18:52]
- On "phantom value": “Entire markets are responding to PR talking points written by people who have an incentive for you to believe that and don't really have any accountability structures to tell the truth.” — Aubrey [36:00]
Timestamps for Key Segments
- AI "teenager" metaphor & inevitability narrative: [02:34]
- Flawed goals—efficiency vs. innovation argument: [04:32]
- Co-intelligence and AI as partner, not replacement: [08:49]
- Org risk strategy & dangers of "safe" adoption: [11:15]
- Frameworks for responsible AI: [14:10]
- Personal responsibility and navigating AI tool choices: [25:16]
- Data practitioners' unique ethical responsibilities: [30:55]
- Phantom value & measurement fallacies: [35:29]
- Analyst’s unique measurement value: [41:44]
- Agency, expertise, and agentic AIs: [45:34], [48:26]
- AI literacy as public health: [48:26], [51:20]
- Planning & intentionality analogies: [52:06], [54:06]
- Interdisciplinary committees & false confidence in solo technical expertise: [56:43]
The Analytics Power Hour "Last Calls"
[59:04+]
- Aubrey Blanche: Recommends watching Heated Rivalry, which is not just entertaining but also showcases new norms in media creation with an ethical, inclusive approach.
- Val Kroll: Endorses Joel Dickinson's Medium article "I Don't Care What You Build, Neither Should You." Initially skeptical, she valued the focus on outcomes over outputs, especially the relentless leadership question, “How will we know if the problem is solved?” [60:01]
- Moe Kiss: Asks for crowdsourced resources and ideas on measuring user engagement and success in AI products—what differentiates "creativity" or value in AI-driven vs. traditional product experiences. [61:01]
Closing Reflections
Theme:
AI's rapid evolution demands intentional, ethically grounded guidance at both the organizational and individual level. The analytics community is uniquely equipped not merely to adopt AI, but to help guide its responsible integration through measurement, caution, and advocacy.
Aubrey’s final message:
Being a good actor with AI isn’t about waiting for direction from above, but about understanding your power, making thoughtful decisions, and encouraging principled, inclusive discussion wherever you have influence.
For further reading and resources from Aubrey and the hosts, see the episode’s show notes.
