The Monopoly Report: Episode 66
Title: AI Governance is NOT Optional
Host: Alan Chapell
Guest: Shoshana Rosenberg, Chief AI Governance and Privacy Officer, WSP USA
Release Date: February 18, 2026
Episode Overview
This episode examines the imperative of AI governance for the advertising and tech industries, focusing on the concept of digital agency as a fundamental human right. Alan Chapell and guest Shoshana Rosenberg discuss explainability in AI systems, the tension between privacy and transparency (especially around Privacy Enhancing Technologies, or PETs), and practical approaches for businesses implementing ethical AI. The discussion centers on moving beyond mere compliance to proactive ethical stewardship, anchored by Rosenberg's PRISM framework, a practitioner-focused toolkit for AI governance.
Key Discussion Points & Insights
1. What is AI Governance, and Why Is it Important?
[05:45]
- Shoshana Rosenberg: AI governance is about oversight and strategy: knowing how AI may quietly shift an organization away from its stated goals or documented practices. It is less about "checkbox compliance" and more about understanding AI’s actual impact.
- “It’s much more about an oversight and strategy component…so that you can see the problems before they arrive as liability.” — Shoshana [05:45]
- Alan Chapell: Effective business leaders anticipate how variables (like AI) will impact their outcomes, not just hope for the best.
2. Origin Story & AI in Ad Tech
[06:19]
- Shoshana’s background as a Navy engineer and international law attorney led her naturally to privacy, tech, and ultimately AI governance.
- She observed early on that privacy and machine learning (“not AI, that’s machine learning!”) were colliding, especially as machine learning systems began carrying privacy implications of their own.
- Alan: Ten years ago, “machine learning” could be used to obscure or gloss over privacy implications, but that’s less feasible today.
3. Digital Agency as a Fundamental Human Right
[08:19]
- Definition: Digital agency means being able to understand, accept, reject, or challenge the digital context you're placed in—knowing why an algorithm or system is making a decision about you.
- “Digital agency is about the ability to understand the context of what you are being given so you can decide to accept it, reject it or challenge it.” — Shoshana [08:34]
- Ad Tech Challenge: Black-box targeting undermines agency; users need transparency and control beyond “why am I seeing this?” dialogs.
- True explainability should outline what data is used and how, with options to opt out or adjust preferences.
4. Explainability & Its Layers
[10:44]
- Rosenberg advocates for explainability at several levels:
- User level: Intuitive information and controls.
- Client/B2B/Agency level: More technical detail.
- Full audit level: Deepest possible traceability.
- She explains why post-hoc explainability is inadequate: ad tech’s inferential logic is not robust enough for real accountability.
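As a purely illustrative sketch (not anything presented on the show), the three layers above can be thought of as successively richer views of the same decision record. Every field name below is a hypothetical assumption, not a real ad-tech schema:

```python
# Hypothetical sketch: one ad decision record, exposed at three access levels.
# All keys and values are illustrative assumptions.

def user_view(record):
    """User level: plain-language reasons plus intuitive controls."""
    return {
        "reasons": record["plain_reasons"],
        "controls": ["don't use this", "don't track this", "don't sell me this"],
    }

def client_view(record):
    """Client/B2B/agency level: adds data categories and model features used."""
    view = user_view(record)
    view["data_categories"] = record["data_categories"]
    view["features"] = record["features"]
    return view

def audit_view(record):
    """Full audit level: the complete record, for end-to-end traceability."""
    return dict(record)

record = {
    "plain_reasons": ["visited shoe stores", "lives within 10 miles of the store"],
    "data_categories": ["location", "browsing history"],
    "features": ["store_visits_30d", "distance_to_store_km"],
    "model_version": "v1.2",  # hypothetical audit-only detail
}
```

The point of the sketch is only that each layer is a projection of one authoritative record, so deeper disclosure never contradicts the simpler view shown to the user.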
5. Legal and Regulatory Gaps
[14:07]
- The EU’s AI Act and the GDPR gesture toward explainability but lack practical enforcement.
- “There are no teeth there, and there can’t be until you set a mandate for explainable by design, by such and such time.” — Shoshana [14:12]
- Alan and Shoshana agree no current jurisdiction effectively mandates real-time, enforceable explainability.
- Enforcement, tracking, and imposing requirements on big tech remain global challenges.
6. The Role of Government Procurement
[17:29]
- Even when governments reduce regulation, procurement requirements for federal contractors may drive explainability into mainstream business practices.
- Contractual obligations will ripple through vendors, subcontractors—potentially embedding governance standards even where regulatory oversight falls short.
7. Enforcement Challenges
[20:06]
- There's a chronic lack of resources in regulatory offices, making enforcement (and thus meaningful oversight) difficult.
8. Tension Between Privacy and Transparency
[21:37]
- Alan: Many PETs (privacy-enhancing technologies) reduce explainability by hiding the “how” and “why” of data use, sometimes creating more opacity than they resolve.
- “Privacy at the expense of transparency isn’t really helpful, particularly given the effectiveness of many of the PETs to actually address consumer concerns about privacy.” — Alan [21:37]
- Shoshana: If PETs undermine transparency and agency, they're counterproductive. Transparency and privacy are not enemies; done right, both serve digital agency.
9. Trust, Auditing, and Market Incentives
[23:16]
- Platforms often ask advertisers to “trust our numbers” post-privacy tech, but there’s no way to independently audit datasets or campaign outcomes.
- “The platform holds the cards and everyone else—both the end user and those counting on them—are at a disadvantage.” — Shoshana [23:19]
- There is no meaningful external certification, akin to “organic” labels, for PETs or data stewardship.
10. The Infrastructure Debate: Incremental Fixes vs. Overhaul
[27:40]
- Alan: The IAB and others are debating whether to incrementally improve (build on existing infrastructure) or start anew (scrap the broken foundations).
- Shoshana: Advocates a hybrid—fix on the fly, but set a date for explainability by design in all new systems.
- “You want to try to make a fixed date by which they have real incentives, both financial and otherwise, to get to a place where the tech can do what we need it to do.” [28:32]
11. Practical Guidance for 'Little Tech'
[30:31]
- Companies should start embedding explainability in procurement and vendor management, even if partners aren’t ready.
- Be able to clearly document your contributions and controls when leveraging pre-built models or embedding AI.
- “Be able to evidence what you have put in place, the controls you have around it, the ways in which you are auditing and reevaluating both the functional, the outputs and the way it’s working as best you can.” — Shoshana [30:31]
- Recognize you’re building on “wobbly technology”—AI is often predictively statistical, not logically deterministic.
12. PRISM Framework for AI Governance
[33:49]
- PRISM: Principles, Responsibility, Intelligence, Security, Monitoring.
- Designed for practitioners, not just compliance.
- Encourages holistic, adaptive governance that persists across organizational “angles” (roles/departments).
- Not just for writing policies, but for daily operational decisions, problem-solving, and documentation.
Notable Quotes & Memorable Moments
- “Explainability within ad tech might mean... when you look at why you’re seeing [an ad], it says you’ve been to shoe stores, you live within 10 miles of the store... and you have some controls where you can say, don't use this, don't track this, don't sell me this.” — Shoshana [09:34]
- “You have an inalienable right to freedom from interference by dragons, but we don’t have to enshrine it until there are dragons. And we're now at a point where agency is actually eroding faster... Both are on parallel tracks of complete dissolve.” — Shoshana [12:10]
- “Explainability by design by 2029.” — Shoshana [29:25]
Important Timestamps
- [04:53] – Shoshana defines AI governance and its significance to business strategy.
- [08:19] – Digital agency defined; shortcomings in current ad tech.
- [10:44] – The importance and layers of explainability in AI.
- [13:57] – Why explainability is unattainable post hoc (especially in ad tech).
- [14:07] – Legal frameworks: why existing statutes are not enough.
- [17:29] – The power of government procurement in driving governance.
- [21:37] – PETs, privacy, and transparency: finding balance.
- [23:16] – Trust and verification problems for advertisers.
- [28:32] – Shoshana’s “explainability by design by 2029” challenge.
- [30:31] – Advice for smaller tech companies entering the AI race.
- [33:49] – PRISM framework overview.
Takeaways for Listeners
- Digital Agency: It's becoming urgent to treat digital agency as a human right to ensure individuals can understand and control how AI influences their experiences—not just in theory, but in real ad tech contexts.
- Explainability: Transparent AI isn’t a “nice-to-have”; it’s essential infrastructure for responsible advertising and business fairness.
- Privacy vs. Transparency: PETs that “protect” privacy at the expense of verifiability may be creating new problems. Businesses and platforms need to pursue both.
- Practical Measures:
- Start asking hard questions of vendors now—even if explainability notices aren’t standard yet.
- Document your own AI practices, controls, and audits (“know what you put in your chili!”).
- Prepare for coming regulation by building real oversight into your organization.
- Regulation is Coming: Whether it’s government contracts, executive orders, or new laws, external pressure will soon require real AI governance.
- PRISM Framework: A practical, cross-disciplinary approach for organizations to move beyond checkboxes and towards genuine, ethical governance.
Resources Mentioned
- Book: Practical AI Governance by Shoshana Rosenberg (May 2026, Kogan Page).
- Shoshana’s initiatives: Women in AI Governance; Logical AI Governance.
- Chapell Regulatory Insider: Alan’s regulatory outlook resource.
- Show Notes: Will include links to the book, resources, and AI governance materials.
Final Words
“These aren’t just systems that target ads. Increasingly, these systems can manipulate perception, automate decision making at scale, and create feedback loops that amplify bias and misinformation... With AI, the stakes are even higher.” — Alan Chapell [wrap-up at ~38:00]
Call to Action: Don’t wait for perfect standards or for regulators to force your hand. Start demanding and building explainability, documentation, and controls into your AI systems now—those ahead of the curve will hold the future advantage.
For more: Subscribe to the Monopoly Report newsletter and podcast. Pick up Shoshana Rosenberg’s book in May 2026 for an applied roadmap to AI governance.
