POLITICO Tech — "The business case for governing AI"
Date: October 30, 2025
Host: Steven Overly
Guest: Miriam Vogel, President and CEO of EqualAI; former Chair, National AI Advisory Committee; co-author of "Governing the Machine"
Episode Overview
In this episode of POLITICO Tech, host Steven Overly sits down with Miriam Vogel to discuss the vital—though often overlooked—role of AI governance in the technology's successful adoption and business return on investment (ROI). Drawing on insights from her new book, "Governing the Machine," Vogel makes the case that robust AI guardrails are not just compatible with innovation and profitability—they actually accelerate both. The conversation explores where the responsibility for AI governance should lie, practical steps for organizations, the government's role in literacy and regulation, and the necessity of earning public trust.
Key Discussion Points & Insights
Miriam Vogel’s Position on AI: Optimism with Caution
- AI Net Positive: Vogel describes herself as overall optimistic about AI, provided governance and literacy gaps are addressed.
- "I'm very excited for where it can propel all of us... But I am concerned because we don't have enough of the guardrails in place." (01:58)
The "Why" Behind "Governing the Machine"
- Filling a Gap: Vogel wrote her book to provide a practical playbook for AI governance, noting the absence of actionable frameworks in the current literature.
- "[The book is] a playbook on how to do governance... We can put all the AI out there we want, but if people don't trust it, they won't use it." (02:47)
Governance as a Business Imperative
- Guardrails = Profit: Vogel argues that effective governance is not an impediment but a key to realizing ROI on AI investments.
- "Not just compatible, they are the accelerant. If we have smart governance in place, people will trust and use AI systems." (04:12)
- Comparison with other industries: "If we didn't trust that our cars had working brakes, we would not be using them." (04:25)
Where Governance Should Come From
- Balance of Government and Industry: Vogel advocates a blended approach:
  - Government’s role: align best practices, create certainty, and ensure uniformity.
  - Industry’s role: implement day-to-day governance and ensure accountability.
  - "Majority of the work of governance has to happen within companies." (05:06)
Practical Steps for Responsible AI Use
- Internal Mapping & Accountability:
  - Companies should start by mapping all of their existing and planned uses of AI.
  - Example: a 2024 study on AI use in HR departments revealed gaps in organizational awareness (06:39).
  - Assign clear C-suite accountability for AI governance, which Vogel cites as the strongest indicator of business success with AI (05:56).
  - Maintain open communication and feedback channels, both inside and outside the company, to build trust and address problems proactively.
  - "When your employees know that you've been thoughtful about how you're using AI, they can trust you and the products and services that you're offering..." (07:31)
The Meaning of Accountability
- C-Suite Ownership:
  - True accountability means someone at the C-suite level is publicly and practically responsible for AI outcomes.
  - This fosters organizational transparency and supports collaboration across divisions.
  - "The strongest predictor of success with AI... is ensuring that you have accountability in your C suite." (09:00)
Applicability to Developers vs. Deployers
- Unified Standards:
  - Whether developing or deploying AI, companies must understand their use cases and users, and put safeguards in place.
  - Legal responsibility exists regardless of a company’s role in creating AI systems.
  - "...At the highest level... it is the same playbook. And no matter if you're developing or deploying AI, it's really making sure that you have safeguards in place..." (10:23)
Government’s Current & Needed Actions
- AI Literacy:
  - Government-led efforts on AI education are crucial to bridging trust gaps and unlocking AI’s benefits for more people.
  - Vogel highlights successful models like Kentucky school districts actively teaching responsible AI use.
  - "Making sure the general public is comfortable with AI use... is one of the best strategies that government can do..." (12:04)
- Regulatory Frameworks:
  - NIST’s AI Risk Management Framework is cited as helpful, but more clarity is needed on definitions (e.g., what constitutes an AI “incident”).
  - Vogel calls for standardized frameworks and reporting mechanisms.
  - "People need to know the best practices... there are so many different hands and pieces of the process..." (13:47)
The Role of Law and Courts
- Courts as Key Actors:
  - While universal regulatory frameworks are important, many of the toughest questions will be resolved through legal cases.
  - Vogel notes a dramatic increase in AI-related litigation; the evolving body of case law will shape what "governance" means in practice.
  - "I think the courtrooms are increasingly seeing AI based litigation... I think we'll see that increase many times over in the next few years..." (15:26)
Public Trust and the Two-Part Conversation
- Trust Requires Transparency and Acknowledging Risk:
  - Good governance frameworks are essential for both trust and safe adoption.
  - Vogel advocates clear communication about risks; her book categorizes nine major kinds of AI risk (hallucination, privacy, workforce displacement, etc.).
  - A two-part conversation is needed: mitigating risks and maximizing opportunities.
  - "The solution is implementing good governance... In our book, we provide nine different categories of risks because we want to have a meaningful conversation." (17:53)
Notable Quotes & Memorable Moments
- On governance and trust: "If we have smart governance in place, people will trust and use AI systems... that's how you invite them in to using your innovation and trusting themselves and their families with this innovation." (04:12-04:35)
- On C-suite accountability: "The strongest predictor of success with AI... is ensuring that you have accountability in your C suite." (09:00)
- On fear and trust: "Letting people know: good news, you are using AI today... Second of all, we understand, we hear you, we know you have fears and that is rational." (18:22-18:38)
- On legal liability: "Whether you've developed it or deployed it, the courts will be the ones to find whether or not you are liable." (10:23)
Key Timestamps
- 01:58 — Vogel sets her stance on AI optimism versus fear.
- 02:47 — Vogel explains the motivation for writing her book.
- 04:12 — Core argument: governance enables—not hinders—profit.
- 05:06 — The balance between governmental and corporate responsibility in governance.
- 06:39 — Where should companies begin? Self-assessment and accountability.
- 09:00 — What accountability looks like in practice.
- 10:23 — Legal liability makes no distinction between those who develop and those who deploy.
- 12:04 — AI literacy as a national priority.
- 13:47 — The case for better and more aligned frameworks from government (NIST example).
- 15:26 — When voluntary frameworks aren’t enough, courts may fill the gap.
- 17:53 — Building trust through honest, two-part conversations about risk and opportunity.
Takeaways for Listeners
- Strong AI governance is not only compatible with innovation and ROI—it’s essential for both.
- Trust is the currency of successful AI adoption.
- Both regulators and industry leaders must work together, but much of the day-to-day work happens within organizations.
- Building organizational awareness, accountability, and open communication is the necessary first step.
- The evolving role of law and the courts will serve as a critical check and guidepost for how AI governance will mature in coming years.
- Educating the public and workforce on AI is as important as technical innovation.
Recommended Reading
- "Governing the Machine" by Miriam Vogel
Subscribe for future episodes and updates on technology, politics, and policy at POLITICO Tech.
