A
Hey, welcome back to POLITICO Tech. I'm your host, Steven Overly, and on this show, I break down tech politics and policy with the people shaping our digital future. Tech companies are shelling out billions and billions of dollars to make their AI ambitions a reality. So much money that some economists and even some industry players worry about a market bubble. Well, Miriam Vogel tells me that if they want to make their money back, companies need to do something they don't often agree to: new AI rules. Miriam is the president and CEO of a nonprofit called EqualAI, and she previously served as chair of the National AI Advisory Committee, a body that Congress created to guide the White House on AI policy. Now, Miriam has co-authored a new book out this week called Governing the Machine, which lays out steps for keeping AI under control without tightening the leash too much. On the show today, we delve into who should set the ground rules for AI, the connection between governance and trust, and why both will help companies cash in. Here's our conversation. Miriam, welcome to POLITICO Tech.
B
Great to be on with you, Steven. Thank you.
A
So I'm excited to dive into your new book. I do want to start, though: I talk to a lot of people in AI, and I'm curious, when it comes to AI and how it will shape our future, where do you put yourself on the spectrum from doomsayer to unbridled optimist?
B
Yeah, good question. I would say I am AI net positive. I'm very excited for where it can propel all of us in the very near future. But I am concerned, because we don't have enough of the guardrails in place. We don't have enough AI governance in companies using AI to ensure that they're using it safely. We don't have enough AI literacy: too many people don't know that they are using AI or how to use it safely, and so they are not using it. So I think if we can turn that around, then we'll be in a much stronger situation, and I will be AI optimistic.
A
And so you've written this book now, and the stack of books on my desk is growing higher and higher, with folks writing different takes on AI. Why did you decide to write this one? Why did you feel it was really needed?
B
Yeah, fair question. The last thing we needed is another AI book. But what I didn't see out there is a playbook on how to do governance. You know, it doesn't seem like the sexiest topic, but I really think it's exciting. It's the way that we build trust in innovation. We can put all the AI out there we want, but if people don't trust it, they won't use it. And as you're seeing, so many companies are confessing they're getting a low return on their investments. They are making significant investments, but they are not seeing the gains, because people are not using the technology. And people are not using it because we have not put the governance systems in place to confirm to them, to verify, that this is a system that deserves their trust. So this book tries to spell out: this is what leading companies are doing. If a company cares about AI governance, these are the best practices they are implementing. This is what smart policymakers are thinking about. And this is what you should do when you're using AI.
A
It's an interesting point you make, because I so often hear people say that if we put guardrails on AI, it's going to hamper innovation. And these companies are investing trillions of dollars. We're all competing in this AI race, and we don't want to do anything to potentially hinder the companies pursuing it. It sounds like you're saying, though, that guidelines or guardrails can be compatible with profit.
B
Not just compatible, they are the accelerant. If we have smart governance in place, people will trust and use AI systems. It's the same with any innovation. If we didn't trust that our cars had working brakes, we would not be using them. If we didn't trust that the airplane would land safely, there's no way we'd put ourselves and our families into them. By making sure you have smart, targeted guardrails in place, and governance that the general public is aware of, you invite them into using your innovation and trusting themselves and their families with it.
A
And so when it comes to governance of AI, how much should come from the government, kind of very top down, if you will, and how much should come from individual industries or enterprises really setting their own frameworks for what they should be doing?
B
I mean, you're absolutely right, there is a balance. I see governance as an umbrella, and a key part of that umbrella's coverage is the government's role. We need alignment on what the best practices are. We don't want every company figuring out for themselves what best practices are. We don't want every company figuring out for themselves what they're testing and how they're denoting what they've tested. We want alignment on the use cases and how they're being described, so that as AI crosses the many, many hands and users, there's certainty about what has been tested and when. But the majority of the work of governance has to happen within companies. So what we need is more companies aligning on those best practices. Again, this book tries to help pull back the curtain and share best practices, making sure that the AI deserves our trust because it has been governed. There's a survey of the ways that each company is using AI. There's accountability at the highest levels. There's both a carrot and a stick there. If you don't have accountability, governance will not work. But a 2025 McKinsey report shows that when you do have a CEO taking accountability for your AI governance, that's the highest indicator of return on your generative AI investment.
A
And so what are some of the steps that you recommend companies take? I mean, where do they start when it comes to adopting and using AI in a really responsible way?
B
Companies have to start by level setting on how they're using AI today and how they're planning to use it. Too many companies don't realize the ways that they are currently using AI across their systems. There was a really interesting Little study in 2024 where they asked leading global companies how many were using AI in their HR systems. About 82% of the chief HR officers reported that they were using AI in HR. Only 68% of the CEOs in those companies reported using AI in the HR systems, and only 48% of the general counsels. So you have to start off by level setting on where AI is and will be used in your company, so that you know how to address the risks and think about what the potential risks even are that you want to be mitigating. Next, you have to, as I said, have accountability at the highest levels. You need to make sure that everyone in your organization is clear on who in your C-suite has accountability for your AI governance. You want to make sure you have a very clear process in place, and you want to make sure that's communicated across the company. And the good news is, that is how you're building trust. When your employees know that you've been thoughtful about how you're using AI, they can trust you and the products and the services that you're offering, and that's then communicated to the customers. Your employees and your customers are your front lines. And so when you've done the work of putting this process in place, you can then have this conversation with them: we are following the best practices known with AI, and AI is a new innovation, so there will be new patterns and new challenges that arise. And so we want to enlist you as our partner. Notify us, tell us what is or is not working for you, because then you're inviting a conversation. Otherwise, when something goes awry, and because it's AI, it will happen, you'll instead find out about it because they're talking to you in the headlines, Steven, or because they're filing a lawsuit, where they will make sure that the courts are addressing the harms.
A
And what does accountability look like? This is a question that frankly I grapple with a lot, because you hear people say there needs to be accountability, and I think there's a question: does that look like enforcement? Does that come from the government? Is that self-imposed by industry? What does accountability mean to you?
B
You're right, there are many different layers of it. But the strongest predictor of success with AI, the best way to build trust in your company and your AI products, is ensuring that you have accountability in your C-suite. That someone in your C-suite is saying: I am responsible for the budget, I'm responsible for any mishaps, I'm responsible for making sure that everyone who uses our AI is benefiting from it and that it is safe for those who are using it. There's really no other way for the people across the organization to feel safe and comfortable bringing up challenges, talking about where there are going to be budget needs, or other ways that you need to demonstrate alignment. There are going to be discussions you'll need to have between your HR division, your general counsel, and your innovation teams. And so you're going to need that person at the top who is governing and deciding where you are going forward, and where there are necessary precautions you have to add before proceeding.
A
And does that kind of look different if you're a company that is making this AI versus just using it? Because I can imagine if I'm the CEO kind of deploying an AI system, I'd be a bit nervous in some ways about taking accountability for a system that I didn't develop and maybe don't even fully understand. There's a lot about AI that is still a mystery to people, sometimes even the people who make it.
B
Great question. No, there is. At the highest level, when we're talking about AI governance, it is the same playbook no matter if you're developing or deploying AI. It's really making sure that you have safeguards in place to know when and how you're using AI. What are the use cases for which you've developed or are deploying those AI systems? Who are the users? Because at the end of the day, whether you're developing or deploying the AI systems, you're going to want to know for whom this could fail. And whether you've developed it or deployed it, the courts will be the ones to find whether or not you are liable. So you're going to want to make sure that you're taking accountability, and that you know you're using it for the use cases and the users for which it was designed.
A
Is there anything you think the government, the US government in particular here, should be doing now that it isn't when it comes to governing AI?
B
One thing I'm very encouraged by is that we've seen executive orders on AI literacy and AI education. I think that is a critical missing gap right now in AI. I think so many people are afraid to use AI. You know, the Pew study says that 50% of Americans are more concerned than excited about AI, because they are not AI literate, because they don't know that they are already using AI. They know they're afraid, they don't know how to address those fears, and they're not quite sure of the opportunities that would make them want to use AI. We haven't clarified for them why it's worth taking on this challenge of learning how to use AI. So making sure that the general public is comfortable with AI use, and wants to use AI, is one of the best strategies the government can pursue, supporting it in our education systems and in our workforce, making sure people are AI ready. So I think that is a key step, and I hope we'll see more of that. I hope we're clarifying what it means to be a good upskilling program. I hope that more school districts will be straightforward about the fact that we're in an AI world now. We cannot put our heads in the sand and pretend that AI will not be used by students and teachers. I've seen some really smart districts, like in Kentucky, where they have talked about what the appropriate uses are for AI in the education system. They've had workshops for teachers to show them how and when to use AI, so that it's clear what is considered inappropriate, what is considered cheating for students, and what is considered a good use case that develops critical thinking and supports people in their AI use.
A
So do you think the government could be doing even more there, then, pushing even further on that AI literacy piece?
B
I would love to see more clarification from the government and from foundations on what the best mechanisms are to ensure that our general population is AI literate, and on what a good workforce program is. Yes, I think that would be really helpful. I think the other important place for government is what we saw, for instance, with the NIST risk management framework. People need to know the best practices for using and developing AI. There are so many different hands and pieces of the process, and so the more we can provide certainty on what good governance looks like, the better: aligning on definitions, on what the good uses are, on what safe practices are, on what an AI incident is. You know, for instance, we know what a cyber incident is. We need to have similar alignment on AI. What is an incident? What are best practices to address such incidents when they do arise? Who within the government should you be speaking with if such an incident arises, so that, first of all, they can share best practices, but also be aware of oncoming threats, to the extent that there are patterns, and take other precautions for national security and for protecting other companies from similar threats?
A
One of my questions is on frameworks, because as you mentioned, you know, there are frameworks, for instance around cybersecurity, that are pretty widely followed, and yet we do still see security breaches, right? There are privacy frameworks, and yet our data is still being harvested in a million ways. I guess a question I have is: is there a point at which, you know, a voluntary framework isn't enough, where we do need the government to step in with stronger rules here?
B
There are places for strong government, for sure. I mean, again, I think on alignment, on definitions, on expectations, that's a place where you want uniformity; you don't want every company figuring it out on their own. But I think an area where we're going to get a lot of those answers is not necessarily in the statehouses, it's in the courthouses. I think the courtrooms are increasingly seeing AI-based litigation. The AI Index report indicated a sixfold increase over a six-year period. I think we'll see that increase many times over in the next few years as lawyers begin to understand the harms coming from AI, as well as the deep pockets involved here. So in another part of our book, we have three chapters dedicated to laws and regulations currently on the books that companies should be aware of. We don't want them caught by surprise that there is a governance framework in place today. Whether it's tort law or contract law, it remains to be seen how the courts will determine liability in these situations, and you don't want to be caught off guard that these laws and regulations will be applicable to your AI use.
A
You're a lawyer by training, by background. I know you and I have spoken in the past about how you see a lot of current, existing law as applicable to AI; we just haven't necessarily seen it applied in actual cases that way. So it sounds like you're saying these next couple of years, as these cases play out, may in some ways give us the legal framework whose existence people are questioning today.
B
Absolutely. If a company or an organization is operating thinking that this is lawless, they're in for a very rude awakening. There are many laws on the books that address any innovation and all of our behavior, and they will likewise address our AI development and use.
A
You know, one of the key points in the book, and I know this is true of your work at EqualAI, and you've mentioned it already in the interview, is this idea of trust: that some of these rules and frameworks are just table stakes for establishing trust in the technology. So much of the tension, I think, around who controls AI seems to come down to trust. And I do think there are people who simply do not trust tech companies to put the good of the people over their own profits and really deploy this technology in ways that are people centered. And I wonder what you see as kind of the solution to that.
B
The solution is implementing good AI governance. Part of the motivation of this book is so that there can be certainty, within every organization and in the general public as well as among policymakers, about what good governance looks like. It should be a stamp of endorsement and a way to build trust. And when we can identify those best practices and the measures that are being used, then we know when they're not being used. So absolutely, innovations need good governance. Are people afraid? 100%. And I think we need to have an honest conversation about what those fears are. In our book, we provide nine different categories of risks because we want to have a meaningful conversation. And part of that has to be, first of all, letting people know: good news, you are using AI today. If you did not think you were, look at your newsfeed, look at your Netflix, use your GPS; you are using AI. Second of all: we understand, we hear you, we know you have fears, and that is rational. We're talking about innovation, and historically fear has been a human response to new technologies. But it's also a novel innovation we're talking about here with AI, where even the developers are telling us they don't fully understand this product. And so the fears are natural, and rather than avoiding or ignoring them, let's name them. Of these nine categories, hallucination is a very real concern. Privacy concerns should not be minimized. Workforce displacement. These are all real and meaningful. So we name them so that people can then think about mitigation strategies and get to the next key part of the conversation, and that is use. There is so much opportunity for every person, family, company, and community, and we want to make sure we're unlocking AI's potential for all of them with this meaningful two-part conversation about both risks and opportunity.
A
Well, Miriam, the book is called Governing the Machine. It is out now. Thank you so much for being here on POLITICO Tech.
B
Thanks so much for having me, Steven.
A
That's all for this week's POLITICO Tech. If you like POLITICO Tech, be sure to subscribe and recommend the show to a friend or colleague. And for more tech news, subscribe to our newsletters, Digital Future Daily and Morning Tech. Our producer is Nirmal Malikal. Pran Bandy made our theme music. I'm Steven Overly. See you back here next week.
Date: October 30, 2025
Host: Steven Overly
Guest: Miriam Vogel, President and CEO of EqualAI; former Chair, National AI Advisory Committee; co-author of "Governing the Machine"
In this episode of POLITICO Tech, host Steven Overly sits down with Miriam Vogel to discuss the vital—though often overlooked—role of AI governance in the technology's successful adoption and business return on investment (ROI). Drawing on insights from her new book, "Governing the Machine," Vogel makes the case that robust AI guardrails are not just compatible with innovation and profitability—they actually accelerate both. The conversation explores where the responsibility for AI governance should lie, practical steps for organizations, the government's role in literacy and regulation, and the necessity of earning public trust.
Government’s role: Align best practices, create certainty, and ensure uniformity.
Industry’s role: Implement governance day-to-day, ensure accountability.
"Majority of the work of governance has to happen within companies." (05:06)
Companies should start by understanding all their existing and planned uses of AI.
Assign clear C-suite accountability for AI governance—seen as the strongest indicator of business success with AI (05:56).
Open communication and feedback within and outside the company to build trust and address problems proactively.
"When your employees know that you've been thoughtful about how you're using AI, they can trust you and the products and services that you're offering..." (07:31)
True accountability means someone at the C-suite level is publicly and practically responsible for AI outcomes.
This fosters organizational transparency and supports collaboration across divisions.
"The strongest predictor of success with AI... is ensuring that you have accountability in your C suite." (09:00)
Whether developing or deploying AI, companies must understand their use cases and users, and put safeguards in place.
Legal responsibility exists regardless of a company’s role in creating AI systems.
"...At the highest level... it is the same playbook. And no matter if you're developing or deploying AI, it's really making sure that you have safeguards in place..." (10:23)
AI Literacy:
Government-led efforts on AI education are crucial to bridging trust gaps and unlocking AI’s benefits for more people.
Highlights successful models like Kentucky school districts actively teaching responsible AI use.
"Making sure the general public is comfortable with AI use... is one of the best strategies that government can do..." (12:04)
Regulatory frameworks:
NIST’s risk management framework cited as helpful, but more clarity needed on definitions (e.g., what constitutes an AI “incident”).
Calls for standardized frameworks and reporting mechanisms.
"People need to know the best practices... there are so many different hands and pieces of the process..." (13:47)
While universal regulatory frameworks are important, many of the toughest questions will be resolved through legal cases.
Highlights a dramatic increase in AI-related litigation—an evolving body of case law will shape what "governance" means in practice.
"I think the courtrooms are increasingly seeing AI based litigation... I think we'll see that increase many times over in the next few years..." (15:26)
Good governance frameworks are essential for both trust and safe adoption.
Vogel advocates for clear communication about risks; her book categorizes nine major kinds of AI risk (hallucination, privacy, workforce displacement, etc.).
A two-part conversation is needed: mitigation of risks and maximization of opportunities.
"The solution is implementing good governance... In our book, we provide nine different categories of risks because we want to have a meaningful conversation." (17:53)
Subscribe for future episodes and updates on technology, politics, and policy at POLITICO Tech.