
A
Hello and welcome to StateScoop's Priorities podcast. I'm Colin Wood, StateScoop's editor in chief. I recently interviewed Quinn Annex Reese, a senior policy analyst at the Center for Democracy and Technology, about legislation advancing in Idaho that would prohibit state agencies from procuring or even using AI models that might promote principles of diversity, equity or inclusion. But before we get into that, here's what's happening this week. Minnesota Governor Tim Walz on Monday named John Eichton, the state's deputy IT commissioner, as the new statewide chief information officer, replacing Tarek Tomes, whose official final day with the state was Sunday. Texas has named Tony Sauerhoff, who's been Texas's interim chief information officer since January, as the new statewide CIO. Sauerhoff, who most recently served as the state's chief artificial intelligence and innovation officer, replaces Amanda Crawford, who was appointed in January as the state's insurance commissioner. And Arizona Governor Katie Hobbs last week unveiled a plan designed to save her state as much as $100 million over three years, but without cutting essential services. In previewing the plan during her State of the State address in January, she said, "We will not decimate important services that families and businesses rely on." There's a piece of legislation in Idaho that would prohibit state agencies from procuring or using AI models that might promote principles of diversity, equity or inclusion. House Bill 687 has passed the House and is now up for consideration in the state Senate. I asked Quinn Annex Reese, a senior policy analyst at the Center for Democracy and Technology, about the bill, starting with some of the definitions it would rely upon to function effectively.
B
You know, I think the challenge, though, in this discussion is that a lot of the definitions we're going to be talking about are very slippery. They can be really broad, and they can be wielded in specific instances to target certain kinds of speech or certain kinds of viewpoints or certain kinds of tools. But there isn't necessarily an agreed upon definition of, in this instance, woke or what constitutes unbiased. And so it can create really uncertain terrain for the actors that are then tasked with targeting certain things that are deemed to be woke, or with implementing requirements in response to getting rid of, quote, unquote, woke AI models. That opens up both a lot of challenges around implementing things and a lot of vulnerabilities for these concepts or requirements to be weaponized against certain viewpoints more than others.
A
Right. And in this case, we're talking about Idaho House Bill 687, which has passed, and there's a companion piece of legislation in the state Senate. Can you summarize what's in that House bill and what we know about it?
B
Yeah. And this will get a little bit at how they're thinking about what might be targeted as woke in this context. So, like you said, this advanced in the Idaho House of Representatives on March 2, and if passed into law, the bill would require that all state agencies, and this is going to be really important for the conversation, so it's a requirement for all state agencies, can only procure and use large language models that adhere to the principles of truth seeking and ideological neutrality. Now, I think for a lot of people this might sound pretty nice and neat on the surface, but actually implementing this kind of bill is a really complicated legal, technical, and practical matter. So this is where we get to the matter of how people are thinking about what woke is in the bill text. In demanding ideological neutrality from procured models, the bill actually also calls out particular points of view, including diversity, equity, and inclusion, quote, unquote, transgender ideology, and, quote, unquote, intersectionality, as concepts that should generally be limited in AI systems. So this is really creating a fundamental tension, where on the one hand the bill is saying this is about advancing ideological neutrality in systems, while also calling out specific viewpoints as targets. I think we can assume, in the construct of the bill, that these viewpoints are what are being targeted as allegedly woke. And so I think really it comes down to what this will actually mean on the ground, which we should dig into in this conversation. But the other thing I want to add as important context here, when it comes to Idaho's effort in particular, is that Idaho's Office of Information Technology Services has published some guidelines and resources for state agencies on AI use.
This bill would mark really the first major effort by state lawmakers to put in place guidelines or restrictions on how state agencies use AI. So I think we should all ask: why is this actually the place to start from? And what will it mean for the state's efforts to modernize and adopt emerging technologies?
A
Right. Well, one thing that immediately occurred to me when looking at the bill text was that prioritizing truth seeking and ideological neutrality are, I think, at least on their face, great goals. But it also sounds to me like the legislators maybe aren't fully appreciating that these are things the people developing these models have already been trying to do. This bill doesn't seem to be targeting hallucinations, but the developers don't want things that aren't true, and they seem to want things that are neutral. Even if this did pass, I don't see how this would be enforceable technologically.
B
Yeah, you know, this gets at one of the core challenges with this kind of effort: there are fundamental technical limitations here, both in defining neutrality and in figuring out benchmarks by which you would evaluate it. So on the first one, there's a contradiction in the bill's own definition. Defining neutrality as excluding certain viewpoints makes it inherently political and not neutral. And secondly, there's no single definition that exists for ideological neutrality. Neutrality could mean present both sides, default to the center, or rotate perspectives. Each of these could be a technical fix you could pursue, but each has its own flaws and limitations. Presenting both sides could lend legitimacy to fringe views or those that don't align with clear social consensus. If AI really wanted to give both sides of everything, many kinds of conversations could be pretty ridiculous. So you could think about, like, what would a both sides response be to what happened at the Apollo 11 moon landing? That could be something like: the Apollo 11 moon landing was the first space flight to land humans on the moon and is widely regarded as a major feat in modern science; on the other hand, the entire thing could have been faked. Right? You would get that kind of ideological extreme if you wanted this kind of answer. And defaulting to a centrist position, on the other hand, actually still privileges that particular stance. So the first question is that, technically, finding a way to be neutral is actually a choice you have to make, and it isn't straightforward.
The second thing here is that there is no consensus in the field about what type of benchmark you should be using to actually measure neutrality in a system. In terms of implementation, that presents a practical problem for state agencies: you're being directed to do a compliance review of all vendors using different kinds of documentation, but you don't actually have a standard fieldwide benchmark to measure that against. How are you going to make procurement decisions?
A
Yeah, actually that's really interesting, because you just brought to mind a challenge that states are facing with accessibility right now, which should be about as clear cut as it gets. In web technology there are extensive standards and guidelines, but apparently even there, states are struggling to accurately score what vendors can do, because it's very subjective: a person needs to look at the technology, and there's no standardized scoring system, evidently. And even that is something much more concrete, it would seem, than something like a large language model, where the output is highly pliable, depending on your prompt. Trying to comply with such rules could stunt the capabilities of a system like this, because the whole idea of these systems, as far as I can tell, is that they sometimes try too hard to make the user happy, to give you the kind of output you want, which is its own problem. But if you're telling it to give you a non neutral answer, does it then just say, oh, I'm not allowed to do that? And as you said, what is a neutral answer? That's highly subjective.
B
Yeah. In this context, many states are already struggling to procure AI tools effectively. They don't have the right expertise in their procurement offices, and they don't have the right expertise in their program offices, to know how to evaluate claims made by vendors. So it's already a really challenging landscape, like what you're laying out in terms of accessibility. And layering this pretty ill defined requirement on top of all of that, at best, will just worsen and confuse a procurement process where agencies are trying to keep up, and at worst really injects opportunities for people to game the system, for corruption, for junk science to now become a fundamental part of a procurement process. The other thing that sticks out to me here is that the fiscal note for the bill actually indicates that they anticipate no additional cost for the implementation of this bill. I'm a little bit skeptical of that, because really, for this to be implemented, agencies, like we were talking about, have to do these quote, unquote, compliance reviews where they're accessing technical documentation provided by vendors. You're either going to need your current personnel to spend a lot of time working on this and figuring this stuff out, or to bring on additional technical expertise to fully evaluate vendor disclosures. Either way, this is going to be a burden for agencies to implement, especially because it's pretty unclear what these requirements are and how you would actually assess this in a system in the real world.
A
Now, your organization has put out articles looking at similar federal efforts around this. Do you think the Idaho bill is a really honest effort to do this, or is this more a case of what we sometimes see, states doing things to try to align themselves with the White House? Could you summarize some of what we've seen from the federal government and how you think that relates to the Idaho bill?
B
Yeah. So this follows a very nearly identical effort at the federal level that started last year. On July 23 of 2025, President Trump signed an executive order titled Preventing Woke AI in the Federal Government. That had many of the same objectives: it essentially laid out that federal agencies could only procure AI models that similarly adhered to the principles of ideological neutrality and truth seeking. This executive order was followed by subsequent guidance from the federal Office of Management and Budget that laid out requirements for federal agencies about what this meant for their procurement process, and, like Idaho's bill, it basically outlines the different kinds of documentation federal agencies need to get from vendors to evaluate whether or not their tools are actually neutral. But that guidance is still really high level, and it's leaving a lot up to individual federal agencies to figure out how to implement this stuff, which is concerning both for people that work on the technical aspects of AI, for all of the reasons we were just talking about, and for folks that work on government procurement, because of what it's going to mean for gumming up the works. Idaho's bill hews really closely to this effort. And so the worry is that many of those same challenges would port over to Idaho if it were to pass this into law, but you're then talking about a much more resource constrained environment of a state government that may not have the technical experts or procurement experts in house to figure out how to operationalize this in a coherent way. And so at both levels you risk losing opportunities for innovation within government because you just can't do the procurement, or scaring away vendors who feel like they can't adhere to these requirements, or opening yourself up to a lot of legal liability.
Because think about what happens if you have a solicitation for a tool, and a state, in Idaho's case, or a federal agency selects a specific vendor, and they tell the vendors they didn't select that it was because you didn't adhere to our unbiased requirement. It's a really broad, ill defined requirement. Vendors are going to be incentivized to contest those award decisions in court. And if there's not a clear technical rubric that the decision was made on, this could be a really big risk for agencies to make decisions based off of.
A
Yeah. Well, speaking of neutrality, my job is to be neutral, and I'm doing my best to give this the benefit of the doubt. But the whole thing seems like a large kind of opportunity cost. I think efforts could really be better spent on other things. But kind of in parallel to this, a couple of state AGs, including in Montana, have made other demands on LLMs. Could you explain what those have been?
B
Yeah. And so this is the part where it gets interesting: the Idaho bill and the federal effort are targeted at government agency uses, and that has interesting implications for how we think about the First Amendment impacts here. Like you said, we've seen over the last year attempts by state attorneys general to target specific companies for the outputs of their tools. We saw the Montana attorney general send a letter to Google suggesting that it may be in violation of state law for Gemini outputs that, quote, hedged answers about Palestinian violence against Israelis and refused to answer about whether Hamas was a terrorist organization. And also last year, we saw the Missouri attorney general send letters to Meta and others suggesting that it was a violation of state law for a chatbot to rank President Trump last in response to a prompt about ranking the last few presidents in order from best to worst. What's really important to understand is that this is happening against the backdrop of a lot of these efforts to target various AI companies based on specific viewpoints. Now, generally, courts have ruled that viewpoint based content restrictions by government actors are unconstitutional. And so if this state effort, for instance, had tried to broadly outlaw DEI friendly models and outputs for everybody, courts would almost certainly invalidate that as a restriction on First Amendment grounds. This does become a little bit different when we're talking about how they're creating limitations on vendors interacting with state agencies. Within a contract, a public agency can seek a variety of requirements from a vendor. The concern, though, becomes: is this weaponized to shape other forms of speech outside of the government contract? Does this have downstream effects that impact how other people using this tool, who aren't government agencies, can seek information?
And that's what has really troubling implications for access to information and speech for everybody.
A
Hmm. So if this ends up passing in Idaho, and, as you mentioned, the consequences of the federal action on this have been a bit fuzzy, do you have a clear picture of what this would mean for Idaho, and maybe as a precedent for other states?
B
That's really the open question, right? When you look at the bill, it's written so that it will go into effect immediately. There's no lead time given to state agencies to figure out how to implement things, or to let the Office of Information Technology Services provide additional guidance or work with agencies. They basically flip a switch and follow these requirements now. So given that accelerated timeline, and given that they're not saying they're going to appropriate any additional money for agencies to do this, what does that mean? Does that mean agencies will kind of shrug and say, I think what we're using is fine? Will a really conservative, lowercase c conservative, general counsel at an agency say, oh, this makes me afraid about any LLM we're using, stop all use indefinitely until we can do this kind of evaluation? So it could really be that it doesn't do anything, or it could really make it so that agencies have an extremely hard time doing very basic procurements with LLMs and basic use across an agency, if there's an ongoing internal struggle over what it means. Or, if vendors turn around and feel like they're being unfairly targeted under these new requirements, they could seek to sue a variety of state agencies because they're losing out on really valuable contracts. So if I worked in a state agency in Idaho, I would be really worried about what it could mean for just my use of LLMs in my daily practice, and about really slowing down the efficiency and effectiveness of state government. I think the only other thing I would add, to an earlier point you were bringing up, is that there are other efforts happening in states to think about how you make sure that government is being rigorous in how it evaluates the AI tools it buys, to make sure that they're accurate and performing well and that they're not harming constituents.
So there are models out there for thinking about what kinds of safeguards you're going to put in place and what kinds of evaluations you're going to require state agencies to do before procuring a tool. There are other roads state governments can go down if they're genuinely concerned about the efficacy of tools. It shouldn't be a zero sum game where either you're doing this kind of effort to target particular viewpoints in LLMs or you're doing nothing. I think there's a much more common sense middle ground that a variety of states across the political spectrum have chosen to go down, one that says, hey, let's make sure our office of technology has procurement guidance for state agencies, has standard contract language, and has risk requirements that have to be put in place for high risk use cases. That could be an alternative to this kind of approach, and at the very least it would make much more sense as a starting point than immediately jumping to something that's really ill defined and can be hard for state agencies to wrap their arms around.
A
That was Quinn Annex Reese with the Center for Democracy and Technology. That's it for this episode. The Priorities Podcast is a production of Scoop News Group in Washington, D.C. Production work is done by Adam Butler and Carlin Fisher. I'm Colin Wood. Thanks for listening.
Episode Title: Banning 'woke' AI in Idaho
Host: Colin Wood (StateScoop)
Guest: Quinn Annex Reese, Senior Policy Analyst, Center for Democracy and Technology
Date: March 18, 2026
This episode tackles recently advanced Idaho legislation—House Bill 687—that would prohibit state agencies from obtaining or using AI models deemed to promote principles of diversity, equity, or inclusion (DEI), “transgender ideology,” or “intersectionality.” Host Colin Wood and guest Quinn Annex Reese discuss the language of the bill, its technical and legal implications, and the broader political movement to regulate “woke” AI systems in government technology procurements.
Timestamp: 01:53–05:04
Slippery Definitions:
Reese highlights the ambiguity in the bill:
“A lot of the definitions we're going to be talking about are very slippery ... there isn’t necessarily an agreed upon definition of, in this instance, woke or what constitutes unbiased. And so it can create really uncertain terrain...” (01:53)
Contradictory Goals:
The bill ostensibly aims for “truth seeking” and “ideological neutrality”—which sound reasonable—but directly targets DEI, “transgender ideology,” and “intersectionality” for exclusion, contradicting its neutrality claim.
Precedent for State AI Governance:
This marks Idaho’s first legislative foray into AI guidelines for agencies, despite existing resources from the state Office of Information Technology Services.
Timestamp: 05:04–10:40
Enforcement Problems:
Wood questions enforceability:
“Even if this did pass, I don't see how this would be enforceable technologically.” (05:04)
Ideological Neutrality is Undefined:
Reese points out:
“Defining neutrality as excluding certain viewpoints makes it inherently political and not neutral. And secondly, there's no single definition that exists for ideological neutrality.” (05:51)
She further explains that any technical fix (presenting both sides, middle-ground answers, rotating perspectives, etc.) has fundamental flaws—e.g., “both sides” logic could legitimize fringe views.
No Benchmarks:
“There is no consensus in the field about what type of benchmark you should be using to actually measure neutrality in a system.” (05:51)
This creates barriers for honest AI procurement.
Comparison to Accessibility:
Wood notes state struggles in evaluating technology even with clear standards (e.g., web accessibility), intensifying skepticism about applying subjective neutrality requirements to AI (07:53).
Resource Burden:
“...you're either going to need your current personnel to spend a lot of time ... or bring on additional technical expertise... Either way, this is going to be a burden for agencies...” (09:13)
Reese also casts doubt on the bill’s fiscal note, which claims zero additional implementation cost.
Timestamp: 10:40–13:59
Federal Executive Order:
The Idaho bill closely mirrors a 2025 Trump Administration federal executive order—“Preventing Woke AI in the Federal Government”—which mandated ideological neutrality and truth-seeking for federal agency AI procurements.
Loose Federal Guidance:
Federal requirements remain vague, with agencies left to interpret implementation, raising concerns about transferability to resource-strapped state agencies.
Legal, Procurement, and Innovation Risks:
The potential for lawsuits over “unbiased” requirements is high; lack of clear technical rubrics could chill innovation or deter vendor participation.
Timestamp: 13:59–16:10
State AG Actions:
Some state attorneys general have threatened AI vendors over outputs they disapprove of politically—e.g., Montana AG criticizing Google for “hedged” answers, and Missouri AG objecting to chatbots ranking Trump last.
“Viewpoint based content restrictions by government actors are unconstitutional.” (13:59)
Government Contracts as a Loophole:
While agencies can set contract terms, overreach could affect speech or services for non-government users, threatening access to information and free expression.
Timestamp: 16:10–19:21
No Implementation Lead Time:
The bill would go into effect immediately, with no phase-in period and no added funds for agencies.
Scenarios of Noncompliance or Overcompliance:
“Will ... agencies ... shrug and say, I think what we're using is fine? Will ... general counsel ... say, oh, this makes me afraid about any LLM we're using. Stop all use indefinitely ...?” (16:31)
Procurement Paralysis:
Unclear rules could halt LLM adoption or spark vendor lawsuits.
Alternative Approaches:
Reese recommends more common-sense regulatory models:
“There are models ... to make sure ... AI tools ... are accurate ... and not harming our constituents ... It shouldn't be a zero-sum game ...” (18:33)
On Contradictory Neutrality:
“Defining neutrality as excluding certain viewpoints makes it inherently political and not neutral.”
—Quinn Annex Reese, 05:51
On Technical Impossibility:
“Presenting both sides could lend legitimacy to fringe views... like, what would a both sides response be to what happened at the Apollo 11 moon landing?”
—Quinn Annex Reese, 06:26
On Resource Strain:
“I’m a little bit skeptical of that because, really, for this to be implemented, agencies ... have to do these ... compliance reviews ... or bring on additional technical expertise ... Either way, this is going to be a burden...”
—Quinn Annex Reese, 09:13
On Legal Risks:
“Vendors are going to be incentivized to contest those award decisions in court. And if there's not a clear technical rubric... this could be a really big risk...”
—Quinn Annex Reese, 12:56
On Broader Impacts:
“If this ends up passing in Idaho ... it could really make it so that agencies have an extremely hard time doing basic procurements with LLMs...”
—Quinn Annex Reese, 16:31
This episode provides a thorough, critical look at the limitations, risks, and possible unintended consequences of banning “woke” AI in Idaho—offering insights for policymakers, IT practitioners, and anyone interested in the intersection of technology, law, and politics.