Podcast Summary: The Tech Policy Press Podcast
Episode: Following DOGE, US States Pursue 'Efficiency' Initiatives
Date: September 28, 2025
Host: Justin Hendrix
Guests:
- Matty Dwyer, Policy Analyst, Center for Democracy and Technology
- Ben Green, Assistant Professor, University of Michigan School of Information
Episode Overview
In this episode, host Justin Hendrix discusses the aftermath of the federal Department of Government Efficiency (DOGE) and explores how its legacy is being adopted and adapted by state governments. The conversation centers on the proliferation of state-level "efficiency" initiatives that use artificial intelligence (AI) and broader data technologies, the motivations behind them, the risks and red flags observed so far, and the lessons to be drawn as public agencies continue to experiment with AI in an era of political change and rapid technological development.
Key Discussion Points
1. DOGE and Its Aftermath in State Governments (00:12 – 04:41)
- The federal DOGE, established during the early months of the second Trump administration, aimed to root out inefficiency, fraud, and waste in government, often via AI and data-driven tools.
- Despite DOGE fading from headlines after Elon Musk’s departure from government, its core concepts are now being pursued by at least 29 US states (red and blue).
- 16 states have codified state-level efficiency initiatives through legislation, executive orders, or programs; 13 others are considering similar moves.
- Of these, 11 states specifically reference the use of AI for government efficiency.
- Quote:
“Across all those 29 different state level efforts, we saw that 11 states really specifically addressed data and technology, including incorporating artificial intelligence into making government more efficient.”
— Matty Dwyer, [04:06]
Examples
- Wisconsin’s “Committee on Government Operations, Accountability and Transparency” (GOAT) specifically mandated exploring AI to improve government processes such as streamlining rulemaking or consolidating agencies. [05:30]
2. Concerns Over State-Level Imitation of DOGE (07:08 – 09:43)
- Ben Green expresses strong concerns that states are adopting DOGE’s language, and potentially tactics, not just its high-level aims. He highlights the dangers of AI and “efficiency” being used as pretexts for deep austerity, rapid cost-cutting, and undermining trust in government.
- There are questions about whether these state initiatives are good-faith attempts at reform or if they will replicate DOGE’s problematic approaches.
- Quote:
“The idea of AI and the idea of efficiency were a pretense for just an incredibly aggressive austerity program... The goal was to make cuts. The goal was to make headlines and statements about fraud that were not actually true.”
— Ben Green, [07:24]
3. Analytical Framework: Red Flags to Watch For (10:20 – 15:51)
Matty Dwyer walks through four main “red flags” for assessing government efficiency efforts, based on the fallout from the federal DOGE:
a. Lack of Transparency [10:40]
- Unclear roles, authority, and data access, which undermined public trust.
- Quote:
“We didn't know who was staffing DOGE at first, what was its role within the executive branch and within agencies, and what was it legally allowed to do, and then also what access... to government data?”
— Matty Dwyer, [11:01]
b. Privacy Protection Violations [11:54]
- DOGE’s wide data access sparked lawsuits alleging violation of the Privacy Act of 1974 and other safeguards.
- State efforts must abide by robust privacy and cybersecurity laws to maintain constituent trust.
c. Security Breaches [13:12]
- “Move fast and break things” culture led to several security incidents, including mishandling of sensitive personal data.
- Even lower-profile breaches (e.g., internal staff data leaks) point to inadequate security controls.
d. Weaponization of Data and Unproven AI Use [14:33]
- DOGE went beyond its mandate, breaking down data silos and using data to facilitate purposes, such as immigration enforcement, that were not part of its stated mission.
- States must avoid using AI for high-risk decisions without ensuring tools are suitable to the task and without robust testing or evidence.
4. Core Challenges of Using AI in Government (16:09 – 22:49)
Ben Green outlines three main challenges to effectively using AI for government reform:
1. Limits of AI’s Practical Utility [17:12]
- AI tools that perform well on benchmark tests (e.g., passing the bar exam) don’t automatically translate to usefulness in real-world tasks (e.g., acting as an actual lawyer).
- Quote:
“There’s a huge difference between an AI satisfying the bar exam and actually acting as a lawyer... The idea that you could replace people with AI is really, it’s really quite foolish.”
— Ben Green, [18:40]
2. Integration Into Government Workflows [19:13]
- AI must be contextually embedded in specific workflows; generic tools (like chatbots) often produce lackluster results akin to “a mediocre intern.”
- Example: After DOGE pushed chatbots onto agencies, the feedback was:
“It’s about as good as an intern. Generic and guessable answers.” [20:06]
3. Human-in-the-Loop Fallacy [20:50]
- Having a “human in the loop” is often cited as a safeguard, but people tend to defer to AI (automation bias), making genuine oversight difficult.
- “It’s not actually a reliable form of quality control.” [21:53]
5. Is AI Just an Excuse for Political Aims? (22:49 – 27:19)
- The discussion explores whether claims of AI-driven efficiency are actually ways of advancing austerity or political objectives, regardless of actual technological outcomes.
- Ben Green gives an example from the Department of Veterans Affairs, where code to cut contracts was written in a single day and implemented immediately: a clear sign of process breakdown and a lack of care for actual results.
- Quote:
“In a way, it feels like what the AI is doing is making the implementation of austerity more efficient. It is not making government more efficient.”
— Ben Green, [23:44]
- Matty Dwyer adds that states face significant pressure to use AI but often lack the capacity, training, or funding to implement it properly, raising the risk of poor outcomes.
6. The Role of Political and Hype Cycles (27:19 – 34:52)
- The current wave of AI deployment in government is shaped by tech hype and political cycles, compounded by commercial interests (e.g., companies creating government-specific AI sales divisions).
- If political leadership changes or enthusiasm for AI ebbs, it’s unclear if changes will stick—or if new reforms will address systemic issues.
- Ben Green draws parallels to prior tech hype waves like “Smart Cities,” where backlash led to confusion and stasis rather than thoughtful reform.
- Green and Dwyer advocate for a more “agnostic” approach to tech adoption—starting from the problem, not the tool, and focusing on genuine public benefit rather than efficiency for its own sake.
- Quote:
“AI is not always the logical solution to the issue that you’re trying to fix. And I think a lot of our work at CDT has been focused on encouraging governments to start from a problem statement place and really assess, like, are there also non-AI alternatives that could be better fit.”
— Matty Dwyer, [33:23]
7. What to Watch Next: Research and Policy Priorities (34:52 – 40:24)
- Transparency: Matty Dwyer sees a major research and practice priority in making governments’ AI deployments more transparent, particularly as use expands into critical domains like benefits decisions.
- AI Hype: Ben Green investigates how tech “hype” takes root and steers decision-makers, and studies how to shift governments away from adopting technology for its own sake.
- Beyond Efficiency: Green warns against a sole focus on “efficiency,” calling for a policy vision that values human dignity and the public good.
- Quote:
“There needs to be a path... presenting a vision of government that’s based on human dignity and human welfare and the broader public good... Once you’re in the efficiency game it’s really hard to push for anything other than putting guardrails on AI use.”
— Ben Green, [39:11]
Notable Quotes and Moments
- “The thought that more states now are trying to replicate this is incredibly concerning.”
— Ben Green, [08:22]
- “On its face, combating fraud, waste and abuse and... saving taxpayer dollars I think is a worthwhile effort, but if we don’t have these sort of guardrails in place... it makes it difficult to do that.”
— Matty Dwyer, [10:33]
- “The current moment, we're seeing an administration that largely is pro having no guardrails on AI and it's sort of rapid adoption.”
— Matty Dwyer, [35:46]
Timestamps for Key Segments
- 00:12 – 03:14: State-level uptake of DOGE-style efficiency efforts
- 04:41 – 07:08: Specifics of AI use in these initiatives
- 09:43 – 15:51: Analytical framework: red flags for state efforts
- 16:09 – 22:49: Ben Green: why AI is hard to use meaningfully in government
- 22:49 – 27:19: Political uses of AI and the efficiency pretext
- 27:19 – 34:52: AI hype, policy cycles, and challenge of evidence-based adoption
- 34:52 – 40:24: Research/advocacy priorities and advice for governments
Conclusion
The episode offers a critical and nuanced examination of the growth of AI-powered “efficiency” initiatives in state governments, highlighting both the political drivers and the deep practical risks. The guests urge careful consideration not only of how technology is adopted in public agencies, but also why, emphasizing transparency, empirical evidence, and a public interest–centered approach.
The host and guests encourage listeners to read more from:
- Matty Dwyer’s post: “DOGE-fying Government with Data and What States Can Learn from the Federal DOGE Fallout” (Center for Democracy & Technology)
- Ben Green’s writing at benzevgreen.com and Tech Policy Press
End of Summary
