Scaling Laws: Renée DiResta and Alan Rozenshtein on the ‘Woke AI’ Executive Order
Release Date: August 1, 2025
Podcast: The Lawfare Podcast
Hosts: The Lawfare Institute | Kevin Frazier, Renée DiResta, Alan Rozenshtein
Introduction
In this episode of Scaling Laws, a sub-series of The Lawfare Podcast produced in collaboration with the University of Texas School of Law, host Kevin Frazier delves into the ramifications of President Trump's recent Executive Order (EO) targeting what is termed "Woke AI." Joined by experts Renée DiResta and Alan Rozenshtein, the discussion unpacks the EO's content, legality, and its potential impact on the AI industry.
Overview of the Executive Order
Structure and Content
Alan Rozenshtein begins by dissecting the structure of the EO, highlighting a stark contrast between its preamble and operative sections.
Alan Rozenshtein [04:21]: "It's like section one is a full-throated right-wing MAGA culture war statement about wokeness and DEI and the evils of transgender, and then sections two through five are much more normal, soberly written EOs."
Renée DiResta echoes this sentiment, noting the absence of definitions for key terms:
Renée DiResta [06:15]: "It doesn't define 'woke' either."
Core Provisions
The EO introduces two main principles for federally procured AI models:
- Truth-Seeking: AI systems must prioritize factual accuracy, scientific inquiry, and acknowledge uncertainty.
- Ideological Neutrality: Models should not manipulate responses to favor specific ideological dogmas.
Alan points out additional nuances, such as carve-outs for technical feasibility and national security, and a provision allowing developers to disclose system prompts as a means of achieving ideological neutrality.
Alan Rozenshtein [07:36]: "One way to satisfy ideological neutrality is to simply disclose to the user what your internal system prompt is."
Defining 'Woke AI' and Its Implications
Ambiguity of Terminology
The term "Woke AI" remains undefined within the EO, raising questions about its intended scope and enforcement mechanisms. This ambiguity suggests that the EO may have been partially crafted to appease a specific political base without committing to concrete definitions or requirements.
Impact on AI Development
The hosts discuss the practical implications for AI developers, emphasizing that the requirements are limited to federally procured models. This means that AI companies can continue offering various models to the public while only adhering to the EO's standards for government contracts.
Alan Rozenshtein [07:41]: "If you have WOKE GPT that... you are selling to the public, and then... what's important is the model offered to the federal government."
Renée further elaborates on the technical challenges of enforcing ideological neutrality at the base model level, citing the inherent complexities in steering large language models (LLMs) trained on vast and diverse datasets.
Renée DiResta [16:44]: "The more detailed and specific the outcome you're trying to get at, the harder it is to steer... it's much easier to do at the system prompt level."
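The point about system-prompt-level steering can be made concrete with a minimal sketch. In most deployed chat systems, the provider's instructions are simply a string of text placed ahead of the user's input in each request, so adjusting or disclosing them requires no retraining of the base model. All names and the message format below are illustrative assumptions, not any specific vendor's API.

```python
# Hypothetical sketch of system-prompt-level steering: the deployed
# assistant's behavior is shaped by instructions prepended to every
# conversation, separate from the base model's weights.

SYSTEM_PROMPT = (
    "You are a helpful assistant. Present contested topics neutrally, "
    "cite sources where possible, and acknowledge uncertainty."
)

def build_request(user_input: str) -> list[dict]:
    """Assemble the message list sent to the model. The system prompt is
    just text placed ahead of the user's input, which is why steering at
    this layer is far easier than steering the base model itself."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

def disclose_system_prompt() -> str:
    """The transparency remedy discussed in the episode: publish the
    exact string the provider prepends to user input."""
    return SYSTEM_PROMPT

messages = build_request("Summarize the arguments on both sides.")
```

Because the system prompt lives entirely at the request-assembly layer, the disclosure route the EO contemplates is technically trivial: the provider can publish the string without touching model weights.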
Legal Considerations and First Amendment Implications
Government Speech Doctrine
Alan explores the legal framework surrounding the EO, referencing the Rust v. Sullivan (1991) case to illustrate the government's ability to exercise viewpoint discretion in its own speech and procurement practices.
Alan Rozenshtein [20:23]: "The government is allowed to pick whatever model frankly it wants because the government is allowed to have a view of what is the most useful model for its own purposes."
Compelled Transparency
The discussion touches upon Zauderer v. Office of Disciplinary Counsel (1985), which permits compelled disclosure in commercial contexts when the requirement serves a reasonable government interest and the disclosed information is factual and uncontroversial.
Alan Rozenshtein [29:18]: "What I'm asking is like, whatever string of text is appended to my user input when it's sent to the model, I would like to know what that string of text is... on constitutional grounds that's probably kosher."
Renée adds that similar transparency measures already appear in the EU's AI regulations, which could inform, or diverge from, future U.S. legislative action on the issue.
Potential Industry Outcomes
Winners and Losers
The hosts speculate on which AI labs might benefit from the EO. While initially pointing to models like Grok from xAI as potential beneficiaries, they also note that the bureaucratic inertia of existing government contracts favors established providers like Microsoft Azure or Google Cloud.
Renée DiResta [48:12]: "I think right now it is still a pretty open field... if some agency is operating on the Microsoft Azure cloud or they have a Google Cloud account... it's just going to be a lot easier to extend that particular contract."
Future Legislative Trends
The conversation anticipates whether similar standards will emerge at the state level or influence broader federal legislation. Currently, there is skepticism about widespread adoption beyond federal procurement, given the lack of significant culture-war-driven AI policies at the state level.
Renée DiResta [32:34]: "I don't think we're going to see all 50 states replicate... there's not much in the way of culture war effacement in AI procurement at the state level."
Defining and Achieving Ideological Neutrality
Challenges in Neutrality
Alan emphasizes the philosophical and technical difficulties of achieving true ideological neutrality in AI models, arguing that complete neutrality is unattainable given the biases inherent in the training data.
Alan Rozenshtein [37:55]: "There's no such thing as a truly neutral worldview... these models are trained on humanity's corpus of output."
Proposed Solutions
Both hosts suggest that transparency in system prompts and user-facing disclosures are the most viable methods for addressing biases, rather than attempting to eliminate them entirely.
Renée DiResta [47:39]: "Having visibility and transparency into system prompts... is the best possible remedy."
Conclusion
The Scaling Laws episode provides a comprehensive analysis of the "Woke AI" Executive Order, highlighting its structural ambiguities, legal foundations, and practical implications for the AI industry. While the EO aims to promote truth-seeking and ideological neutrality in federally procured AI models, the experts caution against overestimating its enforceability and impact on the broader AI landscape. The conversation underscores the ongoing tension between government intervention in technology and the preservation of free speech and innovation within the private sector.
For more insights and discussions on national security, law, and policy intersecting with technology, visit the Lawfare Podcast. Support the show and gain access to exclusive content by becoming a material supporter at patreon.com/lawfare.
