The Lawfare Podcast
Episode: Lawfare Daily: A Right to Warn: Protecting AI Whistleblowers with Charlie Bullock
Release Date: June 25, 2025
Host: Alan Rozenshtein, Associate Professor at the University of Minnesota Law School and Research Director at Lawfare
Guest: Charlie Bullock, Senior Research Fellow at the Institute for Law and AI
Introduction
In this episode, Alan Rozenshtein engages in a comprehensive discussion with Charlie Bullock about the proposed AI Whistleblower Protection Act. The conversation delves into the necessity, scope, and potential impact of this bipartisan Senate proposal aimed at safeguarding employees who expose dangers within the artificial intelligence sector.
Defining Whistleblowing
Charlie Bullock (02:23):
"Essentially, I view whistleblowing as the act of an employee... reporting corporate wrongdoing, typically to the proper authorities."
Bullock emphasizes that whistleblowing isn't merely about exposing legal violations but also encompasses reporting substantial and specific dangers to public safety associated with AI development and deployment.
Current Legal Framework
The discussion highlights the fragmented nature of whistleblower protections in the United States, with a "patchwork of overlapping federal laws, state laws, and common law protections." Federally, there isn't a comprehensive whistleblower protection statute, leaving much to state legislatures.
Charlie Bullock (12:20):
"Federally, there is no overall background federal whistleblower protection law. Right. It’s mostly left up to the states."
California stands out with its robust protections under Section 1102.5 of the California Labor Code, which prohibits retaliation against employees who report violations of federal, state, or local laws to government agencies or within the company. Other states, like Illinois, also offer expansive whistleblower protections, whereas states like New York and many Republican-led states have minimal or no statutory protections.
The AI Whistleblower Protection Act: Overview and Provisions
Charlie Bullock (20:20):
"The law essentially covers three different kinds of reporting and protects them... violation of any federal law, reporting about substantial and specific dangers to public health, public safety, or national security, and reporting about AI security violations."
Key Provisions:
- Federal Law Violations: Protects individuals who report any breach of federal laws related to AI.
- Substantial and Specific Dangers: Extends protections to whistleblowers who identify significant threats to public health, safety, or national security, even if these dangers aren't yet codified into law.
- AI Security Violations: Safeguards those who report vulnerabilities in AI labs that could lead to the theft of model weights or algorithmic secrets.
Additionally, the Act renders non-disclosure agreements (NDAs) and arbitration clauses unenforceable concerning legitimate whistleblower activities, ensuring that employees can report without contractual repercussions.
Scope and Evaluation of the Act
Alan Rozenshtein (27:45):
"What are the trade-offs with how broadly you scope whistleblower protections? And... does this bill get the trade-off right?"
Bullock argues that the Act strikes an appropriate balance by requiring that reported dangers be both "substantial and specific." This requirement ensures that protections aren't exploited for vague or unfounded concerns. For instance, speculative fears about AI, such as models achieving sentience or societal disruptions like widespread addiction, may not meet this threshold. However, concrete threats, such as AI systems facilitating the creation of biological weapons, would be protected under the Act.
Charlie Bullock (29:07):
"I think it gets it right. My colleague... suggested the substantial and specific language because it maintains bipartisan support."
He acknowledges that while broader protections might benefit certain safety-minded individuals, the current wording fosters broad support without overreaching, ensuring the bill's practicality and acceptance across the political spectrum.
Political Landscape and Bipartisan Support
Charlie Bullock (38:55):
"Chuck Grassley... has been working on whistleblower stuff for a very long time... This is just his own person."
The Act enjoys bipartisan backing, spearheaded by Senator Chuck Grassley, a Republican with a longstanding commitment to whistleblower protections. Conservative concerns about Big Tech censorship, along with the debate over federal preemption of state AI regulations, have also propelled support from both sides of the aisle. Figures like Josh Hawley have endorsed the bill, aligning it with broader Republican initiatives to standardize AI regulations at the federal level.
Industry Reaction and Potential Impacts
Charlie Bullock (42:49):
"It doesn’t really benefit them in any concrete way. At most, it doesn't go far enough."
Interestingly, major AI companies haven't strongly opposed the Act. Bullock suggests this might be because the legislation doesn't impose significant burdens on them. Instead, it provides a clear framework for lawful disclosures without granting undue advantages to whistleblowers. The lack of industry resistance could indicate either acceptance of the Act's scope or a strategic calculation that it won't adversely affect their operations.
Alan Rozenshtein (35:48):
"You can imagine they might hire fewer people... but currently, it doesn't appear companies are super concerned about whistleblowing."
Bullock concurs, noting that while theoretical concerns exist about companies tightening hiring or compartmentalizing information, the present climate shows no signs of such reactions. The response from companies like OpenAI, which withdrew its broad non-disparagement provisions after they were publicized, supports this view.
Potential Downsides and Unintended Consequences
Alan Rozenshtein (34:36):
"If you have a very strong whistleblower protection regime, then perhaps companies will hire fewer people... or compartmentalize information, leading to less disclosure overall."
Bullock acknowledges these concerns but maintains that, in reality, the current fear of retaliation isn't sufficiently pervasive to warrant significant changes in hiring or operational practices. He posits that AI companies aren't overwhelmingly worried about whistleblowing, suggesting that the Act won't drastically alter their internal dynamics.
Future Implications and Conclusion
Charlie Bullock (32:41):
"This is a building block... information gathering has to be the first step in any sort of comprehensive AI governance regime."
Bullock views the AI Whistleblower Protection Act as foundational legislation that paves the way for more comprehensive AI governance. By enhancing the government's ability to gather critical information from within the industry, the Act sets the stage for future regulatory measures and safeguards against potential AI-related threats.
Alan Rozenshtein (44:08):
"Well, I think that's actually a good place to end it."
Both host and guest agree that, while the Act may not be exhaustive, it represents a significant stride towards addressing the unique challenges posed by AI advancements. Its bipartisan nature, coupled with thoughtful scoping, positions it as a pragmatic solution in the evolving landscape of AI governance.
Notable Quotes
- Charlie Bullock (03:04): "So the core example would be you're an employee at a company, you notice that your company is violating the law and you report that to law enforcement."
- Charlie Bullock (20:20): "The law essentially covers three different kinds of reporting and protects them."
- Charlie Bullock (29:07): "I think it gets it right... substantial and specific language... maintains bipartisan support."
- Charlie Bullock (32:41): "This is a building block... information gathering has to be the first step in any sort of comprehensive AI governance regime."
Conclusion
The AI Whistleblower Protection Act represents a crucial development in aligning legal protections with the rapid advancements in artificial intelligence. By addressing the gaps in existing whistleblower laws and tailoring protections to the unique challenges of the AI sector, the Act seeks to empower individuals to report significant dangers without fear of retaliation. While acknowledging potential limitations and areas for future enhancement, the bipartisan support and thoughtful design of the legislation underscore its importance in the broader context of AI governance and national security.
