Podcast Summary: The Lawfare Podcast – "Lawfare Daily: The Double Black Box: Ashley Deeks on National Security AI"
Release Date: July 9, 2025
Host: Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare
Guest: Ashley Deeks, Class of 1948 Professor of Scholarly Research in Law at the University of Virginia
Book Discussed: The Double Black Box by Ashley Deeks
1. Introduction to the Double Black Box Concept
[02:10]
Alan Rozenshtein introduces the episode by welcoming Ashley Deeks to discuss her new book, The Double Black Box. The central theme is the intersection of artificial intelligence (AI), national security, and democratic accountability. Deeks explores how opaque AI systems, when integrated into the secretive realm of national security, pose significant challenges to oversight and legal accountability.
2. High-Risk AI Applications in National Security
[02:25] – [04:58]
Deeks identifies high-risk AI tools in national security primarily as those capable of inflicting lethal harm or leading to the detention of individuals. She emphasizes concerns in the cyber domain, where AI-driven operations might not be directly lethal but could instigate conflicts that result in casualties. Deeks broadens the scope beyond lethal autonomous weapons systems (LAWS), highlighting AI's role in intelligence analysis, counterintelligence, and homeland security.
Notable Quote:
"I think of high-risk national security AI as primarily AI tools that can inflict lethal harm on people or lead to their detention."
— Ashley Deeks [03:33]
3. Case Study: AI-Enabled Cyber Tools
[04:58] – [06:15]
Rozenshtein asks why Deeks chose AI-enabled cyber tools as a case study in her book. Deeks responds that cyber tools represent a realistic starting point for early adoption of autonomous AI systems, both defensively and offensively. She argues that cyber operations can quickly push Congress out of the loop, undermining its constitutional role in overseeing the use of force.
Notable Quote:
"I thought that some of the things that I think would play out in a cyber autonomy context would show how quickly Congress could lose control of its role."
— Ashley Deeks [05:16]
4. The AI Black Box and Explainability Challenges
[07:13] – [26:39]
The discussion delves into the complexities of AI systems, focusing on the "AI black box": the inherent opacity in how advanced machine learning models make decisions. Deeks asserts that even with advancements in explainable AI, fully understanding AI decision-making remains elusive. She highlights the trade-off between an AI system's effectiveness and its explainability, suggesting that increased transparency may come at the cost of operational performance.
Notable Quotes:
"As we are trying to shift to Meta's super intelligent systems, it's going to be even harder to really produce quality, explainable AI."
— Ashley Deeks [07:13]
"The more you make a system explainable, the less effective it is. There’s some tension between wanting to improve its explainability and improving its capacity."
— Ashley Deeks [24:58]
5. The Double Black Box Explained
[12:14] – [19:01]
Deeks elaborates on the "Double Black Box" metaphor:
- National Security Black Box: The secrecy inherent in governmental national security operations. Decisions are made behind classified walls, limiting transparency and accountability.
- AI Black Box: Within the national security black box lie the AI systems themselves, which operate with their own opacity, making it difficult to discern how decisions are made even for insiders.
Notable Quote:
"Inside the national security black box, we are dropping a series of AI black boxes that are going to inform, advise, and operationalize national security activity."
— Ashley Deeks [17:54]
6. Institutional Oversight: Congress and Its Limitations
[28:50] – [36:03]
The conversation shifts to institutional oversight, with a focus on Congress's role in regulating national security AI. Deeks acknowledges Congress as the traditional counterbalance to executive power but expresses skepticism about its effectiveness in the AI era. She proposes that Congress could mandate presidential sign-offs on high-risk AI deployments and require notifications to maintain oversight. However, she remains cautious, especially considering the bipartisan push to outpace adversaries like China in AI development.
Notable Quote:
"If Congress regularly asked the actors who were coming in front of it to describe to them a particular high-risk use of national security AI, ... officials have an ex-ante incentive to make sure that their systems produce some form of explanation."
— Ashley Deeks [34:57]
7. The Role of Courts in AI Oversight
[36:03] – [42:25]
Rozenshtein raises concerns about the judiciary's capacity to oversee AI in national security, given judges' generalist backgrounds and the complexity of AI technologies. Deeks concedes that courts may have limited direct oversight but suggests that judicial demands for explainability in specific cases could indirectly push for greater transparency and accountability in AI systems.
Notable Quote:
"I don't put a lot of weight on or hope in the courts having a significant role here, I guess with maybe three minor caveats."
— Ashley Deeks [39:47]
8. Executive Branch Dynamics and Legal Oversight
[42:52] – [49:13]
The discussion turns to the internal dynamics of the executive branch. Deeks observes a potential power shift from lawyers to engineers as AI systems become more integral to national security operations. She emphasizes the need for lawyers to be involved early in the AI development process to ensure legal compliance and accountability, rather than merely addressing issues post-deployment.
Notable Quote:
"It would be far preferable, I think, to do it on the front end and just at a more macro level."
— Ashley Deeks [47:02]
9. International Cooperation and Analogies
[54:56] – [60:48]
Addressing the international dimension, Deeks critiques the analogy between AI regulation and nuclear non-proliferation, arguing the comparison fails because AI is a dual-use technology and compliance is far harder to verify. Instead, she likens AI governance to cyber norms, noting the modest progress made in establishing international agreements. Deeks underscores the difficulty of fostering trust among major powers like the U.S. and China, which hampers robust international AI regulation.
Notable Quote:
"Nuclear weapons are not dual-use systems. They are hard to make, built by governments, you can count [them]. AI [is] not that at all, sort of the opposite of that."
— Ashley Deeks [56:06]
10. Conclusion and Final Thoughts
[60:48] – [61:00]
Alan Rozenshtein wraps up the conversation by congratulating Ashley Deeks on her insightful and timely book, underscoring the importance of confronting the double black box problem in national security AI.
Notable Quote:
"Ashley, congrats on a really excellent and timely book and thanks for coming on the podcast to talk about it."
— Alan Rozenshtein [60:48]
Key Takeaways
- Double Black Box: Integrating AI into national security creates a dual layer of opacity—government operations remain secretive, and AI decision-making processes are inherently opaque.
- Oversight Challenges: Traditional oversight bodies like Congress and the courts face significant hurdles in regulating and understanding national security AI due to its complexity and politicization.
- Explainable AI: Efforts to make AI systems more explainable are ongoing but face trade-offs between transparency and operational effectiveness.
- International Regulation: Comparisons to nuclear non-proliferation are inadequate for AI; cyber norms offer a more fitting analogy, albeit one with limited success thus far.
- Institutional Dynamics: A potential shift in power from legal professionals to technologists within the executive branch necessitates early legal involvement in AI development.
Additional Resources:
- Book Mentioned: The Double Black Box by Ashley Deeks
- Podcast Series: Scaling Laws
- Lawfare Institute: www.lawfareblog.com
This summary encapsulates the core discussions from the Lawfare Podcast episode featuring Ashley Deeks, providing insights into the complexities of AI integration within national security and the attendant challenges for democratic accountability.
