The Lawfare Podcast: Lawfare Daily with Ben Brooks on the Rise of Open Source AI
Release Date: May 9, 2025
Introduction
In this episode of The Lawfare Podcast, host Kevin Frazier, the AI Innovation and Law Fellow at Texas Law, engages in a comprehensive discussion with Ben Brooks, a fellow at Harvard's Berkman Klein Center and former Head of Public Policy for Stability AI. The focal point of their conversation revolves around the burgeoning landscape of Open Source Artificial Intelligence (AI), exploring its definitions, benefits, controversies, and the intricate balance between innovation and national security.
Understanding Open Source AI
Kevin Frazier opens the conversation by raising the multifaceted nature of open source AI, which Ben Brooks addresses directly:
"Open source, like responsible AI or human-centered tech, is the sort of phrase that generally has positive connotations but is capable of nearly endless definition."
— Ben Brooks [03:05]
Ben Brooks elaborates on the complexity of defining open source AI, emphasizing the importance of publicly available model weights (parameters):
"...the weights, the distinctive settings or parameters for the model are publicly available, which means that a developer can come along, download those weights, integrate that model into their own system, modify the model and inspect the model."
— Ben Brooks [03:54]
From a policy and regulatory perspective, the accessibility of these weights allows for greater scrutiny and potential customization but also raises concerns about misuse and security vulnerabilities.
The Open vs. Closed Source Debate
The debate surrounding open source AI breaks down into three primary factions:
- Cautious researchers and civil society groups: advocate restraint in AI development and release, given models' unpredictable capabilities and potential for misuse.
- Accelerationists: favor rapid development and open dissemination of AI technologies to spur innovation and economic growth.
- The middle ground: entities that support AI development but advocate controlled, cautious release of AI models to mitigate risks.
Ben Brooks articulates the tension underlying these positions:
"...there is this underlying assumption that limiting or restricting access to models or restricting the capabilities of models is the primary and maybe the only effective mitigation against the worst risks."
— Ben Brooks [06:15]
National Security Implications
The conversation delves into the national security risks associated with open source AI models:
- Misuse by non-state actors: open access to AI models can enable malicious use, such as developing catastrophic weapons or executing large-scale cyber attacks.
"The risk of misuse, the risk of accidental or runaway behaviors..."
— Ben Brooks [12:17]
- Strategic competition with China: concerns that China could leverage open source AI to extend its technological dominance and create global dependencies.
"If we pull up the drawbridge... China will decouple and other jurisdictions will start to fill that vacuum."
— Ben Brooks [29:52]
- Online safety risks: the creation of deep fakes and deceptive content that can undermine democratic processes and public trust.
"One of the biggest concerns... is these sort of more quotidian online safety risks."
— Ben Brooks [12:17]
Key Moments in the Open Source AI Debate
- Meta's Llama release: Meta's open release of the Llama model sparked bipartisan criticism for potentially aiding adversaries.
"Senators Hawley and Blumenthal... saying what are you doing? We get that open source is important, but what you're doing is reckless."
— Ben Brooks [16:39]
- DeepSeek's advanced model: the release of DeepSeek R1 showcased significant AI advances achieved despite U.S. export controls, calling the effectiveness of those regulations into question.
"...with a marginal cost of $6 million... How much of these breakthroughs are going to come through your efficiency and through familiar techniques."
— Ben Brooks [20:20]
- OpenAI's policy shift: Sam Altman announced a pivot toward balancing open and closed models, recognizing the commoditization of AI models and the need to focus on the product and application layers.
"It's not open or closed, it's open and closed, and they both have a role to play."
— Ben Brooks [34:24]
Policy Recommendations and Future Directions
Ben Brooks offers a nuanced perspective on policy approaches to open source AI:
- Safe diffusion as a strategy: encourage the widespread, secure distribution of AI models to boost economic productivity and innovation without monopolizing control.
"Safe diffusion is how we're going to boost productivity, innovation and win in AI."
— Ben Brooks [44:40]
- Restrictions as a last resort: restrictions on AI should come only as a last resort, paired with transparency in model development and deployment.
"Restrictions on useful, capable intangible technology should be a last resort, not a first resort."
— Ben Brooks [44:40]
- Building government monitoring capabilities: government needs robust monitoring to address AI trends and risks proactively rather than reacting after deployment.
"We need a monitoring capability in government... US AI Safety Institute is so important."
— Ben Brooks [44:40]
- Avoiding reactive legislation: delayed policy responses can lead to restrictive measures that stifle open innovation and its economic benefits.
"The longer we take to just set up a baseline approach to transparency and monitoring, the greater the odds of reactive legislation."
— Ben Brooks [48:25]
Conclusion
The episode underscores the critical balance between fostering AI innovation through open source models and mitigating the associated national security risks. Ben Brooks advocates for a strategic approach that embraces open source for its economic and innovative benefits while implementing robust monitoring and selective restrictions to safeguard against misuse. The discussion highlights the evolving nature of AI policy and the imperative for collaborative efforts to navigate the complexities of open source AI in a competitive global landscape.
Notable Quotes:
- Ben Brooks [03:05]: "There is this fascinating and very important tribal war taking place around the definition of open source."
- Ben Brooks [06:15]: "We're talking about catastrophic risks of misuse, the risk of sort of accidental or runaway behaviors."
- Ben Brooks [12:17]: "You're transmitting sensitive data back and forth with two or three APIs for the rest of eternity."
- Ben Brooks [34:24]: "It's open and closed, and they both have a role to play."
- Ben Brooks [44:40]: "Safe diffusion is how we're going to boost productivity, innovation and win in AI."
This comprehensive discussion on The Lawfare Podcast provides invaluable insights into the rise of open source AI, its potential impacts on national security, and the policy frameworks necessary to harness its benefits while mitigating inherent risks.
