Podcast Summary
Podcast: AI + a16z
Episode: How Should AI Be Regulated? Use vs. Development
Date: January 20, 2026
Host & Guests:
- Host (D)
- Martín Casado (A), General Partner, a16z
- Jai Ramaswamy (B), Chief Legal and Policy Officer
- Matt Perault (C), Head of AI Policy
Overview: Main Theme and Purpose
This episode tackles the question: Should AI regulation focus on how models are developed, or on how they are used? Drawing on decades of software governance debates, the discussion critiques proposals to regulate the underlying development of AI (such as FLOPS thresholds or bans on open source) and argues forcefully that effective, innovation-friendly policy must instead be rooted in regulating bad uses and concrete harms. The panel addresses the difficulty of pinning down a fast-moving technology, the chilling effect of regulatory uncertainty (especially on open source), and why evidence-based, technology-neutral laws are both possible and necessary to protect the public without stifling innovation or ceding leadership to rivals like China.
Key Discussion Points and Insights
1. The Regulatory Dilemma: Development vs. Use
- Development-based regulation: Efforts to control the creation of models (e.g., through FLOPS thresholds or licensing) have been proposed both in the US and internationally.
- Use-based regulation: The long-standing tradition in software governance of restricting how technologies are deployed and preventing harmful misuse.
“Historically we have typically regulated behaviors, human behaviors and bad behaviors, typically, as opposed to regulating invention, creation and the sort of development of things. … To go down [the development] path raises some real problems…”
— Jai Ramaswamy (B), [02:51]
- The panel argues that use-based regulation is a proven, flexible approach that neither stifles innovation nor quickly falls out of date.
2. The Moving Target of AI Definitions
- Policymakers currently struggle even to define “AI,” let alone anticipate its risks.
- Any law that encodes a technical definition is likely to be obsolete as technology rapidly evolves.
“There actually is no single definition for AI. And every one we've used now looks totally silly because it's evolved.”
— Martín Casado (A), [00:47], [45:10]
3. History Lessons: Encryption, Malware, and the Internet
- Malware Example: Creation is not a crime; malicious use is.
- Encryption Example: Early calls for backdoors ultimately would have killed both innovation and Internet commerce.
“The creation of malware itself isn't, in fact, a crime. What's a crime is the transmission of software to compromise other computers... It's very hard to distinguish, at the programming layer, good uses from bad uses.”
— Jai Ramaswamy (B), [02:51]
- Drawing parallels with the Internet’s rise, the panel notes that major harms and their remedies only became clear after deployment, not in advance.
"To claim that at the outset of the Internet you could have foreseen how social media would develop, be used and misused is kind of a fairy tale... It can only happen once the risks emerge and are known..."
— Jai Ramaswamy (B), [00:29], [20:22]
4. Open Source as an Innovation Engine (and Why It’s Under Threat)
- Open source is portrayed as essential to academic research, startups, and ultimately the industry’s long-term progress (as seen before with Linux and the Internet).
- Regulatory uncertainty is now discouraging US labs from releasing strong open models, creating a vacuum quickly filled by Chinese projects.
"Open source is always a critical part of the innovation ecosystem... This uncertainty in the regulatory environment is keeping US companies from releasing open source models that are strong. And as a result, the ... next generation ... are using Chinese models. And I think that's actually a very dangerous situation..."
— Martín Casado (A), [00:00], [30:32]
- China’s dominance in open source is both a commercial and geopolitical concern.
“If I'm a Chinese company that's providing open source models and I want to have an advantage, I just keep the largest model and give myself a six-month advantage with it before I release it again. And everybody's dependent now on you…”
— Martín Casado (A), [33:17]
- The chilling effect on open source also disproportionately hurts startups, handing incumbents a regulatory advantage.
5. The “Equilibrium” of Innovation and Risk
- The US’s historical equilibrium of regulating use (not core invention), balancing innovation and safety, is acknowledged as imperfect—but better than alternatives.
- The EU’s rapid, development-based regulation is cited as a cautionary tale.
"Look who was the first out of the box with AI regulations. It was Europe, right? They put something in place that has had a hugely detrimental impact... The EU just recently came out with a recognition that the AI framework is flawed and they need to walk it back...”
— Jai Ramaswamy (B), [26:52]
- The panel warns against changing this equilibrium without clear evidence of specific new marginal risks.
6. The Impact on Startups: Uncertainty = Death
- Startups struggle most under complex, ambiguous regulation, while incumbents can weather legal uncertainty with large compliance departments; VCs have already started pulling term sheets over legal risk.
- The result is less new company formation, innovation, and dynamism in the US industry.
"Uncertainty really is death…in startups and we see it all the time... So for example, in the last two weeks I actually had ... [a] VC pull a term sheet ... because they're just uncertain about the regulatory environment... There's kind of no aspect of a startup that isn't impacted by uncertainty in the regulatory environment.”
— Martín Casado (A), [39:31]
- Regulatory complexity grants massive advantage to large, established businesses.
7. Policy Prescription: Evidence-Based, Tech-Neutral, Narrow
- Identify specific gaps in existing law created by new AI risk (the “marginal risk”) rather than assuming the field is unregulated.
- Where new rules are needed, they should target uses and be “technology-neutral”—not tied to fleeting technical definitions.
“If you were to create a very specific LLM-focused model of abuse, in about a generation of models, it's going to be irrelevant because the definition won't apply to the world models that are now being produced... So, you should pass laws of general applicability that are technology neutral…”
— Jai Ramaswamy (B), [42:28]
Notable Quotes & Memorable Moments
- On How Policy Should Be Rooted in Use, Not Development:
"If we focus on development and we don't focus on use, you end up introducing tremendous loopholes because it requires you to describe the system that's being developed."
— Martín Casado (A), [45:10]
- On Historical Parallels:
“Has the encryption problem gone away? No, there’s still conflicts…But we figured out ways of navigating through this. Not perfect, but ways that…don’t hamper innovation, that foster, that don’t throw the internet out just because a lot of bad stuff happens on it.”
— Jai Ramaswamy (B), [22:09]
- On Regulatory Chilling Effect:
“We have had a different approach. The net in retrospect is we’ve given China a head start. … They are currently dominant.”
— Martín Casado (A), [27:49]
- On Marginal Risk Principle:
“You trust that the policy work you've done to date still applies…If you don't understand the marginal risk, you actually can't come up with effective policy.”
— Martín Casado (A), [08:47]
- On Equilibrium of Regulation:
“We've hit this equilibrium where we're balancing a lot of things like what good guys can do versus what bad guys can do, right? Like this is an equilibrium state. Innovation versus safety.”
— Martín Casado (A), [25:09]
Timestamps for Key Segments
- [00:00]–[01:12]: The danger posed by regulatory uncertainty, open source, and US competitiveness.
- [02:51]–[08:07]: Why regulating use, not software development, has served society well (malware, encryption, Internet analogies).
- [14:02]–[18:50]: Where to draw the line between use and development—why “marginal risk” and evidence are needed.
- [19:29]–[20:22]: Debating historical parallels with social media: Is “wait and see” a mistake?
- [26:52]–[28:29]: EU’s approach as a warning; how and why China took the lead in open source.
- [29:28]–[34:31]: How regulatory uncertainty impacts US open source, startups turning to Chinese models.
- [39:31]–[42:09]: Why regulatory ambiguity is especially deadly to startups and “little tech”.
- [42:28]–[45:10]: Policy recommendations: Use-based, tech-neutral law; identifying real gaps, focusing on actual harms.
Conclusion: Takeaways for Listeners
- Regulating how AI tools are used—not how they are researched or developed—best balances innovation and safety.
- Overly broad or hurried regulation risks stifling the innovation engine and ceding a strategic lead to rivals, notably China.
- The policy process should remain evidence-driven, responsive, and technology-neutral, adapting as real risks emerge.
- Startups and open source are essential to the AI ecosystem but are most exposed to uncertainty and complex rules.
- Past experience (Internet, malware, encryption) strongly counsels incremental, responsive, use-focused policy—not sweeping restrictions based on speculative future harms.
For those who haven’t listened:
This episode is a detailed, vigorous conversation about the profound trade-offs in AI regulation—with clear positions, vivid historical analogies, and plain English explanations aimed at both policymakers and industry insiders. It’s particularly valuable for understanding why evidence-based, use-focused, and technology-neutral regulation is both a pragmatic and deeply American approach to AI governance in a tumultuous global landscape.
