Transcript
A (0:00)
Open source is always a critical part of the innovation ecosystem because while it's not the number one business driver like the proprietary models are, it's what's used by hobbyists, it's what's used by academics, it's what's used by startups, and that tends to be the future. And so this uncertainty in the regulatory environment is keeping US companies from releasing open source models that are strong. And as a result, the next generation, the hobbyists and the academics, are using Chinese models. And I think that's actually a very dangerous situation for the United States to be in.
B (0:29)
To claim that at the outset of the Internet you could have foreseen how social media would develop, be used, and misused is kind of a fairy tale. That couldn't have happened back then. It can only happen once the risks emerge and are known, and then you can figure out what the bad things are that you want to regulate.
A (0:47)
If we focus on development and we don't focus on use, you end up introducing tremendous loopholes, because it requires you to describe the system that's being developed. And right now there actually is no single definition for AI. And every one we've used now looks totally silly because it's evolved. So if lawmakers actually want to have effective policy, the only area that you can actually specify is the use of these things.
C (1:12)
In this episode, a16z's Jai Ramaswamy, Chief Legal and Policy Officer, Matt Perault, Head of AI Policy, and Martin Casado, General Partner, take a first-principles look at AI regulation, arguing that if policymakers want an effective way of protecting people from AI-related harms, they should focus on targeting those harms directly rather than model development until AI's marginal risks are better understood. Drawing on decades of software governance debates, from encryption to cybersecurity, they explain why development-level rules are difficult to define, easy to loophole, and likely to become obsolete in a fast-moving field where even the definition of AI remains unstable. The conversation also examines how regulatory uncertainty is already shaping US competitiveness by chilling open source research, advantaging incumbents over startups, and pushing the next generation of builders towards Chinese open models, making the case for evidence-based, technology-neutral policy that protects against bad behavior without stifling innovation.
D (2:12)
This is a fun conversation for me because I get to ask Martin and Jay some questions about how you guys were thinking about AI policy before I joined the firm. So a couple of years ago the scene was really different than it is today. Sam Altman's testifying in Congress, Brad Smith at Microsoft is talking about things like licensing regimes for AI, an international regulatory agency that would regulate AI just like international nuclear regulatory agencies do. Jay, can you just start by telling us a little bit about how the firm reacted to that? Like, how did we put that in context in terms of what AI policy might look like and what we were concerned about?
