
Who’s speaking up for startups in Washington, D.C.? In this episode, Matt Perault (Head of AI Policy, a16z) and Colin McCune (Head of Government Affairs, a16z) unpack the “Little Tech Agenda” for AI: why AI rules should regulate harmful use, not model development; how to keep open source open; the roles of the federal government vs. the states in regulating AI; and how the U.S. can compete globally without shutting out new founders.
A
There have been these big institutional players in D.C. in the state capitals for a very long time. There wasn't anyone who was actually advocating on behalf of the startups and entrepreneurs, the smaller builders in the space.
B
They're trying to build models that might compete with Microsoft or OpenAI or Meta or Google. For those companies, what are the regulatory frameworks that would actually work for them, as opposed to making that competition even more difficult than it already is? Regulate use, do not regulate development, somehow is interpreted as do not regulate.
A
I actually can't think of a single example across the portfolio in which we are arguing for zero regulation.
C
Who's speaking up for startups in Washington, D.C. today? I'm joined by Matt Perault, head of AI policy at a16z, and Colin McCune, head of government affairs at a16z, to talk about the Little Tech Agenda, a framework designed to ensure that regulation doesn't just work for the giants, but also for the five-person teams trying to build the next breakthrough. Their approach? Regulate harmful use, not development. We'll cover federal versus state rules, open source, export controls, and what smart preemption could look like. Let's get into it. Colin, Matt, welcome to the podcast.
B
Thanks so much.
A
Thanks for having us.
C
So there's a lot we want to get into around AI policy, but first I want us to take a step back and reflect a little bit. We publicly announced the Little Tech Agenda in July of last year. There's a lot that's happened since. Why don't we first take a step back, Colin, and talk about what is the Little Tech Agenda and how did it come to be at the firm?
A
Yeah, I mean, look, a ton of credit to Mark and Ben for having sort of the vision on this. I think certainly when I first started here, we started advocating on behalf of technology interests, technology policy. And I think what we realized was there have been these big institutional players that have been in D.C. and the state capitals for a very long time. Some of them have done a lot of really good work on behalf of the entire tech community. But there wasn't anyone who was actually advocating specifically on behalf of what we call little tech, which in my mind is the startups and entrepreneurs, the smaller builders in the space. And beyond that, what we realized was, well, they're not always 100% aligned with what's going on with the big tech folks. And that's not necessarily always a bad thing or a good thing, but I think that was the whole impetus of this. You know, how are we going to think about positioning ourselves in D.C. and the state capitals, in terms of our advocacy on these issues? And how do we differentiate ourselves from sort of the big tech folks, who come with their certain degrees of baggage.
B
Yeah, and the left and the right.
A
From the left and the right and the smallest of the small. So that was really sort of the basic impetus of this for me.
B
It was actually sort of almost a recruiting vehicle. So when it hit in July, I was not yet at the firm. I started in November. And when I first read the agenda, it sort of transformed the way that I looked at the rooms that I would sit in where there would be policy conversations, where all of a sudden you could see essentially an empty seat, and little tech's not there. There would be conversations where people would say, and in this proposal, we want to add this disclosure requirement, and then we'll have companies do a little bit more and a little bit more. And when you've read the little tech agenda, all of a sudden you start thinking, how is this going to work for all the people who aren't in the room? And so for me, the question, thinking about coming into this role in the firm, was: is this a voice, is this a part of the community I want to advocate for and think about? And when you start looking at the policy debate from the perspective of little tech, and you see how many of the conversations don't include a little tech perspective, from my point of view it was very compelling to think about how I can advocate for this part of the Internet ecosystem.
A
Right.
C
And, Colin, why don't you outline some of the pillars of the little tech agenda, or some of the things that we focus the most on, and maybe how it differentiates from sort of big tech more broadly?
A
Yeah, I mean. Well, I mean, just from a firm perspective. Right. Obviously, we're verticalized. You know, we all live and breathe this. And I think that that's been very, very competitive for us on the business side. But I also think it's very competitive on the policy side, too. Right. Obviously, Matt leads our AI vertical and is sort of our AI policy lead. We have a huge crypto effort. We have a major effort around American dynamism, and then there's sort of defense procurement reform, which is something that the United States has needed forever and ever. We have, you know, other colleagues who work on the bio and health team, and they're fighting on behalf of, you know, FDA reform, everything from PBMs. There's a whole vertical there that they're working on. We're working a lot on fintech related issues, and then, you know, just like classic tech related sort of Internet entrepreneurs coming up. What does that relate to? There's a lot of tax issues that come along with it. And then of course obviously there are the venture specific things that we have to deal with. But look, I try and think about this from a basic point of view, which is just like: if you're a small builder, what are the things that should differentiate you from someone who's a trillion dollar company with hundreds of thousands of employees? Right. If you're five people and you're in a garage. Yeah. How are you supposed to be able to comply with the same things that are built for thousand person compliance teams? Like, it's just not the same thing. Right. And like there are categories and categories that, you know, Matt and I are dealing with on a regular basis. But that's probably the main pillar, which is: five person versus trillion dollar company, not the same thing.
B
It's made my job actually really hard in certain ways since I started at the firm, because the kinds of partners that you want within our portfolio often don't exist there. Like, a lot of the companies don't have a general counsel, they don't have a head of policy, they don't have a head of communications. And so the kinds of people who typically sit at companies thinking all day about, like, what is this state doing in AI policy, what is this federal agency doing in terms of rulemaking, they're not at startups that are just a couple of people and engineers trying really hard to build products. Those companies face this incredibly daunting challenge. I mean, it seems so daunting for someone like me, like, non-technical, and I've never worked at a startup. They're trying to build models that might compete with Microsoft or OpenAI or Meta or Google, and that is unbelievably challenging. In AI you have to have data, you have to have compute. There's been a lot written about the cost of AI talent recently. It's incredibly, incredibly daunting. And so the question that Colin and I talk about all the time is: for those companies, what are the regulatory frameworks that would actually work for them, as opposed to making that competition even more difficult than it already is?
A
Yeah.
C
One of the principles I've heard you guys, you know, hammer home is we want a market that's competitive, where startups can compete. We don't want a monopoly, we don't want even oligopolies, you know, a cartel-like system. And that doesn't mean no regulation, because, as we've seen, that can be destabilizing too, but it means smart regulation that enables that competition in the first place.
B
Yeah, so I think one of the things that's been surprising to me to learn about venture is the time horizon that we operate in. So our funds are 10 year cycles. So we're not looking to spike an AI market tomorrow and have a good year, a good six months or a good two years. We're looking to create vibrant, healthy ecosystems that result in long run benefits for people and long run financial benefits for our investors and for us. And that means having a regulatory environment that facilitates healthy, good, safe products. I mean, if people have scammy, problematic experiences with AI products, if they think AI is bad for democracy, if they think it's corroding their communities, that's not in our financial incentive, that's not good for us. And so that really animates the kind of core component of the agenda, which is not trying to strip all regulation, but instead focusing on regulation that will actually protect people. And we think that there are ways to do that without making it harder for startups to compete.
A
Yeah. To Matt's good point, I walk into a lot of lawmaker offices. You know, it sounds like I'm pitching my book, but I genuinely say, like, our interests are aligned with the United States of America's interests, because the people that we're funding are on the cutting edge. They're the people who are going to build the companies that are going to drive the jobs, they are going to drive the national security components that we need, and they're also going to drive the economy. And, like, we want to see them build over a long time horizon. And, like, that is exactly how we should be building policy in the United States. Of course, like, half the offices I walk into are like, all right, great, get that guy out of here.
B
99.9% of people we talk to think that all we want is no regulation. And that's despite both of us, like, writing and speaking extensively about the importance of good governance for creating the kind of markets that we want to create. Colin can speak more to it in crypto. I've learned a lot from our crypto practice, because the idea there is you really need to separate good actors from bad actors and ensure that you take account of the differences. And it's true in AI as well. If we don't have safe AI tools, if there is absolutely no governance, that's not going to create a long run healthy ecosystem that's going to be good for us and good for people throughout the country.
A
I actually can't think of a single example across the portfolio in which we are arguing for zero regulation.
B
The core component of our AI policy framework, which was developed before my time, I wish I could take credit and I can't, is focused on regulating harmful use, not on regulating development. And that sentence, regulate use, do not regulate development, somehow is interpreted as do not regulate. And people just omit, for some reason, the part where we focus on regulating harmful use. And that, in our view, is robust and expansive and leaves lots of room for policymakers to take steps that we think are actually really effective in protecting people. So regulating use means regulating when people use AI to violate consumer protection law, or when they use AI in a way that violates civil rights law at the state and federal level, or violates state or federal criminal law. So there's an enormous amount of action there for lawmakers to seize on. And we really want that to be, like, an active component of the governance agenda that we're proposing. And for some reason it's all passed over and the focus is just on don't regulate development. I don't exactly understand why that ends up being the case.
A
Easy headline.
C
So there's been a lot that's happened in AI policy, and I want to get to it. But first, perhaps, Matt, you can trace the evolution a bit over the last few years. I believe there was a time where people were, like, pattern matching with social media regulation a bit. Why don't you trace some of the biggest inflection points, kind of the debates over the last few years, and we'll get to today, maybe.
A
Colin, I think we have to play a little bit of history, and I want to get to, you know, sort of a point that I think is the really critical point of what we're all facing here. For me, I would say from a policy and government affairs perspective, this conversation started early 2023. That was sort of the starting gun. It sort of puttered along and became more and more real over time. But in the fall of 2023, so almost exactly to the day two years ago, there was a series of Senate hearings in which, you know, some major CEOs from the AI space came and testified. And I think that the message that folks heard was, one, we need and want to be regulated, which I think remains true today. That's obviously, you know, what Matt and I are working on on a regular basis. But I think included in some of that testimony was a lot of speculation about the industry that absolutely jump started this whole huge wave of conversation around the rise of the Terminator. Yeah, you know, go hug your families, because we're all going to be dead in five years. And that spooked Capitol Hill. I mean, they absolutely freaked out about it. And look, rightfully so. You have these really important, powerful people who are building this really important, powerful thing, and they're coming in to tell you that, you know, everyone's going to die in five years. Right. That's a scary thing for people to hear. And, oh, by the way, we want to be regulated. That starting gun, I think, moved us at hyperspeed into this conversation around how do we lock this down, how do we regulate it, very, very quickly. I think that led to the Biden executive order, which, you know, we have publicly denounced in certain categories. That executive order led to a lot of the conversation that I think we're having in the states.
A lot of the, you know, sort of bad bills that we've seen come through the states. And I think it also led to a number of federal proposals that we've seen that have not been very well thought through either. And look, you know, I think people are kind of sitting around, and they're like, oh, well, you know, was it just some testimony from these CEOs that did this? And the answer to that is no, from my point of view. And look, you know, they deserve a lot of credit. I think the effective altruist community, for 10 years, backed by large sums of money, was very, very effective at influencing think tanks and nonprofit organizations in D.C. and the state capitals to sort of push us in a direction where people are very fearful about the technology. And that has significantly shaped the conversation that we're having throughout D.C. and the state capitals. And candidly, on a global stage: the EU acting, the EU AI Act, we're on record on that, there's a lot of very, very problematic provisions in there. All of this, under the banner of safetyism, came from this 10 year head start that these guys have had. So that's kind of a bit of the history. But sort of as an aside to this, I always just have to smirk or, you know, smile to try and laugh it off when people are writing these articles about the fact that the AI industry is, you know, pumping all this money into the system. Certainly, like, I'm not suggesting that there's not money in the system. We're obviously active on the political and policy side. We're, you know, we're not hiding that. But it is dwarfed by the amount of money that is being spent and has been spent over a 10 year window. And candidly, I mean, the reason that Matt and I have jobs is because we are playing catch up.
We are here to try and make sure that people understand what is actually going on in this conversation and be a counterforce to this group of people and this idea, this ideology, that has been here for a long period of time. So look, you know, that's kind of the briefer on this.
B
Yeah, I mean, and companies, I think, were ready to consider some policy frameworks that I think were probably going to be really challenging for the AI sector in the long run. And I think I understand why, because I was at Meta, then Facebook, starting in 2011 and through 2019. And after really 2016, there was aggressive criticism of tech companies, and the general framing was, like: you're not being responsible, and regulation needs to catch up. Your governance of social media is behind where the products are. And whatever you think about that, that was really the kind of strong view in the ecosystem, that the lack of governance has allowed problematic things to happen. And so I think when AI was starting to accelerate, and you had certain sort of prevailing political interests that I think were driving the conversation, companies rushed to the table. And I think it was a group of, what, five or seven companies who went into the White House and negotiated voluntary commitments. I mean, we don't even have to make the argument about the importance of representing little tech when you see that there is a set of companies who negotiated an arrangement for what it would look like to build AI at the frontier, with all current developers who weren't those companies and all future startups not represented at the table. I think that is why, like, we started to think about the value of having more dedicated support around AI policy, because clearly the views of little tech companies weren't represented in the conversation.
A
Yeah, well, I mean, let me just add one thing to this, and it's Mark and Ben's story. They've told it many times. I was in the meeting as well, you know, and, like, everything they've said has been 100% true and accurate. But there was a prevailing view by very, very powerful people in the previous administration that there were going to be only two or three major companies able to compete in the AI landscape. And because that was the case, they needed to be basically locked down and put in this incredibly restrictive position from a policy and regulatory perspective. And they were going to be kind of like this entity that was kind of like an arm of the government. And I think that that was the most alarming thing that I think we had heard from the administration, on top of an incredibly alarming series of events that happened on the crypto side, including sort of wanting to eradicate it off the face of the planet, it seemed like. So I think that that all led to kind of the position that we're in now, and certainly, like, Matt's hiring and, you know, like, us building out the team, et cetera.
B
So that narrative is clearly, like, a very alarming, maybe the most alarming version of this. But even since I've been in this role, I've heard other versions of it where people say, oh, don't worry about this framework, it just applies to three or five companies, or it just applies to five to seven companies. And I think they mean that to provide comfort to us, like, oh, this isn't going to cover a lot of startups. But the view of the AI market where there are only a small number of companies building at the frontier, that's not the vision for the market that we have. We want it to be competitive and diverse at the frontier. And the policy ideas that were coming out of the period that Colin's talking about were dramatically different from where they are today, in a way that I think, like, some people have even lost sight of exactly where we were a couple years ago. There were ideas being proposed by not just government, but industry, to require a license to build frontier AI tools. And for it to be regulated like.
A
Nuclear energy. Which would be historic for software development.
C
Yeah, right.
B
Unprecedented.
C
Yeah, yeah.
B
And for it to be regulated like nuclear energy, with, like, an international-level, nuclear-style regulatory regime to govern it. And we've moved: no matter what you think about the right level of governance, there are not a lot of people now who are saying what we need is a licensing regime where you literally apply for permission from the government to build the tool. But that wasn't that far in the rearview mirror.
A
Yeah. And look, we were also talking about bans on open source. And we're still kicking around that idea at the state level. And look, for us who live and breathe the tech stuff on a daily basis, this sounds insane, crazy. But, you know, just to make it a little bit more real. Right. Like, the nuclear policy in the United States has yielded two or three new nuclear power plants in a 50 year period since these organizations were started. And look, some people are pro-nuclear, some people are anti-nuclear. I don't want to get into that debate. The point, though, is that that was not the intended policy of the United States of America. That was the effect of putting together this agency, and what has come from that. And I think, you know, look, had we done the same thing in AI in that period of time, then you don't have the medical advancements, you don't have the breakthroughs, you don't have all of the things that come from this that are incredible. But beyond that, we lose to China, full stop. You lose to China, and then our greatest national security threat becomes the one who has the most powerful technology in the world. Right.
C
And I think the early concern on the open source side was that we would be somehow giving it to China. But then we've seen with DeepSeek, etc., that they just have it anyway.
A
Yeah, exactly. Right, exactly. You know, the idea that we could lock this down, I mean, Mark and Ben have talked about this. I think they've debunked that a number of times. Yeah.
C
Just to understand: for the previous administration, what was their calculus? Was it that they were true believers in the fears? Was it that there was some sort of political benefit to having the views that they had, especially on the crypto side? I don't understand what's the constituency for an anti-crypto stance. How do you make sense of sort of the players or the intentions or motivations, just to understand sort of the calculus there?
A
Yeah, you know, I mean, look, I think that that's a really hard one to answer. And I'm not sure I can pretend to be completely in their minds. I think there's a couple of different competing forces here. Like, one is, you know, what are the constituencies that support sort of that administration? What are the constituencies that support that side of the aisle? And I think that, especially over the last 10 to 15 years, there has been a very, very heavy focus on consumer safety, which is, look, a very important thing, and we're obviously in alignment on that. I think everyone should be in alignment: we have to protect consumers, have to be able to protect the American public. But I think that a lot of that conversation has been weaponized. I think that it is a big time money maker. I think a lot of these groups either get backing from very, very wealthy special interests, or they are small dollar fundraising off of quick hits like, you know, AI is coming for your jobs, donate $5 and, you know, we'll make sure that we take care of this in Washington for you. And, you know, it's a pretty easy manipulation tactic. You know, it's used by a bunch of people. But I think that that held very seriously true. Right. And I think, you know, the other thing here is, it's the old saying, personnel is policy. And I think a lot of the individuals that were in very senior decision making roles within that White House and that administration came from this sort of consumer protection background, where they've seen this, that was their constituency. They were put in this position to come after private enterprise. Like, you know, that was the goal.
There's this whole idea out there, I think, among some of those folks, that Senator Warren has, you know, proposed many times, which is, like, if you're not going after and getting people in the private sector on a regular basis, then you're not working hard enough. And I think that that is probably the second thing. And the third is just: we're at this very weird moment where being a builder and being in private enterprise is a bad thing to some policymakers. You know, you're not doing good because you're earning a profit. And, you know, they certainly won't say that, but the activities and the things that they're doing are 100% aligned with that type of idea. So I think that's the basic crux of it.
B
I think the things that motivated that approach were done in good faith. And I think it's what you alluded to earlier, which was, I don't share this view, but there are a lot of people who believe that social media is poorly regulated, and that because policymakers were asleep at the wheel, we woke up at some point, I don't know, sometime in the 2014-2018 period, and realized that we had technology that we thought was actually not good for our society. And whether or not you think that's true, I think that has been a widely held view. It's a held view on the right and on the left; it's a bipartisan view. And so I think when this new technology came on the scene, this was a do-over opportunity for policymakers. Right? Like, we can get this right when we didn't get the last thing right. And so I understand that motivation, it makes a lot of sense. The thing that we strongly feel is that the set of policy ideas that came out of that good faith belief were not the right policy ideas to either protect consumers or lead to a competitive AI market. Many of the politicians who were pushing concepts that would have really put a stranglehold, I think, on AI startups, and would have led to more monopolization of a market that already tends toward monopoly because of the high barriers to entry, those same politicians, three years before, had been talking about how problematic it was that there wasn't more competition in social media. And then all of a sudden they're behind a licensing regime, which, I don't think there's much economic evidence that licensing is pro-competitive. It typically is the opposite. The disagreement is less with the core feeling, like, we want to protect people from harmful uses of this technology, and more with the policy concepts that came out of that feeling, which we think would have been disruptive in a problematic way to the future of the AI market.
C
Anecdotally, it seemed from afar that some of the concerns early on were almost to match social media, like around disinformation or even DEI concerns. And then people were trying to sort of make sure the models were compatible with sort of the speech regime at the time. But then it kind of shifted to, oh wait, are there more existential concerns around jobs? Or is AI even like nukes, in the sense of people doing harm or AI itself doing harm? It seemed to escalate a bit, you know, maybe aligned with that testimony that you alluded to.
B
I experienced it as feeling like the goalposts always move. And one of the things that I started asking people, when I was really trying to settle into this regulate use, not development policy position, is: what do we miss? Like, if we regulate use primarily using existing law, what are the things that we miss? And I haven't gotten very many clear answers to that. Like, you can't do illegal things in the universe, and you also can't use AI to do illegal things. And typically when people list out the set of things that they're most concerned about with AI, they're typically things that are covered by existing law. Probably not exclusively, but primarily. And so that at least seems like a good starting point. Some of the other issues that I think are, like, understandably ones that we should be concerned about have a range of different considerations associated with them. Like, if you're concerned about misinformation, or, like, speech that you think might not be true or might be problematic, there are significant constraints on the government's ability to regulate that. The First Amendment imposes pretty stringent restrictions, and I think for very good reason, because you don't want the government to dictate the speech policies of private speech platforms, for the most part. And so those issues might be concerns, but they're not necessarily areas, I think, where you want the government to step in and take strong action. And so I think there are things that we should probably do as a society to try to address those issues, but government regulation maybe isn't the primary one. And again, in most of the things that people are most concerned about, like real use of the technology for clear, cognizable, real world harm, existing law typically covers it.
A
I have a theory on this. So I think everything that Matt just said is spot on. But, you know, then you're kind of sitting around, kind of scratching your head, like, okay, well, if use covers it, and there hasn't been a very fair rebuttal on why use is not enough in terms of focus on the policy and regulatory side, what's the answer? I think we're experiencing sort of this, I don't know if it's a phenomenon, but we're experiencing this pattern on the crypto side too, which is: we're having a very, very spirited debate on the crypto side of things on how to regulate these tokens, and how do you launch a token in the United States, as a security or as a commodity. And this is this age-old debate that's plagued traditional securities laws for years, but also certainly the crypto industry. But what we have found is there are a number of people who have entered this debate who are actually trying to get at the underlying securities laws. Like, they want to reform securities laws; they don't want to reform crypto laws that involve securities. And this is their only venue by which they can enter that conversation, because there's no will from the Congress or from policymakers to go and overhaul the securities laws right now. You know, it's just not there. But what is moving is crypto. So there are all these people that are now trying to enter this debate, and, like, oh, we should relook at this. And, like, well, this doesn't have anything to do with it, we shouldn't be entering this conversation. Yet they're still pushing, right? And that's kind of muddied the water. I think a very similar thing is actually happening on the AI side, which is, you know, there are a number of members of Congress that feel like, well, we missed it on the '96 Telecom Act, we didn't do well enough back then.
So we need to right the wrongs through the venue of an AI policy conversation. Right? Because if you think about it, assuming that use doesn't go far enough for someone, and this is the same conversation that we're having in California right now, or in Colorado right now, if use does not go far enough, okay, well then it would be really, really simple if you could have a privacy conversation around this, if you could have an online content moderation conversation, an algorithmic bias conversation around it. You could do all of that, wedge it through AI. And then, assuming AI is actually going to be the thing that we all think it's going to be, now you've put basically a regulatory funnel on the other side. Like, you've put a mesh screen where everything has to run through AI, and therefore it runs through this regulatory proposal you put together.
B
Yeah, the thing that I've really been wrestling with in the last few weeks is whether those kinds of regimes are actually helpful in addressing the harm that they purport to want to address. Colorado is a really good example. There are all these bills that have been introduced at the state level; Colorado's is the only one that's passed so far. It set up this regime where you basically have to decide: are you doing a high-risk use of AI or a low-risk use of AI? And this would be for startups that don't have a general counsel, don't have a head of policy, can't hire an outside law firm.
A
Figure it out.
B
High risk, low risk. And then if you're high risk, you have to do a bunch of stuff: usually impact assessments, sometimes audits of your technology to try to anticipate whether there's going to be bias in your model in some form. Maybe an impact assessment helps you figure that out a little bit, but it's probably not going to eliminate bias entirely, and it certainly isn't going to end racism in our society. Colorado's governor and attorney general have now put pressure on the legislature to roll back this law because they think it's going to be problematic for AI in Colorado, and so there was just a special session there to consider various alternatives. One of the alternatives introduced proposed codifying that the use of AI to violate Colorado's anti-discrimination statute is illegal. That's consistent with the regulate-harmful-use framing that we've talked about. Instead of this amorphous process where maybe you address bias in some form, maybe you don't, it goes straight at it; it's not a bank shot. If someone uses AI in a way that violates anti-discrimination law, that could be prosecuted; the attorney general could enforce. And I still don't understand why that approach is somehow less compelling than this complex administrative paperwork approach. I think the reason is the one Colin's describing, which is that people want a different bite at the apple of bias, I suppose. But it's not clear to me that that's actually the best way to effectuate the outcomes you want, as opposed to just criminalizing or creating civil penalties for the harm that you can see clearly.
A
In policymaking and bill writing, it's really, really easy to come up with bad ideas. It's easy because they're not well thought through: the first thing that comes to your head, someone publishes a paper on something, here we go. It takes real hard work to get something that actually works. And then it's even harder to go through a political and policy negotiation with a diverse set of stakeholders and actually land the plane on something.
B
Yeah, I think that's part of the reason people think that we are anti-governance. Because as we were ramping up our policy apparatus (Colin lived this history; I'm coming in late to it), these were the ideas in the ecosystem: licensing, nuclear-style regulation, FLOPS-threshold-based disclosures, really complicated transparency regimes, impact assessments, audits. Those are a bunch of ideas that we think are not going to help protect people and are going to make it really hard for low-resource startups. So we've been trying to say, no, no, no, don't do that, and that sounds like "deregulate." But for whatever reason, it's been hard so far to shift toward: here's another set of ideas that we think would be compelling in actually protecting people and creating stronger AI markets right now.
C
We don't see terrorists or criminals being aided 1000x by AI in performing terrorism or crime. When I ask people, what are you truly scared about, give me a concrete scenario, they'll say, oh, what about bioterrorism? Or what about cybersecurity theft? We seem very far away from that. Is there any amount of development in the next few years, any amount of breakthroughs, where you might say, oh, maybe use isn't enough? Or do we think that will always be the case?
B
I think it's conceivable, and I think we've been open about that. We think existing law is a good place to start; it's probably not where we end. Martin Casado, one of our general partners, wrote a great piece on marginal risk in AI, basically saying that when there's incremental additional risk, we should look for policy to address that risk. And the situation you're describing, I think, might be that. What you're getting at is a really important question about potential significant harms that we don't yet contemplate. We get asked often about our regulate-use-not-development framework: are you just saying that we should address issues after they occur? And I understand why that's a concern. There might be future harms, and wouldn't it be nice if we could prevent them in advance? But that is how our legal system is designed. And typically, when you talk to people about ways you could try to address potential criminal activity or other legal violations ex ante, before they occur, that's really scary to people. Like, Eric, what if we just learned a lot of information about you and then predicted the likelihood that you might do something unlawful in the future? And if we think it's exceeded a certain threshold, then we're going to take action against you before you've done it, so that we can prevent future crime. You're laughing because it's laughable. We don't want that kind of ex ante surveillance, both because it feels invasive and because it is often ineffective. We might run some test that shows you may be predisposed to some kind of criminal activity, but we don't know until you've done it that you're going to do it. So that kind of approach, again, is motivated by a really valid concern and a valid desire to prevent harm: what if we could prevent harm before it's occurred?
The challenge is that the regulatory framework probably won't do that. It probably won't have the effect of preventing harm, and there are all these costs associated with it, mainly, from our perspective, inhibiting startup activity.
C
Yeah. Marc once told me a joke on a podcast: a man goes to the government and says, I have this big problem. Now I get a lot of regulation; now I have two problems. Okay, let's talk about the state of AI policy today. There's a lot that's happened in the last few months with the moratorium and the action plan. What are some of the things that we're excited about right now? What are some of the things we're less excited about? Why don't we give a breakdown of.
A
Where we're at right now?
B
So given what Colin described about where things were a couple of years ago, it's great to see the federal government, certainly the executive branch but not just the executive branch (I think this holds in Congress, across both aisles), being supportive of frameworks that we think are much better for little tech: trying to identify areas where regulatory burden outweighs value, and where we can right-size regulation to make it easier for AI startups. As Colin said, support for open source: we were in a really different place on that a couple of years ago. Now there seems to be much more consensus, and actually it spanned the end of the last administration and the current administration, around the value of open source for competition and innovation. The National AI Action Plan also had great stuff in it about the balance between the federal government and state governments, which is something we've done a lot of thinking about. There's an important role for each, but we think the federal government should really lead regulation of the development of AI, and states should police harmful conduct within their borders. And I think there's stuff in the action plan that would try to ensure those respective roles. There's also a lot in the action plan that wasn't talked about much, that wasn't the headline-grabbing stuff, that I thought was incredibly compelling in terms of, again, trying to create a future for AI that just works better for more people. A really good example is the stuff on worker retraining, which focused on different programs that could help workers if they're displaced as a result of AI, as well as monitoring AI markets and labor markets to make sure we understand when there are significant labor disruptions. So I think it gets at a point you were alluding to a couple of minutes ago, about what happens when there's something really disruptive in the future.
Can you predict with certainty that there won't be this crazy disruptive thing? No, we can't. There might be significant labor disruption. Others at the firm have talked extensively about how there are always worries about labor disruption when a new technology is introduced, and typically there are increases in productivity that end up being good for labor overall. We think that's the direction of travel, but you never know; we can't predict it with certainty. And so I think it's a really strong step to just monitor labor markets to see what the disruption might look like, so that we're set up to take strong policy action in the future.
A
Can I just say one thing about the action plan? I don't want to juxtapose this with what we saw under the Biden administration: there was an incredible amount of activity in the Biden administration, and an incredible amount of activity under the Trump administration. But look, I view these executive orders and plans that come out of an administration as very, very important. Some of them have true policy: they direct the agencies to do things, to come out with rules and take on rulemakings and things like that. But the AI Action Plan, for me, was so significant because I think it turned the conversation on its head. Before, it was: we have to focus only on safety, with a splash of innovation. And now it is: we understand how important this is from a national security perspective, we understand how important this is from an economic perspective, and we need to make sure that we win while keeping people safe. That shift of rhetoric is incredibly important, because it signals to the rest of the world and to other governments that this is the position of the United States and will be for the next three and a half years. And it signals the position of the United States to Congress. So when Congress is looking at potentially taking up pieces of legislation, or taking actions, or even holding committee hearings, which for the broad base of what we're talking about are fairly insignificant, all of that is kept in mind. So now the conversation has shifted significantly, and that is really, really important.
C
Speaking of winning, Colin, I'm curious for our thoughts on policy vis-à-vis China, whether it's export controls or any other issues we care about.
A
Yeah, well, look, first and foremost, as we've talked about already, we have to win. That is the main thrust of a lot of what we're doing here, and a lot of the way we think about this from a firm perspective. First is making sure that the founders and the builders can build, with appropriate safeguards and an appropriate regulatory structure. The second is: how do we win and make sure that America is the place where AI is most functional and foundational, vis-à-vis China? There has been a long conversation about the diffusion rule that came out of the Biden administration, specifically on export controls. Many panned that proposal; a lot of people suggested it was probably too restrictive and wasn't the right way to think about things. We have spent most of our time, with Matt leading this effort, focused specifically on how the underlying models are regulated, and hopefully on the use of these models, rather than on the export control piece specifically. What I will say, though, is that some of the proposals were very concerning: some that came out of the Biden administration, some that we've seen at the state level, and some that we've seen at the congressional level, that dealt specifically with export controls on models themselves. And we're still having this conversation. There's a policy set that has been kicked around for a while called outbound investment policy, which is basically about how much US money from the private sector is flowing into Chinese companies. Very noble, laudable; we're super supportive of that concept. We are primarily an America-first organization here.
We're investing primarily in American companies and American founders, so we're very supportive of it. But when you edge into the idea that we might inadvertently ban US open source models from being exported, well, by definition of open source, there are no walls around these things. So that's one of the areas we've been very, very focused on. It's obviously very important to make sure we don't have these very powerful, US-made technologies in the hands of our Chinese counterparts and the CCP, being used against us. But I also think we need to make sure that we're not extending too far and limiting the ability of open source technologies to be the platform around the world. The final point I'd make here is that we ultimately and fundamentally have a decision to make as the US: do we want people using US products across the world, which helps for a whole bunch of different reasons, certainly for soft power from a national security perspective, or do we want people to use Chinese products? The more we lock down American products, the more the Chinese will enter those markets and take a land grab in that space.
C
Can you get into what happened with the moratorium and the fallout that.
A
Ensued? I think this one is a bit complicated. There was a perception about the moratorium when it came out that it would have prohibited all state law from existing for a 10-year window. Obviously that's a long period of time, and I'm not sure we would necessarily completely agree with that policy stance. But from our point of view, that's a misinterpretation, for a whole bunch of different reasons, of what the language actually said. Sometimes in D.C., a lot of times in D.C., perception is reality, and that perception took hold. I also think there were strong competing forces, like we've discussed, from the doomer crowd, or the safety crowd, who were very, very anti and who used all of the tentacles they've spread out over the last decade to try to move in and kill this. I think they were also successful in leveraging some other industries to come in and try to kill this thing. And look, by virtue of the underlying procedural vehicle it was moving in, this reconciliation package, it was a partisan exercise. It was going to be Republicans versus Democrats, and that was that. No AI policy, even a prominent one, dropped into a reconciliation package was ever going to drag Democratic votes over, because it was such a big Christmas-tree-style thing with all kinds of tax reform provisions, et cetera. And if you're in one of those situations, the margins on the votes become very, very small. So all it took was one or two Republican senators hitching their wagon to some of these ideas that were out there to tank this thing.
And look, I think that's a fight you're going to have in any political, policy, or legislative outcome, any issue that you're running within Congress. But more than anything, and we heard this repeatedly from a whole bunch of different people, and it's also what we experienced, the industry was just not organized well enough. And that's not just the industry; it's also the people who care about this thing who aren't actually industry stakeholders. The stakeholders who were pro some level of moratorium or some level of preemption were just not organized. That was both an eye-opening moment and an important moment, because in the three or four months since this thing went down, we've taken a long, hard look at what we need to do collectively, as a coalition, to be in a better position next time we're there. So what does that look like? First and foremost, it comes with writing, doing podcasts, talking about these things, talking about the details of what's actually in these proposals and what they actually mean for the states and the federal government, to make sure we're fighting through the FUD that's coming, because it's always going to be there. There's misrepresentation all over the field. The second piece is: let's all get on the same page, which I think we've worked very hard to do. And where we can find alignment, I think we've found it, between big, medium, and little. And the third, and probably the most important, is what we're doing on the political advocacy side to make sure we have the appropriate tools to push forward in a way that ensures America continues to lead and that we don't lose this race to China.
And that's part of the reason we recently announced our donation to the Leading the Future PAC, which will have several different entities underneath it and which I think is designed to be the political center of gravity in the space; it will fight at the federal level and at the state and local level. So we're happy to be a part of it, and we expect there will be others that join this common-cause fight on the AI side.
C
If we could wave a wand, what would we like to see done at the state level versus the federal level? How should we think about that interplay compared to where we're at now?
B
Yeah, so I think the helpful answer here comes from the Constitution, which actually lays out a role for the federal government and a role for state governments. The federal government takes the lead in interstate commerce, so governing a national AI market and governing AI development is, we think, primarily Congress's role. Sometimes when people say that, what other people hear, for some reason, is that states should do nothing. We have tried very hard to be deliberate in not saying that, and to make clear that states have an incredibly important role to play in policing harmful conduct within their jurisdictions. Criminal law is a perfect example. There is some criminal law at the federal level, but the bulk of criminal law is at the state level. When you think about routine crimes, if you are going to prosecute a perpetrator, it's likely that that would occur under state law. So to the extent we want to account for local activity where there's criminal conduct involved, and we want to make sure the laws are robust enough to protect people from that activity, that's going to be primarily state law. As Colin is describing, this isn't the delineation we started out with. A lot of state laws have taken the approach, sometimes explicitly, of saying: Congress hasn't acted, so we have a responsibility to act. And that's true to some extent; states can act within their constitutional lane. But some of what states have done has gone outside that lane. And so just this week we released a post on potential dormant commerce clause concerns associated with state laws. The basic idea there is that there's a constitutional test that says states cannot excessively burden out-of-state commerce when that burden greatly exceeds the in-state local benefits.
And so courts actually weigh that; there's a balancing test: do the costs to out-of-state activity significantly outweigh the benefits on the local side? And we think that, at least for some of the proposals that have been introduced, the benefits are likely somewhat diminished relative to what the proponents think they are, and the costs are significant. The cost to a developer in Washington State of complying with a law in California or a law in New York is going to be significant. Our hope, I think, is not that the dormant commerce clause makes it hard for states to enact laws, but that it serves as a guidepost for states around the kinds of laws they might introduce. And I think it pushes in a direction that's consistent with our agenda, which is to take an active role in legislating and enforcing laws that are focused on harmful use.
C
Looking in the next six months to a year, what are the issues that we're most focused on or that we're thinking about are going to be playing a role in the conversation?
A
Yeah, I think it's first and foremost some level of federal preemption. And I want to be very specific about this. Again, to Matt's point, we're not talking about preempting all state law. We're talking about making sure we have a federal framework specifically for model regulation and, hopefully, for how the models can be used. I think that's going to be critical, because, just like any other technology, AI can't live under a 50-state patchwork; no technology can. And that's been the biggest issue we've been fighting over the last year and a half or so. Beyond that, there are some other policy sets that I think will be handled: workforce training, some AI literacy things that should be coming up, and obviously a huge, robust conversation around data centers and energy that I think will be really, really important. But above all, I think most of our time and energy will be focused on trying to get some level of federal standard here, to draw the dividing line between the federal and state governments, which Matt has already done a ton of great work on.
B
Yeah, I think this is just a super exciting policy moment for AI. Over the last couple of years, a bunch of ideas were proposed that, for the reasons we've discussed, we think fall short, both in terms of protecting consumers and in terms of ensuring a robust startup ecosystem. Most of those laws have actually not succeeded in passing. A number of laws were introduced at the state level in this past year's legislative sessions that we thought had a strong likelihood of passing, and I think to date none of them have passed. Colin has also been building out the expertise, skill set, and capacity on his team. We just hired Kevin McKinley to lead our work in state policy, and I think he will help us take a real affirmative position in the legislative sessions ahead on what AI policy that's good for startups might actually look like. So instead of being in the position of saying no, because we started late and kind of with one hand behind our back, I think we're in a position to articulate in advance a proactive agenda in AI that's compelling. Colin hit the main parts of it: ensuring proper roles for the federal and state governments, and focusing on regulating harmful use, not development. And there are specific things you can do there: increasing capacity in enforcement agencies, making clear that AI is not a defense to claims brought under existing criminal or civil law, and technical training for government officials to make sure they can identify and prosecute cases where AI is used in a harmful way. And then all this infrastructure and talent stuff that Colin is describing: worker retraining, AI literacy.
We've also given some thought to an idea that has been articulated by a number of lawmakers and was in the National AI Action Plan: creating a central resource, housed in the federal government (you could do it in state governments as well), that lowers some of the barriers to entry for startups, like compute costs and data access. We think that's really compelling in terms of ensuring that startups can compete. And that idea, like many of these, is bipartisan: it's been supported by the current administration, and it was supported by leading Democrats over the last couple of years. So that's the kind of thing we hope will get some traction in policy circles when we have the room and the position to really advocate for an affirmative agenda.
A
We are not always in 100% alignment with other people in the industry, and that's big, medium, and little across the board. There are also consumer advocacy groups that obviously feel differently about these things. But I think for the most part the industry is generally aligned on some level of a federal standard here, and on understanding that the thing that won't work, again, is a 50-state patchwork. That's super, super important, because for the first time you actually have that alignment. And if you have that alignment, that's the kind of momentum you can use to actually push things over the finish line and get something done. And look, the Trump administration, to their credit, has also been incredibly supportive of this idea.
B
That's an incredibly important point. One criticism, usually raised in a sort of implicit way, is: hey, you're the little guys, but often you align with the big guys, so aren't you just in favor of a deregulatory agenda that works for big tech? One of the things I think is really extraordinary about the little tech agenda is that it's really nonpartisan, and it doesn't take a position on big versus little. It basically says: here's the agenda, and when you agree with us, we'll support you, and when you disagree with us, we'll oppose you. That's not party line, and it's not big versus little. What we saw in the initial phase of the recent set of AI policy that Colin was referring to was divergence between big and little over the licensing regime: the bigs were pushing it, little was concerned about it. Then there was a period of convergence. If you look at the National AI Action Plan comments across a range of different providers, as Colin's saying, a lot of them had some core similarities. Lots of large companies have advocated for federal preemption. We don't oppose that just because big companies are advocating for it; we think it's good for startups. But I think it's possible we're now in a period of some divergence; I'm curious, and Colin really understands how the political chips will fall in a way that I don't. One thing we hear repeatedly, which is sort of funny, is that people will bring us stuff and say: industry agrees with this, so we expect you to agree. You can't disagree; the industry's already agreed. And we say: the big parts of the industry have agreed, and sometimes we agree with them, but sometimes we have different views.
And so when we disagree, it's not because we're trying to blow up a policy process or make it difficult for lawmakers who are trying to move something forward. It's because we're looking at it through this particular lens. I hope it's not the case, but I think there might be more fracturing in the months ahead.
A
Yeah, I agree with you on that. And by "people," he means lawmakers, just to be specific.
C
Yes, that's a great place to wrap. Colin, Matt, thanks so much for coming on the podcast.
A
Thanks very much.
C
Thanks for listening to the a16z podcast. If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com/a16z. We've got more great conversations coming your way. See you next time. As a reminder, the content here is for informational purposes only; it should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and it is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.
This episode dives into the “Little Tech Agenda” for AI—a policy framework championed by a16z to ensure that AI regulation doesn't only support the giants (like Microsoft or Google) but also enables startups and small builders to innovate and compete. The conversation unpacks recent developments in AI policy, the origins and principles of the Little Tech Agenda, and the evolving roles of federal and state governments in regulating AI, including export controls, preemption, and open source. The focus is on practical, outcome-oriented policy that balances innovation with safety and competition, particularly for startups.
On regulatory overreach:
“For it to be regulated like nuclear energy with like an international style nuclear regulatory regime to govern it... That wasn’t that far in the rearview mirror.”
— Matt Perault [17:51]
On realistic regulation:
“If you’re five people and you’re in a garage, how are you supposed to be able to comply with the same things that are built for a thousand person compliance teams?”
— Colin McCune [04:52]
On existing law vs. new frameworks:
“When you talk to people about ways that you could try to address potential criminal activity... before they occur, that’s really scary to people.”
— Matt Perault [33:55]
On government intervention:
“I go to the government because I have this big problem. Now I get a lot of regulation. Now I have two problems.”
— Marc Andreessen (as quoted by the host) [34:54]
On U.S. tech and global competition:
“Do we want people using U.S. products across the world... or do we want people to use Chinese products? The more that we lock down obviously American products, the more the Chinese will enter those markets.”
— Colin McCune [41:30]
On the core difference between big and little tech:
“One of the pillars... is five person versus trillion dollar company: not the same thing.”
— Colin McCune [04:55]
The Little Tech Agenda is a16z’s push to ensure that AI regulation supports a vibrant, competitive ecosystem, focusing on clear, actionable rules that startups can manage. They argue for regulating harmful use without stifling innovation, and against overreactions grounded in hypothetical harms or one-size-fits-all compliance frameworks. As AI policy advances, a16z’s team is working to shape proactive, realistic frameworks that empower small builders while safeguarding public interests, and is committed to maintaining an independent, startup-centric voice in the process.