a16z Podcast: The Little Tech Agenda for AI
Episode Date: September 8, 2025
Host: Andreessen Horowitz (aka “a16z”)
Guests: Matt Perault (Head of AI Policy, a16z), Colin McCune (Head of Government Affairs, a16z)
Episode Overview
This episode dives into the “Little Tech Agenda” for AI—a policy framework championed by a16z to ensure that AI regulation doesn't only support the giants (like Microsoft or Google) but also enables startups and small builders to innovate and compete. The conversation unpacks recent developments in AI policy, the origins and principles of the Little Tech Agenda, and the evolving roles of federal and state governments in regulating AI, including export controls, preemption, and open source. The focus is on practical, outcome-oriented policy that balances innovation with safety and competition, particularly for startups.
Key Discussion Points & Insights
1. Why the Little Tech Agenda?
[00:00–03:36]
- Startups and small tech companies have historically lacked advocacy in Washington, D.C., compared to major tech players.
- The Little Tech Agenda arose to ensure policies account for the unique constraints and capacities of startups—not just “trillion-dollar companies.”
- “There wasn't anyone who was actually advocating on behalf of what I think we call little tech, which... are the startups and entrepreneurs, the smaller builders in the space.” — Colin McCune [01:32]
- Viewing tech policy “from the perspective of little tech” reshapes conversations in rooms otherwise dominated by big-company priorities.
- The “core pillar”: five-person teams cannot comply with regulatory frameworks sized for companies with hundreds or thousands of employees. Policy must differentiate between the two.
- Lack of internal regulatory infrastructure (e.g., general counsel or policy teams) makes compliance daunting for startups.
2. The Regulatory Philosophy: Regulate Harmful Use, Not Development
[05:14–10:09]
- a16z is not anti-regulation: “I actually can't think of a single example across the portfolio in which we are arguing for zero regulation.” — Colin McCune [08:48]
- Their policy: robustly regulate harmful uses of AI (e.g., violations of consumer protection, civil rights, criminal law), not mere development.
- The principle “regulate use, not development” is often mistaken for “no regulation.”
- A regulatory environment that supports trustworthy innovation and real competition is in a16z’s—and the U.S.’s—long-term interest.
3. The Recent History and Policy Landscape for AI
[10:09–19:33]
- Policymaker interest in AI regulation exploded after high-profile Senate hearings and CEO testimonies in 2023, sparking “Terminator”-level fears.
- The so-called “safetyism” movement (often attributed to effective altruism circles) heavily influenced the debate, leading to strict and sometimes ill-conceived proposals, like:
- Requiring licenses for AI development akin to nuclear regulation—“There were ideas... to require a license to build frontier AI tools. And for it to be regulated like nuclear energy... That wasn’t that far in the rearview mirror.” — Matt Perault [17:51]
- Proposed bans on open source at the state level.
- The guests emphasize the unintended consequences of overregulation: lost innovation, reduced global competitiveness, and a strengthened hand for adversaries like China.
4. Political and Economic Motivations Behind Tech Regulation
[19:43–24:34]
- Regulatory stances are often shaped by political constituencies, fundraising, and a cultural skepticism toward private enterprise.
- Historical missed opportunities in regulating social media influence current “do over” attitudes in policymaking.
- Well-intentioned but misguided frameworks risk reinforcing monopolies rather than addressing competition concerns.
5. The “Goalpost-Moving” Problem and the Boundaries of Existing Law
[24:34–32:57]
- Policy debates have shifted from disinformation and bias to existential concerns like jobs and national security, often without clarity on what new law is needed.
- Many feared AI harms (malicious uses) are already covered by existing law.
- “If use covers it, and there hasn’t been a very incredibly fair rebuttal onto why use is not enough... what’s the answer?” — Colin McCune [26:38]
- Legislative proposals (e.g., Colorado’s approach) may burden startups with administrative processes without clear evidence they are more effective than simply enforcing anti-discrimination law against actual harmful uses.
6. Approaches to Future and “Marginal” AI Risks
[32:57–35:22]
- a16z’s view: Start with existing legal frameworks; adapt as new, concrete risks emerge.
- Preemptive regulation against hypothetical or future threats risks creating inefficiencies and stifling innovation.
- “That is how our legal system is designed... when you talk to people about ways that you could try to address... ex ante before they occur, that's really scary to people.” — Matt Perault [33:55]
7. Recent Progress and Remaining Challenges in AI Policy
[35:22–39:14]
- The federal government is now more attuned to the value of supporting competition and open source.
- National AI Action Plan: Advances federal-state role clarity, supports open source, addresses labor transition (worker retraining, market monitoring).
- Significant shift from a “safety first” mindset to also prioritizing innovation and global competitiveness.
8. Export Controls, Open Source, and Global Competition (esp. China)
[39:14–42:47]
- Export controls must be carefully crafted not to inadvertently undermine U.S. open source models in favor of Chinese technologies.
- Balancing national security and open platform power: “Do we want people using U.S. products across the world... or do we want people to use Chinese products?” — Colin McCune [41:30]
9. State vs. Federal Roles: Preemption and the Dormant Commerce Clause
[42:47–53:29]
- The need for federal preemption: a uniform national framework for AI model regulation is critical; a “50-state patchwork” is untenable for startups.
- States still have vital roles in policing local harmful use (e.g., criminal enforcement).
- The dormant commerce clause may serve as an important check: states can act, but not in ways that excessively burden national markets or out-of-state actors.
- The industry is largely, though not always, aligned on at least the need for a federal standard.
10. Towards a Proactive Policy Agenda for Startups
[53:29–56:13]
- a16z is doubling down on proactively articulating policies that support innovation—including technical training for government enforcers, infrastructure access for startups, and a “central resource” for lowering barriers to entry.
- The Little Tech Agenda aims to stay independent of “big tech” dynamics: “It’s really nonpartisan and it doesn’t take a position on Big Little. It basically says, here’s the agenda, and when you agree with us, we’ll support you, and when you disagree with us, we’ll oppose you.” — Matt Perault [54:34]
- A recognition of potential future divergence between big and little tech interests, and the challenge of remaining a distinctive voice for startups in negotiations.
Notable Quotes & Memorable Moments
- On regulatory overreach:
  “For it to be regulated like nuclear energy with like an international style nuclear regulatory regime to govern it... That wasn’t that far in the rearview mirror.”
  — Matt Perault [17:51]
- On realistic regulation:
  “If you’re five people and you’re in a garage, how are you supposed to be able to comply with the same things that are built for a thousand person compliance teams?”
  — Colin McCune [04:52]
- On existing law vs. new frameworks:
  “When you talk to people about ways that you could try to address potential criminal activity... before they occur, that’s really scary to people.”
  — Matt Perault [33:55]
- On government intervention:
  “I go to the government because I have this big problem. Now I get a lot of regulation. Now I have two problems.”
  — Marc Andreessen (as quoted by the host) [34:54]
- On U.S. tech and global competition:
  “Do we want people using U.S. products across the world... or do we want people to use Chinese products? The more that we lock down obviously American products, the more the Chinese will enter those markets.”
  — Colin McCune [41:30]
- On the core difference between big and little tech:
  “One of the pillars... is five person versus trillion dollar company: not the same thing.”
  — Colin McCune [04:55]
Timestamps for Key Segments
- [00:00–03:46] — Little Tech Agenda origin and focus: startups v. big tech
- [05:14–10:09] — Philosophy: regulate use, not development; anti-zero regulation myth
- [10:09–19:33] — Historical AI “panic,” impact of safetyism, and misguided policies
- [19:43–26:38] — Political incentives and the “do over” impulse in tech regulation
- [24:34–32:57] — Why existing law covers many AI risks; pushback against new, vague mandates
- [32:57–35:22] — Addressing future risks and the limits of preemptive regulation
- [35:22–39:14] — Recent policy wins: federal progress, open source, the AI Action Plan
- [39:14–42:47] — Export controls, open source, and U.S.-China competition
- [42:47–49:57] — Preemption, Dormant Commerce Clause, and federal/state roles
- [49:57–56:13] — Developing a startup-focused policy agenda and inter-industry alignment/divergence
Summary Takeaways
The Little Tech Agenda is a16z’s push to ensure that AI regulation supports a vibrant, competitive ecosystem, focusing on clear, actionable rules that startups can manage. They argue for regulating harmful use without stifling innovation, and against overreactions grounded in hypothetical harms or one-size-fits-all compliance frameworks. As AI policy advances, a16z’s team is working to shape proactive, realistic frameworks that empower small builders while safeguarding public interests, and is committed to maintaining an independent, startup-centric voice in the process.
