The Lawfare Podcast: AI Policy Under Technological Uncertainty
Hosted by The Lawfare Institute
Episode Release Date: July 26, 2025
Introduction
In the July 26, 2025 episode of The Lawfare Podcast, Alan Rozenshtein, Associate Professor at the University of Minnesota Law School and Senior Editor at Lawfare, co-hosts a deep dive into artificial intelligence (AI) policymaking amid technological uncertainty. Joined by Matt Perault, Director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, the episode features a wide-ranging discussion with Alexander “Amac” Macgillivray. Macgillivray, known for his expertise in First Amendment issues, his tenure as Principal Deputy Chief Technology Officer of the United States under the Biden administration, and his service as General Counsel at Twitter, offers valuable insights into navigating the complex landscape of AI regulation.
Background and Context
[01:44] Mary Ford introduces the episode, highlighting the recent release of the White House's much-anticipated Artificial Intelligence Action Plan. Structured around three core pillars—innovation, infrastructure, and security—the plan aims to position the United States at the forefront of AI development while countering China's increasing influence in the AI ecosystem. For this archived episode, Mary selects a conversation from July 23, 2024, featuring Macgillivray discussing AI policymaking during a period of rapid technological advancement.
[02:43] Alan Rozenshtein sets the stage by introducing Macgillivray, highlighting his roles in multiple Democratic administrations and his extensive experience in the private tech sector, including positions at Google and Twitter.
Navigating AI Policymaking Under Uncertainty
Understanding Regulatory Frameworks
[03:37] Unnamed Speaker:
“... guidance you would give to people who are stepping into that office about how to do that job effectively.”
[03:59] Alexander Macgillivray:
“I do think we are going to have a lot of trouble with some of the types of regulation that certainly many people have been calling for.”
Macgillivray emphasizes the challenges of crafting effective AI policy in an environment of rapid technological change and inherent uncertainty. He underscores the difficulty of balancing regulation that protects individual rights with the need to foster innovation.
Assumptions in AI Development
[12:13] Unnamed Speaker:
“... what was the thing that motivated you to want to write this.”
[12:34] Alexander Macgillivray:
“There are a lot of assumptions that people bring into the conversation... For example, there's this assumption that the current line of AI development is sort of going up and to the right... We were both basically guessing about the future.”
Macgillivray points out that many AI policy discussions are built on unexamined assumptions about the trajectory of AI development. He advocates for greater transparency about these assumptions to enable more grounded and productive conversations.
Regulation Amidst Uncertainty: Substantive vs. Meta Regulation
Substantive Regulation Challenges
[14:14] Alan Rozenshtein:
“... one can imagine a spectrum of views on the question of whether the First Amendment applies to AI...”
[14:54] Alexander Macgillivray:
“There is no regulation within, just to pick on one, is just false with respect to AI.”
Macgillivray argues that existing regulatory frameworks, while not exhaustive, provide a foundation that can be built upon rather than starting from scratch. He challenges the notion that regulatory uncertainty requires avoiding regulation altogether, suggesting instead that policymakers enhance and adapt current laws to address AI-specific issues.
Meta Regulation as a Strategic Approach
[17:39] Alexander Macgillivray:
“... think through what they would want to do if the scenario were A and think through what they would want to do if the scenario were B and then try to design for how we think through regulating in either of those circumstances.”
Macgillivray advocates a dual approach: regulate what is currently known, and implement meta-regulatory strategies such as transparency, information gathering, and capacity building. This ensures readiness to adapt regulations as AI technologies evolve.
[20:45] Unnamed Speaker:
“You have this line in the piece that I think captures this really succinctly...”
[21:10] Alexander Macgillivray:
“I still think that the blueprint for an AI Bill of Rights gives a great rundown of this... We proposed principles like safe and effective systems, protection from algorithmic discrimination, data privacy, notice and explanation with AI systems, and human alternatives consideration and fallback.”
Macgillivray highlights the Blueprint for an AI Bill of Rights as a robust framework that addresses both current and future AI-related challenges. Its principles aim to safeguard individual rights while promoting the responsible development and deployment of AI technologies.
Regulatory Impact on Industry Dynamics
Competitive Effects on Companies
[22:05] Unnamed Speaker:
“What do you think about potential competitive effects across the industry overall? Is this going to strengthen the large companies relative to the small ones?”
[23:14] Alexander Macgillivray:
“If the AI leaders are right about cost, then AI is a place where competition and competition law needs to be extremely active because there's going to be this natural propensity toward having only a very few players and having most of those players be extremely well-financed companies...”
Macgillivray expresses concern that stringent AI regulations may inadvertently favor large, well-funded corporations over smaller entities. He suggests that regulatory frameworks be carefully designed to avoid stifling innovation and to preserve a competitive market landscape.
First Amendment Implications in AI Regulation
Navigating Speech and Regulation
[32:44] Unnamed Speaker:
“How you see First Amendment jurisprudence mapping on to the AI regulatory landscape in other areas of tech policy...”
[33:27] Alexander Macgillivray:
“The First Amendment requires that courts have a tough time... whenever we've tried to do that in the past, it hasn't worked out very well. So misinformation as a classic example... The First Amendment is going to be a filter through which everything else passes.”
Macgillivray delves into the complexities of applying First Amendment principles to AI regulation. He notes that regulating areas like misinformation poses significant legal challenges because the First Amendment protects a wide range of speech, complicating governmental efforts to impose restrictions.
Judicial Challenges and Agency Authority
[39:08] Alan Rozenshtein:
“... the Loper Bright case, which overruled Chevron...”
[40:45] Alexander Macgillivray:
“... the court is teaching us... not being able to know what standard a court is going to apply makes it extremely, extremely difficult.”
Addressing recent judicial shifts, Macgillivray discusses how cases like Loper Bright, which overruled the Chevron doctrine, constrain agencies' ability to interpret and enforce AI regulations. He underscores the resulting uncertainty for agencies, which makes it difficult to implement effective policies without clear judicial standards.
Policy Recommendations and Future Directions
Effective Policy Implementation
[30:04] Alexander Macgillivray:
“A mix of regulatory principles and driving force with a more agile agency being able to push on particular levers... is a fairly good way of thinking about how we could do this in AI.”
Macgillivray advocates a balanced approach that combines established regulatory principles with agile, responsive agencies capable of adjusting policies as AI technologies evolve. He draws a parallel to fuel efficiency standards as a potential model for AI regulation: setting clear targets while allowing industry flexibility in how to achieve them.
Federal Coordination and International Collaboration
[31:42] Alexander Macgillivray:
“We really do need some sort of federal bringing together... we need that.”
He emphasizes the necessity of federal coordination in AI regulation to ensure consistency and effectiveness, as state-level initiatives alone would be insufficient. Additionally, Macgillivray highlights the importance of international collaboration to harmonize AI policies globally, ensuring that the U.S. remains competitive and aligned with international standards.
Reflections on Private Sector Experience
Adapting to Evolving Platforms
[43:08] Alexander Macgillivray:
“…need different policies as platforms like Twitter evolve to have a bigger impact on people’s lives...”
Drawing on his tenure at Twitter, Macgillivray reflects on the necessity of continuously adapting policies to address emerging challenges on dynamic platforms. He acknowledges that no static set of regulations can comprehensively manage the ever-changing landscape of social media and AI technologies.
Conclusion
[45:11] Alan Rozenshtein:
“Amac, thanks so much for the great post for Lawfare and for talking with us today and for all the great thinking you do on this.”
In closing, the podcast underscores the critical need for thoughtful, adaptive AI policies that account for technological uncertainty while safeguarding individual rights and fostering innovation. Macgillivray’s insights highlight the intricate balance policymakers must maintain to navigate the evolving AI landscape effectively.
[45:25] Alexander Macgillivray:
“Thank you so much Matt and Alan.”
The episode wraps up with acknowledgments and a reminder of the ongoing conversation surrounding AI policy and regulation, inviting listeners to engage with future discussions as AI continues to shape the national and global landscape.
Key Takeaways
- Regulatory Frameworks Must Adapt: Existing laws provide a foundation, but policymakers need to enhance and tailor regulations to address AI-specific challenges effectively.
- Balance Between Innovation and Protection: It's essential to safeguard individual rights without stifling technological advancement and innovation.
- Meta-Regulation as a Strategic Tool: Implementing strategies like transparency and capacity building can prepare regulators to adapt to future AI developments.
- First Amendment Challenges: Protecting free speech complicates efforts to regulate AI, especially concerning misinformation and content moderation.
- Agency Limitations: Judicial decisions like the Loper Bright case limit agencies' flexibility, necessitating clearer legislative mandates and federal coordination.
- Competitive Dynamics: Regulations should be designed to prevent disproportionately favoring large corporations over smaller entities, ensuring a balanced and competitive market.
- Continuous Policy Evolution: As AI technologies and platforms evolve, so must the policies and regulations governing them to remain effective and relevant.
Notable Quotes:
- Alexander Macgillivray [03:10]: "I think we are going to have a lot of trouble with some of the types of regulation that certainly many people have been calling for."
- Alexander Macgillivray [12:34]: "People weren't being as clear about those assumptions as they might and in particular weren't being as clear about the lack of understanding of those assumptions."
- Alexander Macgillivray [17:39]: "I got very weirdly lucky during my undergrad and got to design my own major... reasoning under uncertainty."
- Alexander Macgillivray [22:05]: "We proposed principles like safe and effective systems, protection from algorithmic discrimination, data privacy, notice and explanation with AI systems, and human alternatives consideration and fallback."
- Alexander Macgillivray [33:27]: "The First Amendment is going to be a filter through which everything else passes."
- Alexander Macgillivray [40:45]: "If we make it so that there's just no way for them to know whether a particular thing that they're doing is legal or not, that's really tough."
This summary encapsulates the episode's exploration of AI policy in the face of technological uncertainty, offering listeners a nuanced view of the challenges and proposed strategies for effective AI governance.
