The Lawfare Podcast: Scaling Laws — A Year That Felt Like a Decade: 2025 Recap with Sen. Maroney and Neil Chilson
Air Date: January 9, 2026
Host(s): Kevin Frazier, Alan Rozenshtein
Guests: Senator James Maroney (Connecticut), Neil Chilson (Head of AI Policy, Abundance Institute)
Episode Overview
In this special year-end episode of "Scaling Laws," a podcast from The Lawfare Institute and the University of Texas School of Law, hosts Kevin Frazier and Alan Rozenshtein convene with Senator James Maroney and Neil Chilson to recap the whirlwind of 2025 in the world of AI law and policy. The discussion cuts through the hype and chaos to analyze major legislative, regulatory, and political dynamics shaping artificial intelligence in the United States, both federally and at the state level. With an eye toward vibes, trends, and the ever-shifting narrative, the panel unpacks what happened, why, and what it portends for 2026.
Key Discussion Points & Insights
1. The Federal AI Policy Landscape in 2025
- Major Paradigm Shift with the Trump Administration:
- The new Trump administration rapidly undid Biden-era executive orders, pivoting the federal approach from AI safety oversight to an innovation- and deregulation-focused agenda.
- "Very much shifted from frontier AI safety oversight to wanting to accelerate the technology and deregulate in this space." — Neil Chilson [04:55]
- Project Stargate was announced ($500B in private AI investment), signaling the administration's deregulatory ambitions.
- International Context:
- Early 2025 saw a powerful open-source AI model released by China’s DeepSeek, sparking inter-branch policy debates.
- Congressional (In)Action:
- Congressional activity remained focused on hearings, not comprehensive legislation.
- The "Take It Down Act" was passed, targeting non-consensual deep fake pornography, but broader regulation stalled.
- Debates around preemption — whether federal law should block or supersede state laws on AI — were contentious but unresolved, with a proposed moratorium on state AI legislation failing in the Senate due to last-minute negotiation breakdowns.
- "It ended up mostly looking like a spending condition...it failed. And so the states continued to move along." — Neil Chilson [11:44]
- Executive Orders & Action Plans:
- The Trump administration’s 28-page AI Action Plan outlined three pillars: deregulation/innovation, infrastructure (esp. energy), and international expansion.
- Additional actions addressed so-called ‘woke AI’ and attempted to shape the state-federal dynamic with an executive order incentivizing (but not mandating) states to follow a national framework.
- Export Controls & Chips:
- Shifted toward making it easier for China to access less sophisticated American AI chips, sparking concerns about long-term competitive advantage.
Notable Quote
"If you were going to cut across all of those, the theme is that AI is a normal technology ... it's not a separate entity. This is not a separate entity that is going to develop and compete with humanity." — Neil Chilson [25:40]
2. Why So Little Congressional Progress?
- Tacit Approval over Dysfunction:
- Chilson and Rozenshtein emphasize that congressional inaction may reflect satisfaction with, or at least acceptance of, the deregulatory status quo—not mere dysfunction.
- "Congress chooses when to act and when it doesn't. And inaction is a outcome of the democratic process as much as legislation is." — Neil Chilson [16:17]
- Historical Parallels:
- Senator Maroney compares current inaction to past federal hesitation on tech issues such as privacy, e.g. reliance on COPPA (1998).
- “Before the Take It Down Act, the last major ... federal comprehensive privacy was 1998, COPPA...” — Sen. Maroney [18:43]
- Public Pressure and Visibility:
- Major incidents involving AI harms—e.g., suicides linked to chatbots—have heightened awareness and urgency.
- “Once you could see that power right in your hand, in your own phone, that became part of the consciousness.” — Sen. Maroney [22:06]
3. The “Trump Doctrine” on AI
- Core Approach:
- AI framed as ‘normal technology’ and a tool for U.S. economic and scientific leadership.
- Emphasis on investing, innovating, deregulating, and exporting—especially ‘more American AI’.
- “I would say more American AI is the doctrine.” — Neil Chilson [27:25]
- Strong optimism about speed and scale: “more, faster.”
- Potential Risks:
- Concern about tactics—e.g., export controls, executive ‘preemption’—possibly backfiring or proving ineffective.
Notable Quote
"They believe that they are optimists about AI technology. They want more, they want it faster. ... They do not buy the idea that this is something we should be slow in pursuing." — Neil Chilson [32:20]
- Alternative View (Sen. Maroney):
- Supports innovation but advocates ‘hurry up but don’t rush’—ensuring safety, accuracy, and public trust, especially for sensitive domains (medicine, finance).
- "We want to unleash all of the potential ... but we don't want to make mistakes ... Okay to slow down a second, I feel, and test it." — Sen. Maroney [34:38]
4. State-Level AI Legislation: The Real Action
- Prolific, Diverse Activity:
- States passed a wide range of AI laws:
- Texas Responsible AI Governance Act (with a ‘sandbox’ for experimentation)
- Utah: mental health chatbot and regulatory sandbox
- Illinois: chatbot oversight
- California/NY: frontier model bills (California’s SB 53; New York’s RAISE Act)
- Montana: ‘Right to Compute’ act
- Various transparency, pricing, and use-disclosure laws (esp. for rental/housing algorithms, employment, healthcare)
- Ongoing emergence of regulatory sandboxes (Utah, Texas, soon Delaware).
- “The sandbox wants to focus on agentic AI.” — Sen. Maroney [41:27]
- Patterns Identified:
- Emphasis falls largely on mitigating AI harms (especially children’s safety, mental health, chatbots, and suicide prevention) rather than on enabling innovation.
- Economic development efforts (workforce training, educational initiatives) are happening—often outside direct legislative channels.
- “The harms get the headlines.” — Sen. Maroney [46:09]
- State legislatures also fund AI adoption (e.g., Massachusetts, New York), but this work is less visible.
- Upcoming Focus:
- Predicted surge in industry-specific regulations (healthcare, insurance, finance), especially around chatbots and transparency.
- “The common themes are going to be chatbots and then more industry specific rather than larger sweeping [regulations]." — Sen. Maroney [54:39]
5. Politics & Partisanship in AI Legislation
- Not a Classic Left-Right Split:
- Both parties have AI optimists and skeptics; individual perspective varies by context and case.
- "It's really hard to paint with a broad brush. ... There are very much optimists about parts of this technology in both parties." — Neil Chilson [50:24]
- In practice, Republican states lean toward pro-innovation, sandbox, and 'right to compute' bills; Democratic states favor transparency, child protection, and cautious regulation.
- Some high-profile Democratic and Republican leaders have staked out unexpected regulatory and deregulatory positions (e.g., Florida’s Gov. DeSantis opposing federal preemption).
- Broad consensus exists around certain harm mitigation (e.g., child safety), but approaches differ.
6. Predictions for 2026
- Federal Preemption Is Unlikely:
- Alan Rozenshtein predicts federal preemption efforts (including executive orders) will fail; patchwork state laws will persist. [53:01]
- Local Fights Over AI Infrastructure:
- Neil Chilson foresees environmental and construction battles tied to data centers and energy use dominating local AI debates. [53:34]
- State Law Thematic Trends:
- Senator Maroney predicts continued proliferation of state-level AI laws, with a focus on chatbots, transparency, and sector-specific regulation. Workforce and government-use initiatives will grow—but comprehensive “big” state-level laws may slow down. [54:16]
- Basketball Prediction:
- Both UConn men’s & women’s teams to win the national championship (Sen. Maroney, keeping it local and bold). [54:16]
Notable Quotes & Memorable Moments
- On Federal-State Tension:
“Senator Maroney, I'm curious your perspective...Congress also hasn't identified that state regulation is such a problem that it requires full-throated congressional preemption.” — Alan Rozenshtein [17:35]
- On Chatbots and Mental Health:
"You’re seeing the ... Adam Raine case where ... he died by suicide. Sewell Setzer ... died by suicide because of [encouragement] by Character.AI. And then in Connecticut, we had a murder-suicide where ChatGPT validated the man's delusions..." — Sen. James Maroney [21:27]
- On the ‘Trump Doctrine’ for AI:
"I would say more American AI is the doctrine." — Neil Chilson [27:25]
- On Legislative Attitudes:
"It's okay to slow down a second, I feel, and test it. ... We want people to feel safe so that they will use it and we'll have more, more adoption." — Sen. James Maroney [34:38]
Important Timestamps
- Federal Policy Recap & Trump Doctrine: [04:38]–[13:37]
- Congressional (In)action & Perspectives: [13:37]–[24:05]
- Trump Doctrine & Evaluation: [25:40]–[34:38]
- State-Level Legislative Overview: [37:28]–[43:44]
- Harms vs. Innovation in State Laws: [44:48]–[47:20]
- Politics & Bipartisanship: [47:20]–[52:02]
- Predictions for 2026: [52:02]–[56:51]
Episode Tone & Style
Serious, insight-driven, and slightly wonky, with flashes of humor (esp. basketball references) and camaraderie among expert guests. The conversation blends realpolitik analysis with personal optimism and situational wariness; the tone is pragmatic but forward-looking.
Useful Links
- Scaling Laws Podcast
- Lawfare Media AI Coverage
- Abundance Institute
- Connecticut State Legislature Resources (State AI Policy)
This summary distills 2025’s dense, fast-moving AI law and policy debates into a clear, accessible, and actionable account for policymakers, practitioners, and interested listeners entering 2026.
