The AI Policy Podcast (CSIS)
Episode: Senate Strikes AI Law Moratorium, Courts Rule on Copyright Cases, and Congress Talks AGI
Date: July 2, 2025
Host: Center for Strategic and International Studies (Wadhwani Center for AI and Advanced Technologies)
Special Guest/Co-host: Gregory C. Allen
Episode Overview
This episode offers insider analysis of pressing developments in U.S. AI policy from both legislative and judicial branches, with particular focus on:
- The Senate’s removal of a federal moratorium on state AI laws
- Major dual court rulings on AI copyright use (Meta & Anthropic)
- Congressional hearings on U.S.-China AI competition and emergent AGI policy
- Updates on the Chinese AI company DeepSeek, U.S. export controls, and national security
Greg Allen (CSIS) provides expert, up-to-the-minute commentary on these fast-moving issues in AI law, governance, and geopolitics.
1. Senate Strips State AI Law Moratorium
[00:09 – 07:35]
Background:
The Senate voted 99–1 to remove a controversial moratorium that would have blocked certain categories of state-level AI regulation from the pending reconciliation bill (“the big beautiful bill”).
Key Points
- Original Provision's Intent:
  - "This idea was to basically block not all, but certain types of state level AI regulation, specifically…preemptory actions that AI companies have to do before they release their AI products." — Greg Allen [01:20]
  - It did not block criminal penalties for AI misuse, but it would have prevented new state mandates targeting AI developers.
- Political Dynamics and Fallout:
  - The moratorium was initially supported in the House, though some members later retracted their support after realizing its potential implications.
  - Bipartisan opposition: sparked especially by Senator Marsha Blackburn (Tennessee, music industry concerns); a bipartisan letter from 40 state attorneys general opposed the measure.
  - Even initial supporters "didn't want to torpedo the big beautiful bill over this issue for which there is not a majority of support." [03:30]
- Outcome & Implications:
  - "States are not going to be blocked in going ahead with their regulation. And there is a flurry of activity at the state level… more than a thousand bills have been introduced." — [05:52]
  - AI companies warn that a "patchwork" of state laws will hinder innovation, but they have not opposed AI safety laws, provided those laws are federal.
- Notable Quote:
  - "Blocking state regulation without doing anything federally is like taking a strong pain reliever that makes you pass out and forget to treat the real injury. The patchwork is the incentive." — Miles Brundage (via Greg Allen) [05:22]
- Summary: A federal framework may be more likely after this defeat, but only if it is paired with comprehensive national regulation.
2. Copyright Court Rulings: Meta vs. Anthropic
[07:35 – 20:52]
Meta & Anthropic Cases
- Who Was Sued and Why?
  - Both Meta and Anthropic were sued in separate lawsuits for using "millions of pirated books" as AI training data.
  - Plaintiffs: authors and writer groups (a class action against Meta; a smaller author group against Anthropic).
  - Both companies prioritized speed over rights clearance, admitting in court that they used illegally sourced books because negotiating millions of licenses was logistically prohibitive.
  - "They just stole it. And again, like they're basically admitting that." — Greg Allen [10:07]
- Anthropic's Approach:
  - Initially used pirated books, then switched to bulk-scanning millions of print copies after purchasing them. "They literally hired a former executive in charge of the Google Books program." [10:28]
The Rulings: Contradictory Outcomes
- Meta Case (Judge Vince Chhabria):
  - The judge's reasoning seemed poised against Meta, but Meta won because the plaintiffs "made the wrong arguments and failed to develop a record" on the crucial legal factor: "the effect of the use upon the potential market." [14:54]
  - Notable statement: "This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful. It stands only for the proposition that these plaintiffs made the wrong arguments…" — Greg Allen reading Judge Chhabria [14:54]
- Anthropic Case (Judge Alsup):
  - The judge leaned in favor of transformative use, comparing AI model training to using books to teach children.
  - Allen notes the two rulings "very explicitly disagree with each other." [12:01]
- Legal Complexity:
  - Judge Chhabria criticized Judge Alsup's analogy, arguing that AI can rapidly produce works that directly compete with the originals, undermining human incentives to create.
  - "When it comes to market effects, using books to teach children to write is not remotely like using books to create a product that…" — Greg Allen paraphrasing Chhabria [17:10]
  - The future direction awaits higher-court guidance or Congressional clarification.
3. Congressional Attention: House AI-China Hearing & AGI Policy
[20:52 – 27:44]
House Select Committee on China: “Algorithms and Authoritarians” Hearing
[21:17]
- Theme:
  - Centered on whether the U.S. or "authoritarian regimes like the Chinese Communist Party" will lead the future of AI.
  - AI framed as the new great-power arms race, now with explicit discussion of artificial general intelligence (AGI) and artificial superintelligence (ASI).
- Notable Quotes:
  - "Will the future of artificial intelligence be led by free nations or by authoritarian regimes like the Chinese Communist Party?" — Chairman Moolenaar (R-MI) [21:20]
  - "The United States and China are competing in a new strategic arms race, the race for artificial superintelligence." — Rep. Torres (D-NY) [23:35]
- Shift in Congressional Attitude:
  - "Now you have members of Congress…openly talking about AGI and ASI as a central concern." [23:00]
  - "That was just not something that you could say in public when I was in DoD and reflects just how much this debate has changed." — Greg Allen [24:35]
Focus on Safety vs. Competition
- Industry Testimony:
  - Jack Clark (Anthropic): "America can win the race to build powerful AI and winning this race is a necessary but not sufficient achievement. We have to get safety right." [24:50]
  - Emphasis on balancing competition with China against safety and value alignment.
- Emerging Legislation:
  - Rep. Krishnamoorthi (D-IL) announced a forthcoming AGI Safety Act: it would require AGI to comply with human law and be aligned with human values. [25:47]
  - Adversarial AI Act: would bar the U.S. government from using Chinese or Russian AI models (e.g., DeepSeek) without a specific exemption. [26:30]
4. China’s DeepSeek: Technical & Security Developments
[27:44 – 33:12]
U.S. Export Controls & DeepSeek’s Struggles
[28:10]
- DeepSeek’s next-gen model R2 has been repeatedly delayed, mainly due to U.S. export controls on Nvidia chips (H20s are now banned).
- Even with "genuinely innovative" techniques, DeepSeek "still want[s] Nvidia's weaker chips…they're running into a lot of challenges with compute limitations." [30:01]
- Larger implication: China faces a significant hardware and ecosystem deficit, and U.S. export controls are having their intended impact, at least in the short-to-medium term.
- Notable Industry Quote:
  - "DeepSeek's models are so completely optimized for Nvidia hardware and software that running them on Chinese chips will make them less efficient." — Jin Yang via Greg Allen [30:52]
National Security & Surveillance Concerns
- DeepSeek is heavily integrated with the broader Chinese surveillance apparatus. "DeepSeek would have no ability to say no to the Chinese government when they demand that sort of information." [31:35]
- Reports (Reuters) point to "active attempts by DeepSeek to smuggle chips" and collaboration with military and intelligence agencies.
- Integration is government-wide: "quarterly reports" demand agency progress on DeepSeek adoption, spilling into the military and intelligence world by default. [32:52]
Notable Quotes & Key Timestamps
- On the State Law Moratorium:
  "Victory lap taken. Yeah…it did not survive contact with opposition among Senate Republicans." — Greg Allen [01:05–03:00]
- On Copyright Litigation:
  "They just stole it. And again, like they're basically admitting that." — Greg Allen [10:07]
- On Judicial Reasoning:
  "This ruling does not stand for the proposition that Meta's use…is lawful. It stands only for the proposition that these plaintiffs made the wrong arguments…" — Greg Allen, quoting Judge Chhabria [14:54]
- On AI Geopolitics:
  "…the United States and China are competing in a new strategic arms race, the race for artificial superintelligence." — Rep. Torres (D-NY) [23:35]
- On U.S. Export Controls:
  "DeepSeek's models are so completely optimized for Nvidia hardware and software that running them on Chinese chips will make them less efficient." — Jin Yang, via Greg Allen [30:52]
Episode Timeline (Timestamps)
- 00:09 Senate strikes moratorium on state AI law
- 01:05–07:35 Deep dive on the moratorium’s history, politics, and consequences
- 07:35–20:52 Details and implications of Meta/Anthropic AI copyright cases
- 20:52–27:44 House hearing: U.S.-China AI rivalry, AGI/ASI, and new legislation
- 27:44–33:12 DeepSeek: U.S. export controls and China’s AI security challenge
Tone & Style
Greg Allen’s analysis is fast-paced, plain-spoken, and rich in insider knowledge, mixing dry wit (“victory lap taken”) with deep policy insight. The hosts maintain a balanced bipartisan frame, grounding speculation in both legal and technical detail.
Takeaways
- The 2025 Senate decision clears the way for a flurry of diverging state AI laws; federal regulation may follow only if a compromise is reached.
- Courts remain split on pivotal copyright and fair use questions for AI, with higher courts or Congress likely needing to intervene.
- U.S. policymakers now openly discuss AGI and ASI risks and arms race dynamics with China, accompanied by a surge of proposed federal legislation.
- Export controls are concretely throttling Chinese AI giants like DeepSeek, prompting both technical innovation and alleged circumvention.
- Integration of Chinese AI with state surveillance and defense structures is increasingly a national security flashpoint in Washington.
