The AI Policy Podcast
Host: Matt Mann, Center for Strategic and International Studies (CSIS)
Guest: Gregory C. Allen, Director, Wadhwani AI Center, CSIS
Episode: Trump’s Draft AI Preemption Order, EU AI Act Delays, and Anthropic's Cyberattack Report
Date: November 21, 2025
Episode Overview
This episode offers a comprehensive discussion on three major current events in AI policy:
- A leaked draft executive order by the Trump administration seeking to preempt state AI laws,
- Proposed changes and delays to the EU AI Act and related digital regulations,
- Anthropic’s report of a Chinese state-sponsored cyberattack using its AI model, Claude.
Greg Allen, recently returned from groundwork in India ahead of the 2026 India AI Impact Summit, provides analysis of the implications, political context, and controversies surrounding these developments.
1. India’s Emerging Role in Global AI Policy
(00:41–07:37)
Key Insights
- India’s Importance: India is central to the upcoming India AI Impact Summit (Feb 2026), and is positioning itself as a leader for the Global South in shaping AI’s future.
- AI Usage Trends in India:
- India is ChatGPT's second-largest user base after the US.
- Unique user patterns: Preference for WhatsApp over email, strong prevalence of voice-based over typed interactions—driven by both cultural and literacy factors.
- Industry and Talent: Major US AI companies (OpenAI, Google, Microsoft, Anthropic) are expanding their presence in India, reflecting both market and talent strategies.
- Development Focus: India aims to showcase inclusive AI tailored to Global South needs, emphasizing applications in healthcare (e.g., tuberculosis detection) and agriculture.
Notable Quote
- “India is explicitly positioning itself as a leader of the Global South... there is a sense that AI is a rich countries kind of industry and India wants to say whatever this economic revolution is, it needs to be inclusive of the Global South.” — Greg Allen (05:19)
2. Trump Administration’s Draft Executive Order to Preempt State AI Laws
(07:37–17:46)
Key Discussion Points
- Draft EO Details:
- Leaked to and reported by The Verge; still in draft and deliberative/pre-decisional status.
- Seeks to override “patchwork” state AI laws, favoring a uniform national (federal) approach.
- Policy Objective:
- "It is the policy of the United States to sustain and enhance America's global AI dominance through a minimally burdensome uniform national policy framework for AI." (08:32)
- Deadline Discrepancy:
- Most EO actions carry strict deadlines (e.g., 90 days), but the section on creating a federal AI framework has none, suggesting less urgency to supply a federal replacement than to block state action.
- “Perhaps there isn't as much of a rush within the administration to provide that federal standard.” — Matt Mann (10:08)
- Litigation Task Force:
- Directs the DOJ to sue states that enact AI laws deemed “unduly burdensome” or that unconstitutionally regulate interstate commerce.
- Threatens to withhold federal funding from states that pass “burdensome” AI laws.
- Political Context:
- Coincides with legislative maneuvers (e.g., potential NDAA amendments) and ongoing tension between state and federal regulatory powers.
- Example: California's SB 53 AI safety law is specifically referenced as a preemption target.
Notable Quotes
- “A cynic might say… your official policy is to have a minimally burdensome federal framework for AI regulation. That's your stated policy, but your real policy is to block state regulation and to block federal regulation.” — Greg Allen (11:34)
- “Trump has no power to issue a royal edict canceling state laws.” — California State Senator Scott Wiener (15:59; cited by Greg Allen)
Timeline Segment
- Admin leak & political calculus: (08:04–17:46)
3. EU AI Act: Delays, Self-Criticism, and Political Debate
(17:46–37:32)
Key Discussion Points
- Digital Omnibus Proposals:
- A two-pronged legislative effort to delay and streamline rules in the EU AI Act and the GDPR.
- Motivated by competitiveness concerns and regulatory “buyer’s remorse” (a sense that the EU AI Act may be too burdensome).
- Regulatory Framework:
- Unacceptable risk: outright prohibited (e.g., government biometric surveillance).
- High risk: stringent requirements (e.g., medical, safety-critical uses), now targeted for delay.
- Low risk: minimal regulation.
- General Purpose AI (GPAI):
- A category added in reaction to systems like ChatGPT, amid uncertainty over how to regulate broadly capable models.
- Draghi Report & Business Response:
- High-level EU advisors and industry giants (Siemens, ASML, Mistral, Philips) argue the Act is stifling EU innovation relative to the US and China, and call for delay or revision.
- Complications of Delay:
- Delays embedded in a broad legislative package introduce further uncertainty; businesses are left unclear about when, and which, rules will apply.
- “Businesses won't know when the core of the AI act will truly bite until the EU co-legislators strike a deal on the entire AI omnibus.” — Luca Bertuzzi (35:30)
- Internal EU Controversy:
- Some members of the European Parliament, including Green politicians, fear the reforms capitulate to US tech giants at the expense of citizen protections.
Notable Quotes
- “GDPR has raised the cost of data by about 20% for EU firms compared with US peers... Broader reform towards simpler harmonized rules is still vague.” — Mario Draghi, former Italian PM, via Greg Allen (27:28)
- “The Commission should focus on real simplification and streamlining of definitions rather than bending their knee to the US Administration.” — Alexandra Geese, Greens/EFA (32:51)
Timeline Segment
- EU AI Act amendments & debate: (17:46–37:32)
4. Anthropic’s Cyberattack Report: Chinese State Espionage Uses Claude for AI-Led Hacking
(37:33–53:46)
Key Facts
- Incident Description:
- Chinese state-sponsored hackers used Anthropic's Claude to run a cyber-espionage campaign in which AI performed an estimated 80–90% of the work, targeting roughly 30 large companies and government agencies; a small number of intrusions succeeded.
- Significance:
- The first publicly reported real-world case of advanced AI being used to agentically execute cyberattacks, not merely to advise on them.
- Security mechanisms failed to preempt the attack.
- Strategic Irony:
- The Chinese attackers used a US model rather than domestic alternatives, a tacit acknowledgment of the Western technological edge.
Regulatory & Policy Repercussions
- Policy Justification:
- Fears of AI-aided cyberattacks were foundational to the launch of US/EU/UK “AI Safety Institutes.” This attack brings that threat from theory to reality.
- Community & Political Skepticism:
- Some cybersecurity experts are skeptical about the completeness of Anthropic's disclosure and about whether AI automation represents a major leap, given China's low labor costs for skilled engineers.
- AI Policy Debate:
- The incident is cited both as grounds for stronger AI and cybersecurity regulation and, conversely, dismissed by some as fear-mongering intended to drive regulatory capture.
Notable Quotes
- “Guys, wake the F up. This is going to destroy us sooner than we think if we don’t make AI regulation a national priority tomorrow.” — Sen. Chris Murphy (CT) (46:55)
- “You're being played by people who want regulatory capture. They are scaring everyone with dubious studies so that open source models are regulated out of existence.” — Yann LeCun, former Chief AI Scientist, Meta (47:16)
- Greg’s Personal Insight:
- “Dario [Amodei, Anthropic CEO] is sincere in his concerns about AI safety. It is not a strategy for regulatory capture. It is something that he believed and argued for long before he was in a position to benefit from anything remotely approaching regulatory capture.” (51:06)
Timeline Segment
- Anthropic report and response: (37:33–53:46)
5. Memorable Moments & Quotes with Timestamps
- On India, ChatGPT, and the Global South:
“For in India, which has basically abandoned email as a work practice. Everything takes place on WhatsApp. Government services are sometimes delivered via WhatsApp.” — Greg Allen (01:51)
- On Patchwork Regulation:
“They are against this patchwork of state regulation. That is a phrase that we've heard coming out of many Republican legislators on the Hill.” — Greg Allen (08:32)
- On Preemption Executive Order:
“The meat of this executive order is creating an AI litigation task force...they want the Department of Justice to sue state governments who try and...pass AI regulatory laws...” — Greg Allen (12:03)
- On Regulatory Burden in the EU:
“The administrative tasks companies must undertake to comply with EU laws depend on their activities...each law has different deadlines, reporting procedures and authorities...imposing registration requirements would constitute a disproportionate compliance burden.” — Parliamentary Research Service, read by Greg Allen (31:00)
- On AI-powered cyberattacks:
“The attackers used AI's agentic capabilities to an unprecedented degree using AI not just as an advisor, but to execute the cyber attacks themselves.” — Greg Allen (38:19)
6. Conclusion and Forward Look
This episode highlighted the increasingly global, contentious, and fast-moving terrain of AI policy, regulation, and security. The convergence of contested federal preemption efforts, regulatory hesitation, and real-world cyber risks demonstrates how entwined technology, law, politics, and geopolitics have become in the AI era.
Stay tuned for more ground reports, legislative analysis, and regulatory deep-dives in upcoming AI Policy Podcast episodes.
Key Timestamps (for quick reference)
- India AI Impact Summit & Indian context: 00:41–07:37
- Trump AI Preemption EO: 07:37–17:46
- EU AI Act Delays & debate: 17:46–37:32
- Anthropic/Claude cyberattack: 37:33–53:46
This summary captures the original episode’s thoughtful tone and analytical depth, offering clear entry points for further research or focused listening.
