The AI Policy Podcast — May 14, 2025
Episode: The AI Diffusion Rule is Rescinded, AI Executives Testify Before Congress, & AI Adoption in the IRS
Host: Center for Strategic and International Studies (CSIS)
Guests: Gregory C. Allen (Director, Wadhwani Center for AI & Advanced Technologies), Brielle (Associate Director, Wadhwani Center for AI)
Overview
This episode provides a comprehensive analysis of recent headlines in U.S. AI policy, focusing on the Trump administration's repeal of the AI Diffusion Rule, reactions from AI industry executives during Congressional testimony, ambitions for AI adoption in the IRS, contentious debates about copyright in generative AI, and even a surprising discussion on AI in the new papacy. The conversation is frank, insightful, and at times peppered with behind-the-scenes details on policy-making and its wide-ranging impact.
1. The AI Diffusion Rule: Rescinded and What Comes Next
[01:00–19:51]
Key Points
- Backdrop: The Trump administration announced its intention (May 7, 2025) to rescind the Biden-era AI Diffusion Rule, a complex regime controlling AI chip exports globally.
- Policy Options Discussed: Four pathways were identified — keep, kill, change, or delay the rule. The administration ultimately chose to delay (punt) and replace it.
- Notable Quote:
  “The Biden AI rule is overly complex, overly bureaucratic, and would stymie American innovation. We will be replacing it with a much simpler rule that unleashes American innovation and ensures American AI dominance.”
  — BIS statement, paraphrased by Greg [03:05]
- Long-Term Policy Trajectory: Despite the rollback, the Trump administration signaled intentions for stronger export controls, not a wholesale retreat. Evidence:
  - Early executive orders for tighter controls
  - A 50% budget increase for the Bureau of Industry and Security (BIS)
- Market Reactions: Nvidia stock rose 3.1% and the semiconductor index rose 1.7% after the news broke. Greg cautioned against overinterpreting short-term market movements [05:52].
David Sacks' (White House Policy Advisor) Views — [06:32–16:28]
- Legality: Sacks claimed the diffusion rule was an “unprecedented and arguably unlawful expansion” of executive authority, but Greg questions whether the Trump administration would ever truly limit its own legal authority [07:30].
- Perception of Bureaucracy:
  - “It effectively turned Washington into a central planner for the global AI industry.” — David Sacks [09:11]
  - The rule’s tiered numerical chip caps and global rationing drew criticism for undermining market principles.
- Alienation of Allies:
  - The tiered export controls “strained relationships with key allies by arbitrarily dividing countries into compliance tiers, labeling many friendly nations as second class partners.” — David Sacks [12:02]
  - Greg: “Perception ... is important even if it does not align with reality.”
- Due Process: The rule was issued with no meaningful public comment, leading to industry backlash and worries about balancing operational speed with regulatory transparency [13:37].
- Realistic Prediction: The administration will likely replace the rule with significant modifications, possibly dropping the public “tiers” system to repair diplomatic relations while pursuing a “stronger but simpler” approach [17:05–19:51].
2. AI Executives Testify Before Congress
[19:51–32:49]
Key Points
- Who Testified: CEOs and top executives from OpenAI (Sam Altman), AMD, CoreWeave, and Microsoft (Brad Smith).
- Main Topic: Rapid industry reaction to news of the diffusion rule’s rescission and broader AI export controls.
- Industry Positioning: Executives avoided direct opposition to administration policy, opting for pragmatic, diplomatic messaging.
Notable Quotes
- Sam Altman (OpenAI): “I was glad to see [the diffusion rule] rescinded. I agree there will need to be some constraints, but I think if the sort of mental model is winning diffusion instead of stopping diffusion, that directionally seems right, but that doesn't mean there's no guardrails.” [21:18]
- Brad Smith (Microsoft): “The number one factor that will define whether the United States or China wins this race is whose technology is most broadly adopted in the rest of the world. ...The lesson from Huawei and 5G is whoever gets there first will be difficult to supplant. We need to export with the right kind of controls. We need to win the trust of the rest of the world.” [22:50]
- AI Governance:
  - Companies reject the “European approach” (stricter, top-down AI legislation), favoring a lighter, more flexible U.S. regulatory framework.
  - “I think [the European approach] would be disastrous. I don’t want to live in Europe either.” — Sam Altman, echoing Senate Chairman Ted Cruz [26:23]
  - Brad Smith warns that U.S. inaction will simply cede global regulatory power to the EU (“the Brussels effect”) [27:40].
- State vs. Federal Regulation:
  - Executives expressed frustration at the prospect of 50 separate state regimes.
  - “It is very difficult to imagine us figuring out how to comply with 50 different sets of regulations. One federal framework that is light touch...seems important and fine.” — Sam Altman [29:40]
- Energy and AI: Both Sam Altman and CoreWeave’s CEO stressed that the ultimate limiting factor on future AI capability will be energy availability, with national competitiveness depending on infrastructure investment [31:57].
3. Legislative Outlook: Preemption and State AI Regulation
[32:55–35:13]
Key Points
- A new House memo proposes banning state and local regulation of AI for 10 years after a federal law’s enactment, illustrating the tension over preemption.
- Such sweeping pre-emption seems unlikely to pass, but it underscores strong industry and some Congressional resistance to fragmented regulation.
- Most likely future laws will target politically salient AI risks (e.g., child sexual abuse material, deepfakes), while clarifying liability and safety expectations at a moderate federal level.
4. AI and the IRS (Internal Revenue Service)
[35:13–44:41]
Key Points
- Treasury Secretary Scott Bessent testified about plans to use AI to offset major IRS staff cuts (the agency’s budget was slashed by $2.5 billion).
- Vision: Use new AI tools to increase audit and collection productivity and limit operational disruption despite layoffs.
Notable Quote
“I believe through smarter IT, through this AI boom that we can use to enhan...collections would continue to be very robust as they were this year.” — Scott Bessent [35:58]
- Cautionary Tale (Netherlands):
- Greg cited the Dutch childcare benefits scandal, where flawed AI fraud detection led to hundreds of false positives, family separations, and toppling of the Dutch government [37:48].
- Key warning: Adopting AI is not a “slam dunk”—it requires careful process redesign, strong safeguards, and an understanding of government systems' complexity.
Takeaway
AI in tax enforcement has transformative potential but poses serious risks if used recklessly:
"It’s not that stapling on AI makes everything 10x more productive...You really have to think through what is an appropriate use case and the right safeguards." — Greg [41:08]
5. Copyright Office and Generative AI Training
[44:41–49:06]
Key Points
- Draft Report: U.S. Copyright Office's 113-page report tackled whether using copyrighted material to train AI constitutes “fair use.”
- The report concluded it is not always (but not never) fair use—a nuanced stance leaving room for legal debate and ongoing lawsuits (e.g., NYT v. OpenAI, Dow Jones v. Perplexity).
- Political Firestorm:
- The day after report release, Trump terminated Register of Copyrights Shira Perlmutter.
- Representative Joe Morelle called the firing “a brazen, unprecedented power grab.”
- Speculation:
- The Trump administration is expected to move toward more AI-friendly copyright interpretations benefiting industry.
Notable Quote
“It is surely no coincidence he acted less than a day after she refused to rubber stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models.” — Rep. Joe Morelle [48:25]
6. AI and the Papacy
[49:06–52:16]
Key Points
- The new Pope, Leo XIV, explicitly referenced AI as part of his guiding social mission.
- He chose his papal name in homage to Leo XIII, who addressed the social disruptions of the first industrial revolution, and sees modern AI as ushering in a parallel era of human challenge.
- He tied his pontificate to the ethical and spiritual challenges posed by AI, indicating its global, cross-disciplinary importance.
Notable Quote
“The Church offers...her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labor.” — Pope Leo XIV, as quoted by Greg [49:50]
Timestamps for Key Segments
- AI Diffusion Rule Overview & BIS Statement: [01:00–05:44]
- Industry & Markets Respond: [05:44–06:43]
- David Sacks’ Critique & Legal/Policy Nuance: [06:32–16:28]
- What’s Next for Export Controls: [16:28–19:51]
- Congressional Testimony – Industry Executives: [19:51–32:49]
- Preemption and State Regulation: [32:55–35:13]
- IRS & AI Productivity, Dutch Cautionary Tale: [35:13–44:41]
- Copyright, Fair Use, and Political Upheaval: [44:41–49:06]
- The Pope, AI, and Ethics: [49:06–52:16]
Memorable Moments & Quotes
- “AI export controls are moving from a public, tiered system to something simpler—possibly even secret, but definitely with tighter security checks.” — Greg [19:51]
- “If we’re playing to win worldwide adoption, we can’t undermine allies’ trust.” — Brad Smith [22:50]
- “AI’s productivity promises are real, but one failure can have catastrophic political effects—as the Dutch learned.” — Greg [37:48]
- “The new pope chose his name because he sees AI as the defining issue of our era.” — Greg [49:50]
Final Thoughts
This episode pulls back the curtain on a whirlwind week in AI policy, highlighting the tension between innovation, geopolitics, diplomatic perception, and regulatory clarity. It shows how decisions in Washington ripple through boardrooms, markets, courtrooms, and even the Vatican—demonstrating that AI policy is now a matter of global consequence.
