Podcast Summary: The AI Policy Podcast
Episode: Unpacking the EU AI Act Code of Practice with Marietje Schaake
Host: Gregory C. Allen (CSIS)
Guest: Marietje Schaake (Stanford Cyber Policy Center and Institute for Human-Centered AI, former MEP)
Released: September 5, 2025
Main Theme & Purpose
This episode offers a deep dive into the EU AI Act’s Code of Practice—especially its safety and security provisions for general purpose AI models—with expert insights from Marietje Schaake, who co-chaired the drafting of the Code’s Safety and Security section. The conversation illuminates the historical, political, and enforcement context of the EU’s approach and its global implications.
Guest Background and Motivation
- Early Interest in Tech and Policy:
Schaake explains her academic curiosity about change, technology, and society. She shares how studying at the University of Amsterdam and participating in the new media minor exposed her to questions about the emerging Internet. "I always had a curiosity about where change was coming from... Technology is one of them, and you could argue policy and politics is also one of them." (02:12)
- Intersection with Politics:
Her entry into the European Parliament coincided with the rise of social media and the Obama campaign, allowing her to observe technology’s political impact firsthand. She describes her first major policy focus as being at the intersection of human rights and technology, prompted by the Arab Spring and direct exposure to activists' experiences with state surveillance.
- Wake-up Call on European Tech:
When meeting Iranian activists in Turkey, she learned that European-made surveillance tools were used by dictatorships—a revelation that pivoted her focus to tech accountability: "[It] was a real wake up call for me... our 'European made technology' was the tool of choice to repress." (06:51)
- Broader Tech Policy Work:
This evolved into broader involvement with antitrust, copyright, cybersecurity, and export controls, always centering on human rights and democratic values.
European Tech Regulation: From Data Privacy to AI
- GDPR as a Key Precedent:
While she wasn't a lead GDPR architect, Schaake underscores its importance as both a response to European history and the moment US tech needed to seriously comply with EU law: "It was the first time US companies started to take EU regulations seriously." (13:10)
- Cultural Drivers:
The privacy focus is rooted in Europe’s experience with fascism and communism, which informs the regulatory philosophy. Allen adds: "If you've ever been to Berlin and you haven't been to the Stasi Museum, you've got to go... it turns out that one out of every 67 East Germans was either working directly as a Stasi employee or was an officially... known informant." (11:12)
- Enforcement & Tunnel Vision:
GDPR’s key lesson is the importance of enforcement, not just legal text, and the risks of tunnel vision (e.g., focusing on privacy while missing early AI developments).
From Tech Policy to AI Focus
- Natural Progression:
Schaake describes AI as a natural next step for anyone committed to tech policy, especially following her move to Stanford and her appointment to a new institute focusing entirely on AI.
Her role grew globally, including a UN advisory appointment and, ultimately, chairing the EU Code of Practice safety group: "It's just grown by the various things I've done. And then I became one of the chairs of this code of practice that I know you'd like to focus on." (15:36)
The EU AI Act and Code of Practice: Context and Structure
Legislative Background
- The AI Act is a landmark law with a risk-based approach. Most AI use cases are low risk; special obligations apply at higher risk tiers, ranging from “low” to “unacceptable.”
- The breakthrough of generative AI models complicated the original risk taxonomy — since such models can serve “infinite use cases.”
- The Code of Practice provides compliance clarity, especially for General Purpose AI (GPAI) models and those with “systemic risk.”
Code of Practice vs. AI Act
- Not Legally Binding, but...
The Code is not a separate law but a guideline for how companies can reliably comply; signing on shows “good faith” and prescribes specific compliance steps. "The law applies to anyone... Even if you're a company that doesn't want to sign the AI Code of Practice, you still have to comply with the AI Act, but then it's sort of up to you how you want to do it..." (20:39)
- Signatories and Pushback:
Some companies (e.g., Google, OpenAI) have signed on; others (e.g., Meta) have not, preferring to chart their own path and potentially challenge enforcement.
What’s in the Safety & Security Chapter? (24:22–29:35)
- Who is Covered?
Differentiates between GPAI and GPAI with systemic risk. Risk assessment is partly company-led, with further clarifications in development.
- Obligations:
- Formal risk assessments
- Rigorous, documented mitigation steps
- Executive responsibility (not siloed to junior staff)
- Whistleblowing provisions
- Ongoing documentation for a specified period
- Notification of the EU AI office and relevant board(s)
- Notable Quotes:
"It basically spells out a way... companies can choose to do other ways, but a way to be rigorous, but also identify the people within the companies... who should have executive responsibility so that it's not some, you know, niche group of juniors who are assessing the most serious risks..." (24:22)
"There's also provisions for whistleblowing..." (25:35)
- Systemic Risk Designation:
Criteria for systemic risk are still being developed (linked to model power, reach, and usage), and companies are mainly responsible for self-designation until further standards emerge.
The Big Systemic Risks Targeted (32:06–38:08)
- Categories:
- National Security (e.g., bioweapons, cyber weapons)
- Public Health
- Loss of Control (e.g., “runaway” models)
- Downstream, unpredictable consequences
- Dynamic Approach:
The framework must adapt, given rapid AI advancements and unforeseen use cases. "You write the law at a moment where this technology is evolving so quickly... but you also need to be flexible..." (32:39)
- Allen’s Perspective:
Allen highlights the singular risk of enabling bad actors with tools for devastating consequences (e.g., bioengineered pandemics, cyberwarfare), even if most actors are responsible. "If nuclear weapons were as hard to build as Legos, we would all be dead... the plausibility of a human-created pandemic... And AI is one of these areas..." (34:43–35:18)
Safety and Security Frameworks: Function and Enforcement (41:50–48:18)
- Key Deliverables for Companies:
- Safety and Security Framework
- Model-specific safety/security reports
- Incident (especially “serious incident”) reporting
- Some public disclosures
- Regulatory Focus:
Emphasis is on “upstream” providers (large model developers) rather than “downstream” third-party implementers or end users, both for practicality and for risk-mitigation potency. "...from a practical perspective, where would you want to intervene? Would you want to monitor the handful of companies that have these unique capabilities, or would you want to have compliance from... thousands of smaller players downstream? ...It makes no sense from a practical enforcement point of view either..." (43:38)
- Challenges:
- Proprietary nature of AI models makes assessment difficult
- The unpredictability of frontier AI models challenges classical legislative frameworks
"...nobody really knows how generative AI works the way it does...it's hard to work with existing law and enforcement models to anticipate so much uncertainty." (47:04)
Measuring Success & The Road Ahead (48:18–50:12)
- Success Metrics:
- Improved agility of the whole enforcement process (for regulators and companies)
- Companies discovering and mitigating risks that might otherwise go unnoticed
- Trust-building: Companies and the public becoming more confident in AI safety and societal benefit
- Public and confidential sharing of lessons learned
- Aspirational Take:
Schaake hopes that the process, while initially contested, will prove mutually beneficial, contributing to global standards and reliable innovation.
Notable Quotes & Memorable Moments
- On European Privacy Motivation:
"Part of why data protection has always been such a key issue for Europeans is because of our recent past with fascism...That's, you know, clearly an anchoring identity for Europeans..." (10:40)
- On Public Risk Perception:
"There are also problems in the here and now that could seriously impact the trust that people have in AI writ large...impact on democracy being another one that I think is really important." (39:03)
- On Enforcement Challenges and Uncertainty:
"How unpredictable AI ends up being...that is a risk in and of itself. Because...nobody really knows how generative AI works the way it does." (47:04)
Engaging, Thoughtful Takeaways
- The EU AI Act’s Code of Practice is a living document meant to offer both clarity and adaptability to rapid technological change.
- Its main aim is to create a transparent, enforceable, and trust-building environment for the development and deployment of high-impact AI—while remaining flexible enough to adjust as new risks emerge.
- The legal and policy architecture is designed to encourage responsible innovation but also empower whistleblowers and ensure that responsibility for safety is held at the highest levels within companies.
- Both Schaake and Allen agree that the most dangerous risks—like bioweapons and runaway models—demand special vigilance, robust prevention, and international cooperation.
Select Timestamps for Important Segments
- 02:00 – Schaake’s early policy interests and entry into technology
- 06:30 – European tech and surveillance: early lessons
- 12:00 – Surveillance and historical context for European privacy
- 15:15 – Schaake’s roles at Stanford and global AI perspective
- 17:22 – Overview of the AI Act’s design and rationale for Code of Practice
- 24:22 – Safety & Security chapter explained
- 32:06 – What are systemic risks?
- 34:24 – Allen’s analysis on bioweapons and AI risks
- 41:50 – Safety and Security Frameworks, and practical enforcement
- 48:18 – Success criteria and looking ahead
Tone and Style
The conversation is candid, informative, and pragmatic—laced with personal anecdotes, historical context, and clear-eyed acknowledgment of policy challenges. Both speakers blend caution about real risks with optimism about responsible governance.
