The AI Policy Podcast
Episode: A Crash Course on AI Standards with Google DeepMind’s Owen Larter
Date: March 6, 2026
Host: Gregory C. Allen (CSIS, Wadhwani Center for AI)
Guest: Owen Larter (Head of Frontier Policy and Public Affairs, Google DeepMind)
Episode Overview
This episode provides a deep dive into the complex but foundational world of AI standards—why they matter, how they are developed, their intersection with regulation and geopolitics, and what the future might hold. Owen Larter from Google DeepMind joins Gregory Allen to offer a comprehensive “crash course,” drawing on history, current industry practices, and emerging global challenges at the intersection of technology, policy, and governance.
Key Discussion Points & Insights
1. Owen Larter’s Path to AI Policy and Standards
- Background: Owen reflects on his transition from broader policy and politics to tech, including over a decade at Microsoft leading to a focus on responsible AI.
- “I ended up at Microsoft where I had 11 fantastic years... then towards the end of that worked in the Office of Responsible AI, which I joined just before the large language model revolution kicked off.” (00:49, Owen Larter)
- DeepMind Experience: Nine months into his role at Google DeepMind, Owen is still inspired by the caliber and pace of work, from research breakthroughs to real-world impact globally.
2. What are AI Standards? Definitions and Importance
- Standards Explained: Two types—formal (“capital S”) standards developed by bodies like ISO, and informal best practices or guidance recognized by industry.
- “A standard is just an agreed upon and repeatable way of doing something useful, essentially.” (03:10, Owen Larter)
- Historical Examples: From Ancient Egypt (cubits) and shipping containers to Internet protocols (TCP/IP, HTTP, URL), Owen grounds the idea of standards in millennia of technological progress.
3. Types and Functions of Standards
- Technical (Interoperability) Standards: Enable devices and systems to communicate and connect across organizational or national lines (e.g., USB, HTTP).
- Process Standards: Establish procedures for ensuring safety, reliability, and trust (e.g., airline safety checklists, crash test protocols).
- “[Technical standards like HTTP] are absolutely fundamental in terms of building the connected and useful Internet that we have.” (07:26, Owen Larter)
4. Benefits of Standards in AI
- Interoperability: Ensures systems and agents can integrate and coexist without bespoke engineering.
- “You need standards to help technology talk to each other. You need interoperability standards for things like agents as we move into a more agentic economy.” (08:29, Owen Larter)
- Trust & Safety: Facilitates acceptance by providing predictable, testable processes and outcomes.
- Catalyst for Growth: Standards help scale markets and lower entry barriers for innovation.
5. Challenges and Downsides of Standardization
- Process Can Be Slow: Formal standard-setting bodies like ISO operate on consensus and iterate over months or years—which may lag behind rapid advances in AI (16:04-18:59).
- Competing and Protectionist Standards: Nation-states may pursue divergent technical standards for protectionism rather than technical superiority, sometimes to promote local industries at the expense of interoperability.
- “Sometimes you see this phenomenon where standards are a source of protectionism.” (15:26, Gregory Allen)
- Intellectual Property Dynamics: The concept of "standard essential patents" means companies may push their proprietary tech as the standard—potentially creating monopolies or national leverage (18:59-20:50).
6. Making a Capital ‘S’ Standard – The Process
- Development Pathway: Multi-stakeholder bodies like ISO assemble expert groups to draft, debate, refine, and eventually ratify standards over 12-24 months (16:21-18:59).
- From Research to Standard: Emphasis is needed on building a pipeline—from research and best practices to formalized, international standards.
7. Google DeepMind’s Role in Standards Development
- Technical Standards:
- Agent2Agent (A2A) Protocol: “Basically a way of connecting agents so they can talk to each other...like a handshake that makes it very easy for them to interact.” (21:05, Owen Larter)
- Universal Commerce Protocol (UCP): For standardizing agents’ interactions with websites.
- Google’s approach: Releasing open-source protocols and collaborating broadly rather than solely pursuing formal ISO ratification (23:00–23:14).
- Process Standards:
- Participation in consortia (e.g., Frontier Model Forum) for knowledge sharing and best practices.
- Engagement with global institutions like NIST and the UK AI Security Institute.
8. Major Standards Bodies and Frameworks
- International Organization for Standardization (ISO)
- NIST (U.S. National Institute of Standards and Technology)
- Notably, its AI Risk Management Framework (AI RMF) is a widely referenced set of voluntary guidelines rather than a formal standard.
- CEN/CENELEC (EU standards bodies)
- Frontier Model Forum, AI Safety Institutes
- Emerging Initiatives:
- U.S. CAISI’s Agentic Standards Initiative (36:25)
- ISO/IEC 42001 on AI Management Systems (26:05)
9. Standards & Regulation – The EU AI Act Case Study
- Harmonized Standards: The EU AI Act relies on harmonized standards that are still being developed—conformity with them will grant a presumption of compliance—illustrating regulation’s dependence on flexible, evolving standards.
- “Some of this regulation is almost, not necessarily a placeholder, but gesturing to things that have not yet been developed, which...puts all the more of a premium on getting those standards right.” (27:31–29:14, Owen Larter)
- Voluntary vs. Mandatory Standards: Standards sit alongside or underpin regulation, sometimes becoming de facto requirements when referenced in law.
10. Geopolitics, Protectionism, and Standards
- International Competition: Standards can become proxies for geopolitical rivalry, as seen in the U.S. push to set international norms that reflect its values and economic interests.
- “I think it is a fundamental characteristic of AI technology that it is going to be developed and used across borders. I don't think it's in anyone's interest to have a sort of balkanized approach to the technology.” (32:00, Owen Larter)
- Leverage through Standards: Export controls can be highly effective when they target standard-essential technologies (33:00, Gregory Allen).
11. Interaction Between Standards & Industry Behavior
- Standards Drive Internal Practice: Google’s entire business was built on Internet standards; AI standards are anticipated to become just as foundational (34:37, Owen Larter).
- Mutual Shaping: As new standards emerge, they shape and are shaped by leading industry players.
12. Advice for Policymakers & Future Priorities
- Invest in Pre-standardization Research: Building robust knowledge and prototype best practices before codifying them (35:24, Owen Larter).
- Promote Expert-Led, Inclusive Standard Setting: Ensure multi-stakeholder, international, science-driven processes to keep standards open and interoperable.
- Support High-Quality, Globally Relevant Initiatives: Back strong work from groups like U.S. CAISI’s agentic standards effort and the global AI Security Institutes (36:25–37:30).
Notable Quotes & Memorable Moments
- “A standard is just an agreed upon and repeatable way of doing something useful, essentially.”
— Owen Larter (03:10)
- “In the early years of playing with electricity, a lot of people were setting their houses on fire... being an electrician was not a safe job. It’s a dangerous field. Early electricity was not inherently safe. We made it safe. We figured out what the right protocols, what the right standards, what the right technologies were.”
— Gregory Allen (11:11)
- “Sometimes you see this phenomenon where standards are a source of protectionism.”
— Gregory Allen (15:26)
- “Google is an organization that is literally built on the Internet standards that were developed through the 80s and 90s. I think the same thing is going to happen in the AI era as well.”
— Owen Larter (34:37)
- “I think the high quality bar is really, really important. And I think that [international AI Security Institutes] network’s work is very international. I think it has good representation from around the world. So I think it is an important institution for sure.”
— Owen Larter (37:30)
Important Timestamps for Major Segments
- Owen Larter’s background & entry to standards: 00:49–02:23
- What are standards? (historic & technical context): 03:10–04:52
- Types of standards (technical vs. process): 05:30–07:52
- Benefits and challenges of standards: 08:29–11:42
- Historical case study—Electricity & measurement units: 11:42–13:56
- Standards, protectionism & fragmentation: 13:56–16:04
- Making formal standards (capital ‘S’ standards): 16:21–18:59
- Standard essential patents and market power: 18:59–20:50
- Google DeepMind’s standards initiatives: 21:05–23:14
- Industry consortia and pre-standardization work: 23:14–24:46
- NIST AI Risk Management Framework & ISO/IEC 42001: 25:37–26:56
- Regulation tied to standards (EU AI Act): 26:56–29:14
- Geopolitical rivalry & international standards: 31:22–33:00
- How standards affect company behavior: 34:37–35:01
- Advice for policymakers: 35:24–36:12
- Notable current/future initiatives: 36:25–37:48
Concluding Takeaways
- AI standards are crucial for interoperability, safety, and trust, analogous to the role standards played in past technological revolutions.
- Standardization is both a technical and political process, requiring balance between deliberate consensus and rapid innovation.
- Global cooperation and expert-driven openness are essential to maximize AI’s safe and beneficial deployment—and to avoid fragmentation that slows progress and increases risk.
- Active investment in research, knowledge sharing, and inclusive processes will ensure the standards underpinning AI serve society’s needs as the technology evolves at exponential speed.
End of Summary
