
Hosted by the Center for Strategic and International Studies

In this episode, founding host Gregory C. Allen announces his departure from CSIS and introduces Aalok Mehta, Director of the Wadhwani AI Center, as the new host of the AI Policy Podcast.

In this episode, we are joined by Wadhwani AI Center fellow Kateryna Bondar to discuss her recent reports on Russia's military AI, "How Russia Is Building a Sovereign Drone Ecosystem for AI-Driven Autonomy" and "How Russia Is Reshaping Command and Control for AI-Enabled Warfare." We cover Kateryna's background (1:07) before doing a deep dive into the role technological innovation has played in the conflict in Ukraine (7:49). Kateryna then explains why AI capabilities in warfare "cannot be built, can only be grown" (22:24) and unpacks her finding that Russia has likely fielded a fully autonomous unmanned system in combat (53:02). Read Kateryna's report on Russia's AI-enabled C2 architecture here. Read her report on Russia's sovereign drone ecosystem here.

In this special episode, we sit down with Katrina Manson, author of Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare. We explore what drew Katrina to this story (2:31), trace the turbulent history of Project Maven and the obstacles it has overcome (8:03), examine how the U.S. leveraged Maven to support Ukraine against Russia (31:32), and discuss Katrina's latest reporting on Maven's role in the ongoing conflict with Iran (47:40). You can order a copy of Katrina's book here.

In this episode, we unpack President Trump's new national framework for AI legislation (11:15), including reactions from experts and policymakers (39:44). We also discuss the indictment of a Super Micro co-founder for smuggling Nvidia chips into China (42:59) and Nvidia receiving permission to sell H200 chips to China (57:04).

In this episode, we provide a detailed update on the Anthropic-Pentagon clash, including the Trump Administration's decision to label Anthropic "a supply chain risk" (4:17), the lawsuits Anthropic has filed in response (11:45), and what these lawsuits and recent reporting reveal about how Claude has been used in the war in Iran (43:20).

In this episode, we're joined by Owen Larter, Head of Frontier Policy and Public Affairs at Google DeepMind, to explore the often-overlooked world of AI standards and the role they play in shaping how AI is developed and governed. We discuss what standards are and why they matter for technological progress (2:53), how standards are developed and the key organizations involved (16:05), the relationship between standards and AI regulation like the EU AI Act (26:58), and more.

In this episode of the AI Policy Podcast, Wadhwani AI Center senior adviser Gregory C. Allen is joined by Andreessen Horowitz Chief Legal and Policy Officer Jai Ramaswamy and Head of AI Policy Matt Perault for a discussion on a16z's AI policy agenda. They cover a16z's entrance into politics, its position on state and federal AI regulation, and how to ensure AI benefits society.

Jai Ramaswamy is Chief Legal and Policy Officer at Andreessen Horowitz, overseeing the firm's legal, compliance, and government affairs functions. Previously, he was Chief Risk and Compliance Officer at cLabs. He has also served as Head of Enterprise Risk Management at Capital One and Global Head of AML Compliance Risk Management at Bank of America/Merrill Lynch. Before joining the private sector, Jai worked for over a decade at the Justice Department, including as Chief of the Asset Forfeiture and Money Laundering Section.

Matt Perault is Head of AI Policy at Andreessen Horowitz, where he oversees the firm's policy strategy on AI and helps portfolio companies navigate the AI policy landscape. Before joining a16z, he was director of the Center on Technology Policy at the University of North Carolina at Chapel Hill. He also previously served as head of global policy development at Facebook. Matt is a fellow at the Center on Technology Policy at New York University, the Abundance Institute, and the National Security Institute at George Mason University's Antonin Scalia Law School.

In this episode, we break down the escalating Anthropic-Pentagon clash, including the best arguments for either side, Defense Secretary Pete Hegseth's ultimatum, and the potential consequences of designating Anthropic as a "supply chain risk" or invoking the Defense Production Act (00:34). We then discuss several recent stories that are sparking discourse about the economic impacts of AI (28:58) and a senior government official's claim that DeepSeek's forthcoming model was trained using Nvidia's Blackwell chips and frontier model distillation (45:51).

This special episode was recorded in India on the last day of the India AI Impact Summit. We discuss the highlights of our experience at the Summit (00:25), whether India accomplished its goals for the event (10:25), major AI investments announced (16:01), and key messages from speeches by AI CEOs (19:45) and government officials (28:35).

The second International AI Safety Report, released on February 3, brings together insights from over 100 AI experts across 30 countries to assess the current state of frontier AI systems. The report examines advanced models' capabilities, the risks they pose, and the technical and governance measures needed to ensure their safe development and deployment. In this episode of the AI Policy Podcast, Wadhwani AI Center senior adviser Gregory C. Allen is joined by lead writer Stephen Clare and MIT Ph.D. student Stephen Casper, who authored the section on technical safeguards. They discuss how the latest Safety Report compares to the first edition published last year, explore the Report's findings on technical safeguards, and unpack the document's key policy implications.