GovDiscovery AI Podcast with Mike Shanley
Episode 68: AI and National Security with Dr. Marina Theodotou
Released: November 12, 2025
Episode Overview
This episode of the GovDiscovery AI Podcast features Dr. Marina Theodotou, Executive Director of the Center for Frontier AI Security (CFAS). Drawing on extensive Department of Defense experience and a leadership role on the Defense Innovation Board, Dr. Theodotou discusses the evolving landscape of AI governance, the AI race between the US and China, challenges in operationalizing AI security recommendations, and the importance of stakeholder collaboration for national security. The episode delivers actionable insights, with a special focus on bridging policy and practice: the “think and do” approach.
Key Discussion Points and Insights
1. The Global AI Race and US National Security
- Jensen Huang’s Statement ([01:07]):
- Mike opens by referencing Jensen Huang, CEO of Nvidia, who recently claimed “China is going to win the AI race.”
- Dr. Theodotou’s reaction: “My first reaction was what can we do differently to prevent that from happening? … What we really want is the world to run on a US and allied AI stack.” ([01:21])
- US Strategic Positioning: The Center for Frontier AI Security aims to ensure global adoption of safe, reliable, and aligned US/allied AI systems.
2. Current State of AI Governance in US National Security ([02:08])
- Rapid Advancements & Leading Frontier Models:
- Six leading models from Amazon, OpenAI, Anthropic, Google, Microsoft, and Meta are at the edge of current capabilities.
- Policy & Research Efforts:
- “A lot of great work happening... especially the version passed by the Senate a few weeks ago has at least four sections that are focusing on national security, security and AI in particular.” ([03:10])
- Related work is also underway across several states, think tanks, and research groups (RAND, CSET, MIT, Apollo Research UK).
- Implementation Gap:
- “What we are not seeing is implementation of that great research and operationalizing that research within national security.” ([04:25])
- CFAS’s role: Uniting government, industry, academia, and VC to develop and deploy operational standards.
3. Making Recommendations Stick: From Policy to Practice ([05:48])
- Keys to the 40% DoD Adoption Rate:
- “Engage stakeholders early and often and make the recommendations practical and actionable. Because when we were very specific... it was easier for the stakeholders to take them on and adopt them.” ([06:22])
- Specificity and ownership of recommendations are crucial: “If the recommendation is too broad, then nobody owns it.” ([06:54])
4. The Compute Power and Energy Challenge ([07:45])
- Critical Infrastructure Requirements:
- “Compute power requires data centers, and data centers require copious amounts of energy... The more powerful the chips, the more energy needed.” ([08:17])
- Water for cooling is another resource constraint, and China is advancing rapidly on these fronts.
- A recent collaboration was noted: Amazon and Anthropic announced a major new data center.
- CFAS’s Focus:
- Though compute and energy are critical, CFAS is focusing on “securing AI and operationalizing AI.” ([09:45])
5. Insights from the First CFAS Meeting ([10:56])
- Interoperability and Organizational Change:
- 44 leaders from top AI companies, former government officials, academia, and VCs came together virtually.
- Validation of CFAS’s mission to “operationalize AI”—create secure, reliable, and nationally aligned standards.
- Key Gaps Identified:
- Lack of interoperability in safety plans across frontier models.
- Differing definitions of risk: “That may sound basic, but it’s actually very critical if you’re in a contested environment.” ([13:05])
- Call for a National Security-Specific Framework:
- Current NIST risk framework exists, but “it is not national security specific.” ([12:15])
6. Vulnerabilities in GenAI and Autonomous Systems ([15:03])
- Core Vulnerabilities:
- Main risk is adversarial takeover or “rogue” behaviors: “The clear vulnerability is if someone gets into the AI adversary and creates a system that goes rogue.” ([15:30])
- Operational Interoperability:
- Ensuring unmanned/AI-augmented systems can communicate and coordinate, both within US forces and among allies, is a key operational challenge.
- “It’s an imperative that all of our unmanned systems... have interoperability across the different AI capabilities and unmanned systems so they all talk to each other.” ([17:00])
- Decision Advantage:
- Interoperable, aligned systems enable “decision advantage, operational efficiencies that drive lethality.” ([17:52])
7. NDAA 2026 and Funding Outlook ([18:14])
- Funding for AI Security:
- Not final yet, but both House and Senate versions show an emphasis on “AI security.”
- “It would be premature to comment because we will have to see the final approved version.” ([18:45])
8. Roadmap for CFAS Success ([19:29])
- User-Focused Standards:
- “The operator is the user first. So what is it that they need to have and how do we work backwards to create a framework of standards that supports what the operators need?” ([19:41])
- Building a “Minimum Viable Product”:
- Down-selecting from the more than 1,600 AI risks catalogued by MIT to shape a pragmatic, foundational standards framework in year one.
- “The goal is to be the think and do tank... close the gap between policy and implementation.” ([21:37])
- Engagement with Stakeholders:
- CFAS aims to activate broad participation from industry, AI companies large and small, academia, think tanks, VCs, and government.
9. How to Get Involved with CFAS ([23:14])
- Contact and Updates:
- Follow CFAS on LinkedIn (Center for Frontier AI Security).
- Visit the website: cfas.online to join their distribution list and submit partnership inquiries.
Notable Quotes & Memorable Moments
- On the International AI Race:
  “What we really want is the world to run on a US and allied AI stack. And that's exactly where the Center for Frontier AI Security is positioning itself.”
  — Dr. Marina Theodotou ([01:38])
- On Bridging Research and Ops:
  “There's a lot of great policy and research... What we are not seeing is implementation... What we would like is to bring together government, industry, academia, think tanks, and VC to co-develop and deploy a standards framework.”
  — Dr. Marina Theodotou ([04:23])
- On Recommendations and Stakeholder Engagement:
  “If the recommendation is too broad, then nobody owns it. You have to really be specific and engage the stakeholders and make them and encourage them to own those recommendations.”
  — Dr. Marina Theodotou ([06:54])
- On Security Gaps Across Models:
  “We found several areas of alignment, but also five key areas where there are significant gaps. One is lack of interoperability across the frontier model safety plans, differing definitions of risk... If basic things like definitions of risk are not the same... our operators are having to use different models to make those decisions on the spot.”
  — Dr. Marina Theodotou ([13:05])
- On the Role of Unmanned Systems:
  “The only way we can win the fight is if we can align and deploy our best AI with our best unmanned systems in scale and speed.”
  — Dr. Marina Theodotou ([16:25])
- Setting the CFAS Agenda:
  “The goal is to be the think and do tank... There is great work being done, but we want to close the gap between policy and implementation.”
  — Dr. Marina Theodotou ([21:37])
Important Timestamps
- [01:07] — Jensen Huang’s “China will win the AI race” claim and implications
- [02:08] — Overview of AI governance and the state of research
- [05:48] — Key success factors in operationalizing recommendations
- [07:45] — Computation and energy constraints in the AI race
- [10:56] — Insights from first CFAS convening, interoperability challenges
- [15:03] — AI vulnerabilities in autonomous/unmanned systems
- [18:14] — AI funding outlook via NDAA 2026
- [19:29] — CFAS’s first-year success criteria and roadmap
- [23:14] — How listeners and organizations can connect with CFAS
Final Takeaways
Dr. Marina Theodotou and the Center for Frontier AI Security are laser-focused on closing the gap between excellent AI policy research and real-world, national security-grade implementation. Their approach of uniting a broad array of stakeholders and prioritizing practical, user-driven standards may help the US and its allies maintain a competitive edge in the AI race and operationalize AI security. Organizations interested in partnership or updates can connect with CFAS via LinkedIn or their website.
