80,000 Hours Podcast: Helen Toner on the Geopolitics of AI in China and the Middle East
Date: November 5, 2025
Host: Rob Wiblin (co-host Luisa Rodriguez does not appear in this episode)
Guest: Helen Toner (interim executive director, Center for Security and Emerging Technology)
Brief Overview
This episode features an in-depth conversation with Helen Toner, interim executive director at the Center for Security and Emerging Technology (CSET), on the global landscape of AI—focusing on China, the Middle East (especially the Gulf), export controls, competition for AI dominance, policy responses, and concerns about both the diffusion and concentration of power as AI advances. The discussion traverses both technical and geopolitical terrain, providing grounded insights for anyone interested in how AI is shaping security, geopolitics, and policy.
Key Discussion Points & Insights
1. Export Controls and the Global Semiconductor Supply Chain
- Export Controls on Chips and Equipment: CSET's early work influenced U.S. policies restricting China's access to advanced chips and semiconductor manufacturing equipment (10:43–11:09). Toner distinguishes between restricting finished chips (semiconductors) and the tools to make them (semiconductor manufacturing equipment, SME). "...if you want to prevent a country from having light bulbs, you're going to have a really hard time because if you prevent them from buying your light bulbs, they're going to be able to buy them anywhere else. ...the key thing about these pieces of [SME]...if they really are a choke point...you can potentially really slow down China's efforts to build up its own domestic supply chain..." (11:09, Helen Toner)
- Effectiveness and Implementation Issues: Toner notes SME controls are under-implemented, often overshadowed by a focus on chips, and that licenses have sometimes been granted in questionable cases (14:39–14:58).
- China's Reaction and Strategic Calculus: China had already been preparing for tech decoupling (15:20). The export controls did not create a major backlash, partly because they were expected (15:20–17:23).
- Debate Over U.S. Allowing Nvidia Sales to China: The U.S. debate lacks clarity on objectives: maintaining market share, slowing military AI, or denying general progress? (18:18–21:25)
2. AI and Geopolitics in the Middle East
- UAE Deals and Democracy Rhetoric: American companies and the U.S. government have announced provisional deals to build large AI computing facilities in the Gulf, notably the UAE. Toner points out the contradiction in framing this as a "win for democracy" in an autocratic state (00:00–00:56, 59:24–62:25). "It certainly looks cynical to me. I think it's really playing fast and loose with what sort of democracy means, what democratic AI means." (59:45, Helen Toner)
- Risks of Empowering Autocracies: Concerns about giving world-class computing power to highly repressive regimes, on both moral and strategic grounds (56:58–59:24, 65:17).
3. The US–China AI Race: Myths vs. Realities
- Pervasive "Race" Rhetoric vs. Reality: Toner questions the winner-take-all "race" framing; the nature and outcomes of AI competition with China are less clear and more nuanced than often presented (39:04–40:34). "I think the shape of the competition is actually pretty unclear. And when people treat it as though it is very obviously just this sort of winner-take-all race, I think that is a pretty risky proposition..." (40:34, Helen Toner)
- Is China Racing to AGI?: Chinese policy emphasizes both applications ("AI Plus") and general-purpose/AGI aspirations, although the leadership may be less "AGI-pilled" (27:10–30:25). "They can walk and chew gum at the same time. And they are continuing to also emphasize AGI, general purpose AI. ... It's very clear that they are right now pushing for both." (27:20, Helen Toner)
- China's Capabilities Gap: The gap between U.S. and Chinese frontier models has narrowed considerably but is still significant, estimated at 6–12 months (43:48–46:41). "The smallest gap that we have observed is three months... but ... no Chinese company has previewed a similarly sophisticated model [as OpenAI had nine months ago], which means we're now at something like a nine-month gap." (43:48, Helen Toner)
4. Soft Power, Market Share, and Open Source AI
- "AI Stack Diplomacy" and Soft Power: Toner argues it benefits the U.S. for other countries to adopt its AI models, but the major strategic value is soft power, not direct security (21:25–24:34). "...there's just a lot of cases where you get some kind of broad, diffuse soft power benefits from being the provider of a key technology." (22:08, Helen Toner)
- Open Source AI, Benefits and Risks: Open sourcing is good for soft power, but opening the most advanced models could pose risks (25:06–27:10).
5. Concentration vs. Diffusion of Power in the Transition to Advanced AI
- Power Concentration Risks: If AI remains capital intensive (training requires enormous compute), power may naturally concentrate with big tech companies or rich states (66:38–69:15).
- Tension with Risk Management: AI safety advocates sometimes favor power concentration because it makes coordination easier, but this has worrying historical precedents for abuse (69:15–70:54, 71:45–75:07). "I think there's often a sense of, well actually that concentration is valuable. ... The problem is then... you have this incredibly powerful technology in the hands of a very small number of people. I think just historically that's been really bad." (69:15, Helen Toner)
- Desire for "Muddling Through" vs. Comprehensive Solutions: AI policy debates are complicated by uncertainty about risk: some favor maximal control, others fear over-centralization. Toner suggests we cannot know which outcome to prioritize, so a more flexible, adaptive policy stance is justified (77:07–79:05).
6. Adaptation Buffers vs. Non-Proliferation in AI (Especially for Bio/Cyber Risks)
- Limits of Non-Proliferation Strategies: Toner argues non-proliferation for AI models will not succeed long-term, because capabilities diffuse rapidly once demonstrated (88:38–92:43). "...as soon as we build that for the first time, the amount of computing power ... to build that is going to fall pretty rapidly over time... If you have a determined actor, a bad actor who is trying to misuse these models, they will find a way to do it." (88:38, Helen Toner)
- Adaptation Buffers: Instead, we should focus on adaptation buffers: using the time before broad capability diffusion to prepare resilience and mitigations.
Notable Quotes & Memorable Moments
On Export Controls and Chinese Capacity
“A big thing that the companies will say is if the US doesn't sell it, then China will sell it. So that China is waiting outside the door. If the deal doesn't go through with the US, they'll just offer the exact same deal.”
— Helen Toner [00:14 and 62:25]
“But China can't even make the chips.”
— Rob Wiblin [00:18 and 62:25]
“Exactly, exactly. So it's totally disconnected from the reality of what they can do.”
— Helen Toner [00:20 and 62:26]
On UAE's Political Situation
"The UAE is an autocratic country. Political parties are banned, they do mass trials of dissidents, they persecute the families of dissidents. Economy runs on immigrant labour... It's a hereditary autocracy with a royal family that is going to stay in power."
— Helen Toner [00:20, repeated throughout 56:58–59:24]
On the OpenAI Board Firing/Attempt Incident
“...for pretty much every single X, it was something that we were thinking about, we considered carefully, we had good reasons not to do. And in most cases I still stand behind those reasons.”
— Helen Toner [02:25]
On Contradictions in U.S. AI Race Policy
“...there's a bunch of stuff that you probably would be doing, like encouraging high-skill immigration, if you wanted to be, I guess, as far ahead of China as you possibly could be; things that aren't happening, indeed it's kind of going the other way. How can you make sense of that?”
— Rob Wiblin [48:17]
“I make sense of it by there being sort of different factions inside the Trump administration... them not necessarily coming together into a coherent policy vision.”
— Helen Toner [49:14]
On Future Global AI Development
“Is it crazy to think that ... superintelligence could first be trained in Saudi or the UAE?... Is it possible that superintelligence could be developed in Saudi first?”
— Rob Wiblin [64:36]
“I think it's possible. I would say the UAE more than Saudi Arabia is my impression.”
— Helen Toner [65:17]
On the Challenge of AI Risk Policy
“So many of the obvious solutions that you might have or approaches you might take to dealing with loss of control do make the concentration of power problem worse and vice versa... the people who think it's 50% likely that we have some catastrophic loss of control event ... are going to say this is a terrible move that you're making, because we're accepting much more risk, we're creating much more risk than we're actually eliminating.”
— Rob Wiblin [77:07]
On DC Policy Conversations
"Policymakers talk to companies because the companies can tell them what is real and what is realistic and how the technology actually works... And that is a place that CSET can step in as well...having a broader public interest in mind."
— Helen Toner [130:06]
Timestamps for Important Segments
- UAE, Nvidia chips, and autocracy — [00:00–00:56]
- OpenAI board firing incident — [01:32–05:27]
- CSET's mission and work — [07:36–10:43]
- Semiconductor export controls explained — [10:43–14:58]
- US–China AI chip debate — [17:23–24:34]
- Power concentration and the risks of AGI transition — [66:38–77:07]
- Non-proliferation vs. adaptation buffers — [88:38–95:58]
- Military AI applications & their (slow) adoption — [96:09–99:43]
- OpenAI's restructuring and nonprofit control — [115:28–118:51]
- CSET’s unique role and remaining independent — [126:01–132:49]
Additional Topics Covered
- Open Source AI and the American Truly Open Models Project (25:06)
- The need for more technical and interdisciplinary talent in AI policy (133:26–134:43)
- Challenges of keeping up with fast-moving developments and news sources (122:42–125:52)
- Helen’s "normalcy", accent, and code-switching anecdotes (118:57–122:42)
Overall Tone and Language
Helen Toner brings careful, analytic, and sometimes skeptical clarity to issues often discussed with sweeping rhetoric. The tone is thoughtful, nuanced, and occasionally ironic—inviting listeners to think beyond soundbites about “AI races” and simplistic US-vs.-China, democracy-vs.-autocracy frames. The conversation is replete with policy nuance and an appreciation for historical and technical detail.
Summary: Takeaways for the AI Policy-Minded Listener
- The geopolitics of AI is defined by real technological constraints (manufacturing, compute), not just abstract competition.
- Gulf state chip deals may trade short-term strategic benefit for long-term risks of empowering autocracies—and require hard-headed, not rhetorical, justification.
- "AI race" rhetoric is often unhelpful and oversimplified; the real dynamics of competition and risk are more complex and multi-layered.
- Policies should account for the near inevitability of AI model proliferation and focus on building adaptation and resilience rather than assuming permanent control is possible.
- There’s much value in having technically informed, nonpartisan analysis to bridge industry, policy, and security priorities—and in building a policy community that can think dynamically about new risks and opportunities as AI progresses.
For more insights, read Helen Toner's Substack [helentoner.substack.com] and CSET's monthly newsletter for further expert analysis.
