Hard Fork: “At the Pentagon, OpenAI Is In and Anthropic Is Out”
The New York Times | March 1, 2026
Hosts: Kevin Roose (A), Casey Newton (B)
Overview
This episode explores a major shakeup in the relationship between the US government and two leading AI labs: Anthropic and OpenAI. Just as a Pentagon deal with Anthropic collapses amid intense political and ethical disputes, OpenAI steps in to sign its own agreement, raising questions about regulatory capture, government overreach, and the future of AI deployment in the military. Kevin and Casey break down the drama, analyze the contractual and ethical nuances, and discuss the wider implications for Silicon Valley and US tech policy.
Key Discussion Points & Insights
The Crisis: Anthropic vs. Pentagon
- In the days leading up to the episode, Anthropic refused Pentagon demands to drop two “red lines” for AI deployment: no mass domestic surveillance and no fully autonomous weapons ([02:07]).
- Dario Amodei (Anthropic CEO) publicly stated: “These threats do not change our position. We cannot in good conscience accede to their request.” ([03:16], A quoting Dario)
- Casey: “I cannot remember any tech leader invoking conscience as a reason not to do something since Trump has been reelected.” ([04:43])
- Trump and Pentagon escalate:
- Trump’s Truth Social post: “The United States of America will never allow a radical left woke company to dictate how our great military fights and wins wars.” ([05:38], A quoting Trump)
- Orders a phase-out of Anthropic technology from federal agencies, but does not officially designate the company a “supply chain risk” at first ([06:18]).
- Hours later, Defense Secretary Pete Hegseth, on X: “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” ([07:21], A quoting)
- This supply chain risk designation is unprecedented for an American tech company ([08:02], B).
OpenAI Steps In
- While the Pentagon-Anthropic drama unfolds, OpenAI quietly negotiates its own deal, reportedly maintaining similar red lines ([08:28]-[10:32]).
- Sam Altman (OpenAI CEO) announces the company’s agreement with the Pentagon on X, claiming “we have confidence that our models will not be used for domestic mass surveillance and autonomous weapon systems, and that the Pentagon had agreed with those principles, and then they put them into our deal.” ([09:47], A quoting Altman)
- Confusion reigns over whether OpenAI genuinely secured stronger protections, or if they accepted similar “legalese” Anthropic had rejected ([10:49]-[13:25]).
- “If you just sort of zoom out and look at the facts of the case, it is a truly insane series of events.” ([10:42], A)
Contractual “Nuance” and Legal Grey Zones
- The dispute seems to hinge on the “all lawful use” standard: the Pentagon wants to be able to use AI for any application that is not explicitly illegal ([11:15], B).
- But, as Casey notes, “We do not have a national privacy law. …These are among the reasons that Anthropic has become very concerned about what powerful AI systems might do if they were given to the military in a country where there are not actually laws.” ([11:44])
- Example: Federal agencies using AI to scan social media of immigrants ([12:11], B).
- “Anthropic has said, we’re serious about this stuff… this is what I see happening here and seems like a significant part of the conflict.” ([12:56], B)
Political Vendetta or Legal Difference?
- Kevin outlines two theories ([15:25]):
- Political vendetta: the administration simply prefers OpenAI, perhaps for ideological reasons, and has singled out Anthropic as “woke.”
- “OpenAI has been chosen for this contract because the administration likes them more.”
- Substantive legal differences in the contracts: OpenAI may have quietly accepted the Pentagon language that Anthropic rejected.
- Casey: “That is how the Chinese government regulates its tech companies. Either you get on board with the party or they crush you. Right. So that I think is really chilling. And again, not just to me, to former members of the Trump administration.” ([16:49], B)
Unprecedented Government Action Against Tech
- “...this fight with Anthropic and the Pentagon is, by a fairly wide margin, the most punitive action that the US Government has taken against a major American company at least this century and possibly ever.” ([17:50], A)
- Silicon Valley’s political realignment: Companies lurching rightward to avoid government hostility ([18:35], B).
Employee Activism
- A widely signed solidarity letter from staffers at OpenAI, Google DeepMind, and other labs: “We stand with Anthropic. We also do not want to make tools for mass domestic surveillance and autonomous killing.” ([19:07], A)
- Matters because employees wield leverage over tech deployment to the military ([19:48]).
- “I hope that those employees get a hold of the contracts that their employers are signing and really scrutinize them. …We really are going to need to rely on these employees in the coming years.” ([20:01], B)
"Safety Stack" & Security Theater
- OpenAI touts “safety stack” guardrails to prevent prohibited uses ([20:39], A quoting Altman’s language).
- Casey, skeptical: “This is the same company that told us it was going to build safeguards to make sure that Sora couldn't be used to make images of Bryan Cranston… sometimes when OpenAI tells you it's going to build guardrails, they don't actually show up on time.” ([21:34], B)
- Kevin: “If you dump a bunch of data that you’ve collected on Americans... it is not going to be able to tell whether that information was legally gathered... so this is not really a meaningful change.” ([21:48], A)
Data Brokering and Surveillance Loopholes
- Legal loopholes: Data brokers sell personal data to federal agencies, enabling practical domestic surveillance without technically violating the law ([22:12], B).
- “The Pentagon already has all of the tools it needs… It’s just not called that because it’s legal…” (ibid.)
High Stakes & The Regulatory Capture Debate
- Casey: “Is it the people who build the technology, or is it the militaries and the governments…?” ([23:00], B)
- Kevin: “This is a company, OpenAI, coming into a very hot dispute between their biggest rival and the United States government … effectively using what seem to be vibes, charm, possibly some better political instincts to get a deal done…” ([26:12], A)
- Classic definition of regulatory capture in play ([26:12], A).
Unresolved Questions & Looking Forward
- What will Pentagon’s “supply chain risk” designation actually do to Anthropic? Are there hidden legal or business consequences? ([28:08]-[29:37]).
- Will the true details of the OpenAI contract ever come out? Was OpenAI’s “win” a matter of better politics, or a meaningful ethical/risk victory?
- Can employee pressure or public opinion shape company decisions?
- Anthropic’s internal culture: Dario Amodei emphasized the Manhattan Project analogy to foster a sense of moral gravity around AI ([29:46], A).
Notable Quotes & Memorable Moments
- A (quoting Dario Amodei): “These threats do not change our position. We cannot in good conscience accede to their request.” ([03:16])
- B: “I cannot remember any tech leader invoking conscience as a reason not to do something since Trump has been reelected.” ([04:43])
- A (quoting Trump): “The United States of America will never allow a radical left woke company to dictate how our great military fights and wins wars.” ([05:38])
- A (on the situation’s chaos): “It is a truly insane series of events.” ([10:42])
- B: “That is how the Chinese government regulates its tech companies. Either you get on board with the party or they crush you. Right. So that I think is really chilling.” ([16:49])
- A: “This fight with Anthropic and the Pentagon is... the most punitive action that the US Government has taken against a major American company at least this century and possibly ever.” ([17:50])
- B: “Until now, though, we hadn’t actually tried to see the Trump administration try to crush a company. But now we have... I can’t imagine what kind of chilling effect that is going to have across Silicon Valley.” ([18:35])
- B (on safety stack promises): “This is the same company that told us it was going to build safeguards to make sure that Sora couldn’t be used to make images of Bryan Cranston…” ([21:34])
- B (on data privacy loopholes): “It is legal for data broker companies to buy up data on millions of Americans and it is also legal for federal agencies to buy that data. …It is functionally equivalent [to surveillance].” ([22:12])
- A (on Anthropic’s culture): “One of Dario’s favorite books… is called The Making of the Atomic Bomb… He believed... AI models would become as important to national security, to the government, to the future of the global order as nuclear weapons.” ([29:46])
Important Timestamps
- 02:07 – Anthropic’s crisis point; their red lines and refusal to compromise
- 03:16 – Amodei’s conscience statement
- 05:38 – Trump’s Truth Social escalation
- 07:21 – Hegseth bars military contractors and suppliers from doing business with Anthropic (the “supply chain risk” designation)
- 08:28 – OpenAI begins negotiations for its own deal
- 09:47 – Sam Altman’s deal announcement
- 10:42 – Hosts remark on the surreal state of affairs
- 11:15 – Breakdown of the “all lawful use” contract language
- 19:07 – Employee activism and open letters
- 20:39 – OpenAI’s “safety stack” and skepticism
- 22:12 – Legalized data brokering and domestic surveillance loopholes
- 26:12 – Regulatory capture and OpenAI’s political maneuvering
- 29:46 – Anthropic’s “Manhattan Project” mindset instilled by Amodei
Closing Thoughts
Kevin and Casey emphasize that the episode’s events are not mere contractual dramas, but existential questions about the control and ethics of artificial intelligence at a national and even global level. The clash between Anthropic, OpenAI, and the Pentagon is a harbinger of future conflicts as AI becomes core to military, government, and social infrastructure. The ultimate resolution—and the real details of these secretive agreements—remain uncertain, but the stakes could not be higher.
For more, listen to the full episode on your podcast app or the NYT website.
