THE LAWFARE PODCAST
Episode: Lawfare Daily: The Pentagon Designates Anthropic as a Supply Chain Risk
Date: March 3, 2026
Host: Benjamin Wittes (Editor in Chief, Lawfare)
Guest: Alan Rosenstein (Senior Editor, Lawfare; Professor at University of Minnesota Law School)
Overview
This episode examines the explosive recent decision by the Department of Defense (DoD) to designate Anthropic—a leading AI company and creator of “Claude”—as a supply chain risk, effectively cutting off its ability to do business with the U.S. government and government contractors. Host Benjamin Wittes and procurement law “sudden expert” Alan Rosenstein break down the Pentagon’s moves, the legal and policy implications, industry and public reactions, and the high-stakes court fight looming ahead.
This is a rich, in-depth exploration of rapidly unfolding tech policy politics at the intersection of law, AI, and national security.
Key Discussion Points & Insights
1. Background: Anthropic and DoD Tensions (05:03-08:52)
- Anthropic has cooperated with the government for years, providing AI services for military and classified purposes, with “red lines” barring its tech from use in mass surveillance of Americans and fully autonomous lethal operations.
- In January, Secretary of Defense Pete Hegseth issued a memo requiring all AI contracts to allow “all lawful uses,” rebuffing companies that wanted additional restrictions.
- The standoff intensified after reports that Claude had been used in a military operation against Nicolás Maduro (Venezuela), raising Pentagon concerns over the constraints Anthropic places on military uses of its technology.
- On Friday, President Trump posted on Truth Social announcing a ban on Anthropic from government systems; Hegseth followed with a post designating Anthropic a “supply chain risk,” barring not just the DoD itself but any DoD contractor from doing business with Anthropic.
- Notable quote:
“It’s a sanction. It’s essentially a sanctions regime against Anthropic. . . . If Hegseth’s designation is read to its utmost, it’s effectively a death sentence for Anthropic because Anthropic loses its capacity for compute.”
— Alan Rosenstein (07:51)
2. Why Now? Political, Personal, and Substantive Drivers (09:05-14:17)
- Discussion of the timing: The ban was issued the day before military action against Iran, suggesting high priority.
- Rosenstein speculates motives range from political “chest-thumping” and personality-driven decision-making (“domination and this kind of symbolic politics is how these people think”) to a real ideological insistence that military should control how its tech is used.
- Notable quote:
“I really think there’s an element of ‘I’d like to show you that my stick is bigger than your stick...because domination and this kind of symbolic politics is how these people think.’”
— Alan Rosenstein (11:04)
3. Contracts and Software Licensing: Is This Really Unique? (14:17-16:40)
- Distinction between military purchases of classic hardware (“You buy an F-16 and then you decide how to use it”) and software-as-a-service, where licenses and vendor-imposed terms are typical.
- It's routine for software companies to impose use restrictions and for contracts to have exit mechanisms when terms are not agreed upon.
- Notable quote:
“If we were civilized people, we would just shake hands and this would not be a story.”
— Alan Rosenstein (15:44)
4. The Supply Chain Risk Designation vs. the Defense Production Act (16:40-18:59)
- Many expected the government to invoke the Defense Production Act (DPA), which could force a company to provide services to the government.
- Instead, the DoD used the supply chain risk designation, typically reserved for foreign threats (e.g., Kaspersky, Huawei), leveraging 10 U.S.C. § 3252 to summarily exclude Anthropic from DoD contracts.
- Notable quote:
“I did not have on my bingo card that I’d be spending the whole weekend studying supply chain risk because it seems so outlandish. But here we are.”
— Alan Rosenstein (16:58)
5. Legal Analysis of the Supply Chain Action (22:33-27:56)
- The main statute used, 10 U.S.C. § 3252, allows the Secretary of Defense to designate suppliers as supply chain risks, but it has historically been aimed at foreign threats.
- Rosenstein outlines Anthropic’s legal arguments:
- The statute may not apply to domestic companies.
- Lack of procedural protections violates due process.
- The decision is arbitrary and capricious (“the definition of arbitrary”).
- The move is pretextual (“Trump says Anthropic is radical left, woke something something...it’s all about how they don’t like Anthropic.” — (30:47)).
- The administration’s public statements and haphazard process bolster Anthropic’s case.
- Potential “major questions doctrine” issues: burning a leading American AI company over a contract dispute is a major policy decision likely beyond the statute’s intent or scope.
- Notable quote:
“It is completely insane to simultaneously say, this product is so important, we’re going to force you to give it to us. It’s so safe that we’re going to use it during an active military engagement, and it’s so dangerous that we’re going to burn you to the ground.”
— Alan Rosenstein (26:06/03:08)
6. Secondary Boycotts: Legality and Comparison to Past Actions (39:29-42:42)
- The DoD’s order includes barring government contractors from doing any business with Anthropic—potentially devastating for Anthropic’s cloud relationships (Amazon, Google).
- Statutes allow the DoD to exclude a supplier from its own security-related procurements, but not to prohibit all business relationships with that supplier (“secondary boycotts”).
- Only in rare cases (e.g., by explicit act of Congress, as in Section 889 with Huawei/ZTE) has this been attempted, and even then, not as broadly.
- Notable quote:
“Burning an American frontier AI company to the ground because you don’t like how they contracted with you, that’s a pretty major question.”
— Alan Rosenstein (42:11)
7. The Rival AI Labs: OpenAI vs. Anthropic, DoD’s Double Standards (43:01-52:37)
- Grok (Elon Musk): “will do anything the government wants.”
- OpenAI claims its contract with the DoD is as restrictive as, or more restrictive than, Anthropic’s, but the contract and its “red lines” are vague and possibly meaningless: the “red lines” reduce to what is already legally required and rest on changeable law and guidelines.
- OpenAI’s public communications about those “red lines” are unclear and possibly misleading, raising reputational risks: “I am afraid...OpenAI I do not think is covering itself in glory…” (46:01).
- Notable quote:
“You have this contract that OpenAI has signed with the government...Those provisions, I think pretty clearly...do not impose meaningful red lines. It just does not.”
— Alan Rosenstein (44:25)
- Notable quote:
“If I were OpenAI, I would really worry that my reputation...is going to be very seriously and durably harmed in this very, very small community of, you know, elite AI engineers.”
— Alan Rosenstein (51:33)
8. The Broader Context: Who Should Decide These Questions? (52:37-58:24)
- Rosenstein argues this contract standoff is a symptom of a larger unresolved policy fight:
- Should America “nationalize” AI leadership, and to what extent?
- Are these epochal questions for the democratic process, or for Pentagon/private sector negotiations?
- The current process—ad hoc, driven by personalities, lacking congressional input—is unsustainable for setting existential policy.
- Notable quote:
“We do have an institution that is supposed to do this. It is called the United States Congress...but at the end of the day, this is going to be...under any rational system, Congress would be the one who would decide it.”
— Alan Rosenstein (55:51/57:39)
9. What Happens Next: Procedural Confusion and Legal Predictions (58:24-63:21)
- It is unclear whether a formal designation has even occurred (“My understanding is that Anthropic has not yet received a piece of paper…” (58:54)).
- Anthropic is preparing to sue and would likely win a temporary restraining order or injunction, given the weakness of the government’s case.
- The greater risk for Anthropic lies in lost business and investor confidence (i.e., “permanent wounding”) rather than in losing in court, since even a temporary chill could cause lasting harm.
- Notable quote:
"If something gets screwed up for 12 months, we could go bankrupt."
— Dario Amodei (quoted by Alan Rosenstein, 62:32)
Notable Quotes & Memorable Moments
- “In the land of the blind, the one eyed man is king.” — Alan Rosenstein (04:31, on suddenly learning procurement law)
- “It’s a little company that some of us may have heard of called Anthropic…” — Alan Rosenstein (05:03)
- “You can never rule out personality driven decision making…this is just not the case for this administration.” — Alan Rosenstein (10:49)
- “It’s almost sanctions regime attempt against some domestic company, which is wild.” — Alan Rosenstein (21:43)
- “Contract dispute is the right answer of how you operationalize it. It’s just a weird vehicle to set the substantive principles.” — Alan Rosenstein (53:35)
- “Killer robots have a fabulous way of focusing the mind.” — Alan Rosenstein (55:46)
Timestamps for Important Segments
- 05:03 – Anthropic, Pentagon, and contract “red lines”
- 07:51 – Explanation of supply chain risk designation and “secondary boycott”
- 10:49 – Political/personal/ideological analysis of the administration’s motives
- 15:08 – Industry norms for contract restrictions in software vs. hardware
- 16:40 – Pivot from the Defense Production Act to supply chain authority (10 U.S.C. § 3252)
- 22:33 – Statutory basis and legislative history
- 26:06/03:08 – The “completely insane” logic of the Pentagon’s stance
- 30:47 – Pretextual and political nature of the supply chain designation
- 39:29 – Secondary boycott and legal limits on such action
- 46:01 – OpenAI’s contract and reputational fallout
- 52:37 – Why contract disputes can’t substitute for real policy-making
- 58:24 – Procedural limbo; likely litigation outcome and business peril for Anthropic
Tone & Language
- The speakers maintain a mix of seriousness, exasperation, and wry humor, using phrases like "we live in the dumbest of all possible timelines" (26:06), and frequent tongue-in-cheek asides about bureaucracy, law, and DC/Silicon Valley culture.
- Language is candid and accessible, occasionally irreverent, but grounded in legal and policy rigor—a hallmark of Lawfare’s style.
Conclusion
This episode is a must-listen for anyone interested in the intersection of AI, government power, tech law, and national security. Wittes and Rosenstein dissect the legal rollercoaster, strategic blunders, future litigation, and larger questions of democratic oversight, all while keeping the discussion engaging, sharply insightful, and highly relevant as AI becomes ever more foundational—and contested—in national security.
