The Jaeden Schafer Podcast
Episode Summary: OpenAI Steals $200M Contract in Anthropic vs. Pentagon Battle
Release Date: March 2, 2026
Overview
In this episode, Jaeden Schafer offers an in-depth analysis of the dramatic fallout between Anthropic and the Department of Defense (referred to throughout the episode as the “Department of War” under the Trump administration), which led to Anthropic being designated a supply chain risk and losing a $200 million contract. OpenAI, under Sam Altman, immediately stepped in to secure the deal, raising urgent questions about AI’s role in national defense, the policy power of vendors, government oversight, and the future geopolitics of advanced AI systems.
Key Discussion Points & Insights
1. Anthropic’s Stand Against Military Applications
- Anthropic CEO Dario Amodei’s Red Lines
- Per Jaeden, Dario Amodei “basically made this big statement where he's basically saying he doesn't want his AI models to be used for two specific things: mass domestic surveillance of Americans and also fully autonomous weapons that select and engage targets without human involvement.” [03:10]
- Jaeden notes these are “his two red lines,” leading Anthropic to “put safety guards and guardrails into what Anthropic is capable of doing so that the government can't do that.” [03:30]
- Pentagon’s Pushback
- Secretary Hegseth and the Pentagon argue that “the Department of Defense shouldn't be constrained on their use cases by the internal policies of an AI company.” [04:10]
2. The Broader Ethical Dilemma
- Jaeden’s Take on the Red Lines
- “On the one hand, I agree with Anthropic in a sense, that I don't want the government doing mass surveillance of Americans with AI systems. And I also agree that, you know, fully autonomous AI that goes and executes, you know, kill shots or whatever without a human intervention is…a very crazy kind of ethical boundary.” [04:30]
- However, he worries about companies holding too much sway: “I don't really like the fact that Anthropic can redline use cases for the military…in the future they could be bad.”
- Risks of Corporate Control
- Discusses the danger of foreign influence, hypothetically imagining, “Let's say China decides to take a huge stake in Anthropic…then they could make some sort of policies that directly, you know, negatively impact the government.” [07:00]
3. The Fallout: Anthropic Blacklisted
- Trump Administration Response
- “President Trump directed federal agencies to stop using Anthropic products,” initiating “a six month transition period.” [08:10]
- Secretary Hegseth “designated Anthropic as a supply chain risk to national security, which is basically blacklisting them from doing business with the military.”
- Anthropic’s Response
- Anthropic claims they hadn’t received formal notice and plan to challenge the designation in court.
4. Geopolitics and Military Operations
- Jaeden describes the context:
- The fallout happened as the US government was preparing its bombing of Iran, following operations such as the “capture of Nicolas Maduro,” highlighting AI’s growing battlefield role. [10:00]
- “It seemed like…before they wanted to launch their full attack on Iran, they also wanted to make sure that they had AI models to back up their operations.” [10:45]
5. OpenAI Steps In – The Contract Swap
- OpenAI’s Strategy
- OpenAI CEO Sam Altman “went and posted on X…saying that OpenAI had reached an agreement with the Department of Defense and would be taking over this Anthropic contract.” [12:30]
- OpenAI also promises red lines: “We're not going to do domestic mass surveillance and we're not going to…be used in autonomous weapon systems.” [13:00]
- Technical Conditions
- OpenAI “will deploy through a cloud-based API so they can retain control of the safety stack and…embed personnel with appropriate clearances to oversee deployment.” [13:40]
- Altman admits the deal was “rushed,” intended “to de-escalate tensions and stabilize the relationship between AI labs and the government.” [14:00]
- Key Question Raised
- “If OpenAI could secure an agreement with similar red lines, why was Anthropic not able to do this?” Jaeden explores possible reasons: differences in negotiation strategies, deployment architecture, and the government’s tolerance for shifting policies after contracts are signed. [15:00]
6. Public and Industry Reactions
- Anthropic’s Popularity Surge
- “Anthropic's chatbot Claude went all the way to the top of Apple's App Store rankings. It passed ChatGPT and it was kind of the number one spot…immediately after this big news story came out.” [16:20]
- Long-Term Strategic Implications
- “The US Military already operates highly automated systems…so the question isn’t whether AI is going to be used in defense, but kind of how broadly and under whose constraints.” [17:10]
- Points to international competition: “China obviously doesn't care about any of them. Russia doesn't care about any of these things…if we kind of nerf the capabilities of those…that could, you know, not be positive.”
- “The Pentagon…doesn’t want a single vendor to…tie their hands basically if something is legal and they're allowed to do it.” [19:10]
7. The Need for Regulation and the Accountability Gap
- Current Landscape
- “Anthropic has consistently argued that technology is advancing so fast that government mechanisms haven't kept pace.”
- Critics like Max Tegmark say that “the broader AI industry helped create this vacuum by lobbying against binding federal regulation, preferring these sort of voluntary safety frameworks.” [20:00]
- “We don't really have any sort of enforceable laws. It's mostly just people saying, look, we want to be safe and responsible.”
- Without laws, “disputes like this are going to be resolved through executive power and then contract leverage rather than legislation.” [21:00]
Notable Quotes & Memorable Moments
- “The center of this whole argument is…who's in control of these AI systems that are powering the most powerful national defense systems.” — Jaeden Schafer [02:15]
- “I just don't like the rules coming from the companies themselves, which are, you know, we know that those are sort of manipulatable.” — Jaeden Schafer [07:15]
- “Anthropic lost a $200 million deal and OpenAI came and picked that up basically. But I think there’s some like, some strategic things we have to think about.” — Jaeden Schafer [16:00]
- “We have the best AI models right now with OpenAI and Anthropic being built inside of America. But that doesn't mean that we'll have the best forever.” — Jaeden Schafer [18:00]
Timestamps for Key Segments
- [03:10] – Dario Amodei’s public statement and Anthropic’s AI red lines
- [04:10] – Pentagon’s objection and Jaeden’s ethical concerns
- [07:00] – Foreign influence risks in AI company governance
- [08:10] – Federal order blacklisting Anthropic
- [10:00] – Anthropic’s AI use tied to military operations in Iran and Maduro missions
- [12:30] – OpenAI announces takeover of the contract
- [16:20] – Public reaction and Anthropic’s app surge
- [19:10] – US military’s concern about vendor-imposed restrictions
- [21:00] – Lack of regulatory framework and reliance on executive decision
Conclusion
Jaeden Schafer delivers a comprehensive assessment of how the Anthropic-Pentagon feud exposes the limits of voluntary AI safety, highlights the power struggle between tech vendors and government, and accelerates the debate over AI’s place in defense strategy. The episode underscores the need for legal frameworks and points to the rapidly shifting balance of power in AI and geopolitics.
