Friday, March 6, 2026

Pentagon bans Anthropic; OpenAI signs classified AI defense deal

A 72-hour window in February redefined the relationship between the Pentagon and Silicon Valley, with the dispute turning on who gets to define lawful AI use.

The Story

The Pentagon is enforcing a strict 'any lawful use' standard for AI procurement, leading to a total break with Anthropic and a new classified deal with OpenAI. The tension lies in whether safety guardrails are redundant protections already covered by federal law or necessary insurance against executive overreach in classified environments. While the Department of Defense (DoD) cites mission efficiency and existing statutes, Anthropic argues that removing specific prohibitions on surveillance and autonomous weapons creates a blank check for operations that cannot be publicly audited.

The 'supply chain risk' designation is a powerful regulatory tool that forces not just government agencies but also private defense contractors like Boeing and Lockheed Martin to purge the affected technology. The General Services Administration (GSA) has already removed Anthropic from USAi.gov, the central portal for federal AI tools. This ban extends beyond the military to civilian agencies, creating a significant revenue hole for Anthropic. Meanwhile, the Defense Production Act (DPA) remains a potential 'wild card' that could allow the government to force access to Anthropic's technology on national security grounds despite the ban. The transition period for all agencies to comply with the removal of Anthropic technology expires in August 2026.

Think Critically

Both Sides

Mission Flexibility and Standardization

Defense Secretary Pete Hegseth and Undersecretary Emil Michael argue that existing federal laws already prohibit the abuses Anthropic fears. They contend that vendor-specific restrictions create unnecessary bureaucratic friction and redundancy. By standardizing on 'any lawful use' language, the DoD ensures that AI can be deployed with the same flexibility as any other military asset, supported by the precedent of xAI's existing compliance.

Contractual Safety Enforcement

Anthropic CEO Dario Amodei and safety advocates argue that statutory law is insufficient in classified settings where transparency is zero. They maintain that explicit contractual prohibitions on surveillance and autonomous weapons are the only enforceable guardrails against misuse. Critics view the removal of these terms as a 'safety theater' trade-off where OpenAI gains market access by accepting terms that permit the very abuses Anthropic refused to enable.


Both sides acknowledge

Both the Pentagon and the AI labs agree that the U.S. military requires advanced frontier models to maintain a competitive edge against adversaries. Both parties also acknowledge that AI deployment in classified environments requires high-level security clearances for technical personnel.

Bottom Line

Between February 24 and 28, 2026, the Pentagon executed a rapid restructuring of its AI supplier base. The trigger was a January 2026 memo from Defense Secretary Pete Hegseth requiring all AI contracts to permit 'any lawful use.' Anthropic, which held a $200 million contract from July 2025, refused to strip language prohibiting its models from being used for domestic mass surveillance or autonomous weaponry. By February 27, the Pentagon designated Anthropic a 'supply chain risk,' and President Trump directed all federal agencies to cease using the technology. One day later, OpenAI announced an agreement for classified deployment that includes forward-deployed engineers and safety researchers but lacks the specific contractual prohibitions Anthropic demanded.

This shift follows the precedent set by xAI, which already operates under 'all lawful purposes' terms for classified military work. The Pentagon's move to standardize these terms across all vendors suggests a strategic priority on operational flexibility over vendor-imposed restrictions. Despite market rumors, there is no verified data suggesting Anthropic has restarted talks with the Pentagon or that Nvidia has halted H200 chip production. The current six-month transition period for agencies to remove Anthropic technology ends in August 2026.

Primary's view: The Pentagon is prioritizing a unified legal standard over corporate safety frameworks. In a classified environment, the definition of 'lawful' is determined by executive orders and legal opinions that are often themselves classified. By accepting 'any lawful use' language, vendors like OpenAI and xAI are effectively deferring to the government's internal legal interpretations, which are shielded from the public and the vendors' own safety teams. The dispute isn't about whether AI should be safe—it's about who holds the keys to the definition of safety.

Council Deliberation

Verified Findings

  • Anthropic was designated a supply chain risk on February 27, 2026, following a contract dispute.
  • OpenAI signed a classified agreement with the Pentagon on February 28, 2026.
  • Defense Secretary Hegseth's January 2026 memo requires 'any lawful use' language in all AI contracts.
  • Anthropic's prior $200 million contract from July 2025 included explicit restrictions on surveillance and autonomous weapons.
  • xAI currently operates under 'all lawful purposes' language for classified DoD work.

Challenged Claims

  • Anthropic restarts talks with Pentagon (unverified; no primary source evidence)
  • Nvidia stops H200 chip production (unverified; no SEC filings or company statements)
  • OpenAI's deal has 'more guardrails' than Anthropic's (unverified; comparative contract text is classified)
  • Adversaries are 'increasingly integrating AI' (unverified; no quantitative data or DIA baselines provided)
Citations
OpenAI (high): We reached an agreement with the Pentagon for deploying advanced AI systems in classified environments.
Mayer Brown (high): Defense Secretary Pete Hegseth designated Anthropic a supply chain risk.
Axios (high): Anthropic rejected the terms, citing 'legal jargon' that would permit safeguards to be ignored.
ALM Corp (high): Undersecretary Emil Michael stated existing laws make Anthropic's restrictions redundant.
TechCrunch (high): Amodei called OpenAI's messaging 'safety theater.'
