Anthropic's CEO went on national television to confirm something the Pentagon had been treating as classified. That alone is unusual. The week of events surrounding it is the bigger story.

What Amodei actually said

In an exclusive CBS News interview recorded the same Friday evening the Pentagon cut ties with his company, Dario Amodei confirmed that Claude had been deployed on the Department of Defense's classified networks since July 2025. Anthropic was the first frontier-model lab cleared at that level. Custom Claude variants were running on dedicated air-gapped infrastructure. Amodei said the systems had "revolutionized and radically accelerated" what the military could do.

He also said Anthropic refused to drop two safety guardrails: a prohibition on using Claude to power autonomous weapons targeting, and a prohibition on mass surveillance of Americans. The Pentagon wanted Claude available for "all lawful purposes." Anthropic said no. Amodei called the response "retaliatory and punitive."

What the Pentagon did about it

On February 27, 2026, President Trump ordered all federal agencies to stop using Anthropic technology. The same day, Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk," a classification typically reserved for foreign adversaries like Huawei. On March 5 the Pentagon formally notified Anthropic the designation was effective immediately. Anthropic sued.

A federal judge in California later blocked the designation, calling it "Orwellian" and unsupported by the governing statute. The DC Circuit then allowed the designation to be enforced pending appeal. The legal status remains contested.

The timing problem

On February 28, the day after the Friday-night ban, the United States launched Operation Epic Fury, a coordinated airstrike campaign against Iran. Pentagon planning systems running on Claude were generating thousands of targeting priorities for that operation in the same window the company was being banned. Officials reportedly described the prospect of pulling Claude out of production mid-operation as "open-heart surgery on a plane in flight."

The Pentagon banned the model. The Pentagon kept using the model. Both things were happening at the same time.

The question Senator Warren asked

On March 16, 2026, Senator Elizabeth Warren sent a four-page letter to Hegseth. Her question was simple. The NSA had cleared Claude for classified deployment after a security review. The same NSA had flagged Elon Musk's Grok with security concerns Claude did not have. Hundreds of thousands of private Grok conversations had recently been found indexed on Google. Warren wanted to know why the Pentagon banned the model the NSA cleared and replaced it with the model the NSA flagged.

A separate Warren letter opened an investigation into the designation as apparent retaliation. The same day, OpenAI signed a Pentagon deal that included guardrail language identical to the language Anthropic had refused to drop. Sam Altman later called the language "opportunistic and sloppy" and renegotiated.

What the Defense Production Act usually does

The supply chain risk designation invokes authority tied to the Defense Production Act, a statute originally written to mobilize American industry for the Korean War. It is the legal mechanism the executive branch uses to compel companies to produce ventilators during a pandemic, or to prioritize defense supplier deliveries during wartime. Using it to punish a software vendor for refusing to expand its acceptable-use policy is unprecedented. Lawfare's analysis put it bluntly: the governing statute does not authorize the action.

Why it matters

This dispute is not about whether Anthropic gets to keep a contract. It is about whether the federal government can punish a domestic AI company for refusing to expand the surface area of military use beyond what the company is willing to authorize.

The pattern matters because every other frontier-model lab is watching. OpenAI, Google DeepMind, and xAI all have, or are pursuing, classified contracts. The Anthropic designation is the first time the executive branch has tried to use national-security supply-chain authority to override a vendor's acceptable-use policy. If the Pentagon's reading holds, an acceptable-use policy becomes a fiction in any classified deployment. If the courts rule the other way, every AI lab that takes a defense contract gets a confirmed precedent that its red lines are enforceable in federal court.

Reddit users speculated the military's custom Claude was generations ahead of the consumer version. AI researchers pushed back, noting fine-tuned Sonnet 4.5 is not a secret Opus 5. The truth probably sits in the middle. But the speculation itself is revealing. When the most powerful military in the world reaches for wartime industrial-mobilization authority over a chatbot, it is fair to ask what capability is actually being protected.

You don't break the glass on the Defense Production Act for autocomplete.

Originally published as an Instagram carousel on @recul.ai.