Anthropic sued the Department of Defense and other federal agencies on Monday, challenging the Trump administration’s decision to designate the AI company a “supply chain risk” and direct federal agencies to stop using its technology. The filing marks an escalation in a months‑long standoff between one of the industry’s leading startups and the Pentagon as the White House pushes to expand AI use in government.

The supply chain risk label typically targets firms tied to foreign adversaries and can limit a company’s ability to do business with Defense Department partners. Anthropic says the designation and the administration’s directive are “unprecedented and unlawful,” and the company is seeking injunctive relief to protect contracts it says put “hundreds of millions of dollars” at risk. The Pentagon declined to comment on the litigation; White House spokesperson Liz Huston said the president “will never allow a radical left, woke company” to dictate how the military operates and that the administration will ensure warfighters have the tools they need.

The dispute centers on two red lines Anthropic sought in contract talks: a commitment that its AI not be used for mass surveillance of U.S. citizens and that it not be used to enable autonomous weapons. The Pentagon insisted it must be able to use tools for “all lawful purposes,” arguing it cannot let a private company set limits that could constrain military options in a national emergency. On February 27, the administration ordered federal agencies and military contractors to halt business with Anthropic, and Defense Secretary Pete Hegseth announced the company would be labeled a supply chain risk.

In its complaint, Anthropic alleges that the government retaliated against speech protected by the First Amendment, that the president exceeded his authority in directing agencies to cease using the company’s technology, and that the company was denied adequate due process. Anthropic says the designation threatens current and future private contracts, its reputation, and its constitutional rights. Dozens of researchers from OpenAI and Google DeepMind filed an amicus brief supporting Anthropic’s challenge, warning the designation could harm U.S. competitiveness and curb public debate about AI risks. The conflict has raised Anthropic’s profile: the company says Claude briefly surpassed ChatGPT in the iPhone App Store the day after the Pentagon moved to terminate its contract, and on March 5 it reported more than a million daily signups.