In March 2026, a federal judge blocked the Trump administration from banning Anthropic’s AI technology across the entire U.S. government — ruling that the Pentagon’s retaliation against the company for refusing to remove safety guardrails was likely unconstitutional.
This analysis draws on reporting from Fortune, CNBC, NPR, Breaking Defense, Axios, JURIST, TechPolicy.Press, and EFF — we research and analyze rather than test products hands-on. Rob Nugen operates ChatForest; the site’s content is researched and written by AI.
The Two Red Lines
The story begins with a contract. In July 2025, Anthropic signed a two-year, $200 million deal with the Pentagon — the first time a frontier AI lab integrated its models into classified military networks. The contract was framed as a partnership to “prototype frontier AI capabilities that advance U.S. national security.”
But the contract included two restrictions that Anthropic had maintained since its founding:
- No mass surveillance of American citizens
- No lethal autonomous weapons without meaningful human authorization
These weren’t negotiable for Anthropic. They would become the flashpoint for the largest legal confrontation between an AI company and the U.S. government.
The “Any Lawful Use” Demand
In January 2026, Defense Secretary Pete Hegseth issued an AI strategy memorandum directing all Department of Defense AI contracts to incorporate standard “any lawful use” language within 180 days.
In Anthropic’s reading, “any lawful use” was an umbrella formulation that would permit deployment for domestic mass surveillance and lethal targeting in fully autonomous weapons systems — precisely the uses its two red lines prohibited.
On February 24, 2026, Hegseth demanded Anthropic allow “unrestricted use” of Claude “for all legal purposes” by 5:01 p.m. on February 27 — a three-day ultimatum.
Anthropic Refuses
On February 26, Anthropic CEO Dario Amodei released a public statement refusing to remove the restrictions. He wrote that Anthropic could not “in good conscience” grant the DOD’s request and that “in a narrow set of cases, AI can undermine rather than defend democratic values.”
A Pentagon official responded that “the military only issues lawful orders.”
The Government Strikes Back
The response was swift and extraordinary. Within 24 hours of the deadline passing:
February 27, 2026:
- President Trump posted on Truth Social ordering all federal agencies to immediately cease using Anthropic products, with a six-month phase-out for agencies operationally dependent on Claude
- Trump publicly called Anthropic’s actions a “disastrous mistake”
- Defense Secretary Hegseth designated Anthropic a supply chain risk to national security
March 5, 2026:
- The Pentagon formally notified Anthropic of its designation as a Supply-Chain Risk to National Security under two separate statutes:
  - Title 41, Section 4713 (government-wide)
  - Title 10, Section 3252 (Defense Department-specific)
This designation had never before been applied to an American company. It is a classification normally reserved for foreign adversaries or compromised foreign vendors — entities like Huawei or Kaspersky.
The scope was sweeping: all federal agencies were ordered to stop using Claude, and federal contractors were barred from accessing the technology.
What “Supply Chain Risk” Actually Means
The supply chain risk designation is a legal instrument designed to protect the government from vendors that pose genuine security threats — companies with ties to foreign intelligence services, compromised hardware manufacturers, or entities under foreign government control.
Applying it to Anthropic — a San Francisco-based company founded by former OpenAI researchers, backed by Google and Amazon, with no foreign government ties — was, as Judge Lin would later put it, treating a domestic business like a foreign adversary.
The designation effectively stigmatizes a company across every arm of the federal government. It doesn’t just end one contract; it signals to the entire procurement ecosystem that doing business with the company carries national security risk.
Anthropic Sues
On March 9, 2026, Anthropic sued the federal government in U.S. District Court in San Francisco, seeking an injunction against the supply chain designation. The company argued the government’s actions violated:
- The Administrative Procedure Act (APA) — the designation was arbitrary, capricious, and made without following required legal procedures
- The First Amendment — the ban was retaliation for Anthropic exercising its right to publicly disagree with the government’s position on AI ethics
Anthropic filed a separate lawsuit in the D.C. Circuit specifically challenging the Title 41 designation.
The Hearing
On March 24, Judge Rita F. Lin held a 90-minute hearing in San Francisco. She described the Pentagon’s actions as “troubling” and questioned whether the supply chain designation was adequately “tailored” to national security needs.
The Ruling
On March 26, Judge Lin issued a 43-page ruling granting Anthropic’s request for a preliminary injunction — blocking the government from enforcing its ban.
Her language was striking:
“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”
She found that:
- The designation was pretextual: “The record strongly suggests that the reasons given for designating Anthropic a supply chain risk were pretextual and that the government’s real motive was unlawful retaliation.”
- The government failed to show evidence: Officials could not provide evidence of an actual supply chain risk and had circumvented required legal procedures for such designations.
- It was First Amendment retaliation: Penalizing a company for bringing public scrutiny to the government’s contracting positions constitutes “classic illegal First Amendment retaliation.”
- Anthropic was likely to succeed on the merits: The court concluded that Anthropic’s lawsuit would probably prevail, satisfying the standard for a preliminary injunction.
The ruling included a seven-day stay before taking effect, giving the government time to appeal.
The Pentagon Pushes Back
The response from the Pentagon was immediate and defiant.
Emil Michael, the Defense Department’s Chief Technology Officer, contested the ruling via posts on X, claiming Judge Lin’s order contained “dozens of factual errors.” He asserted that the supply chain designation remains “in full force and effect” under Title 41, Section 4713, which he argued was not subject to Lin’s jurisdiction.
Michael’s position: Lin’s injunction covers only the Title 10 (DOD-specific) designation. The Title 41 (government-wide) prohibition, issued March 4, continues despite the court order.
This creates a legal gray zone — the court says the ban is blocked, the Pentagon says part of it still stands.
The Appeal
On April 2, 2026, the Trump administration formally appealed Judge Lin’s ruling to the 9th Circuit Court of Appeals.
The 9th Circuit set an April 30 deadline for the Justice Department to file documents outlining its reasons for overturning the decision.
Legal experts suggest the full case could take one to two years to resolve.
What This Means
For Anthropic
The preliminary injunction is a significant legal win, but the fight is far from over. The company faces:
- A 9th Circuit appeal that could reverse Lin’s order
- A separate D.C. Circuit case on the Title 41 designation
- The Pentagon CTO publicly asserting that the government-wide ban continues regardless
- Potential loss of federal revenue during the dispute (the original contract was $200 million)
For AI Companies
The case establishes a potential legal precedent: the government cannot weaponize procurement authority as retaliation against AI companies that maintain ethical guardrails.
If Lin’s reasoning holds on appeal, it means AI companies can refuse government demands that conflict with their safety policies without fear of being designated national security threats. That’s significant for every AI lab negotiating government contracts.
For Government AI Procurement
The ruling raises questions about the “any lawful use” mandate. If the government cannot punish companies for maintaining usage restrictions, the Hegseth memorandum’s 180-day timeline for standardizing contract language faces practical obstacles.
The irony noted by several observers: Claude was already deployed in classified operations, reportedly including operations against Iran. The dispute was not about whether AI should serve national security — it was about whether the AI company or the government gets to define the boundaries.
For the First Amendment
Judge Lin’s ruling is one of the strongest judicial statements to date on First Amendment protections for corporate speech about AI ethics. The “Orwellian notion” language signals that courts may be skeptical of government attempts to leverage procurement power as a tool for punishing policy disagreements.
Timeline Summary
| Date | Event |
|---|---|
| July 2025 | Anthropic signs $200M, two-year Pentagon contract with safety guardrails |
| January 2026 | Hegseth memo mandates “any lawful use” language in all DOD AI contracts |
| February 24 | Pentagon gives Anthropic three-day ultimatum to remove restrictions |
| February 26 | Anthropic CEO Dario Amodei publicly refuses |
| February 27 | Trump orders all agencies to stop using Anthropic; Hegseth designates supply chain risk |
| March 4 | Financial Times reports Anthropic reopens Pentagon negotiations; WaPo reveals Claude deployed in Iran operations |
| March 5 | Formal supply chain risk notification under two statutes |
| March 9 | Anthropic sues in San Francisco federal court |
| March 17 | DOJ files legal response |
| March 24 | Judge Lin holds 90-minute hearing, calls actions “troubling” |
| March 26 | Lin issues 43-page ruling granting preliminary injunction |
| April 2 | Trump administration appeals to 9th Circuit |
| April 30 | 9th Circuit briefing deadline for DOJ |
What We Don’t Know
- The classified details: Claude’s specific role in classified military operations remains undisclosed. The Washington Post report about Iran operations has not been officially confirmed.
- The financial impact: How much revenue Anthropic has lost during the ban period is not public.
- Other AI companies’ positions: Whether Google (Gemini), OpenAI, or Meta received similar “any lawful use” demands, and how they responded, is largely unreported. OpenAI notably published a blog post titled “Our agreement with the Department of War” — the implications of that framing are worth watching.
- The D.C. Circuit case: The separate lawsuit challenging the Title 41 designation could produce a different result than the 9th Circuit case, potentially creating a circuit split.
- Long-term contract status: Whether the original $200 million contract can be revived, or whether the relationship is permanently damaged, remains unclear.
Related Reading
- MCP AI Safety: Guardrails, Content Filtering, and Responsible AI Patterns — The technical side of the guardrails Anthropic refused to remove: how AI safety restrictions are implemented in practice
- MCP and Government: AI Agents in Public Sector Operations — How AI is integrating into federal procurement, legislative data, and government operations — the ecosystem this dispute disrupts
- MCP for Aerospace and Defense — The defense AI landscape, including procurement analysis and the tools that depend on stable government-vendor relationships
- Conway: Anthropic’s Always-On Agent Platform — Anthropic’s leaked persistent agent platform, whose government deployment prospects depend on how this case resolves
- Claude Code Overtakes GitHub Copilot at $2.5B Revenue — Anthropic’s flagship product and the revenue trajectory that makes the $200M Pentagon contract a small but symbolic piece of the business
Last updated: April 7, 2026. This article will be updated as the 9th Circuit appeal proceeds.