Anthropic’s Legal Battle with the Defense Department
On February 27th, 2026, the Trump administration ordered all U.S. agencies to stop using Anthropic’s products, including its flagship offering, Claude. This widely used Large Language Model (LLM) holds 40% of the global enterprise LLM market share, making Anthropic a leading player in the AI industry. Subsequently, Defense Secretary Pete Hegseth moved to designate Anthropic as a supply chain risk, a classification never before applied to an American company. The designation required federal agencies to expunge Anthropic’s products and pressured government contractors, along with companies associated with them, to do the same.
The administration’s actions stemmed from Anthropic’s renegotiation of a $200 million defense contract that, in July, had made Claude the first AI tool approved for government use. During the renegotiation, Anthropic refused the Department of Defense’s (DoD) request to waive restrictions barring its technology from being used for mass surveillance of Americans and for fully automated decision-making in weapons systems.
Shortly after the DoD formally announced the designation on March 9th, Anthropic filed two suits challenging it and seeking to suspend its implementation pending trial. In the first, filed in the U.S. District Court for the Northern District of California, the court ruled in Anthropic’s favor, enabling the company to maintain federal contracts with certain agencies. In the second, the U.S. District Court for the District of Columbia ruled against Anthropic, barring it from fulfilling new contracts with the DoD.
The DoD’s Legal Basis
The DoD invoked 10 U.S.C. § 3252 and the Federal Acquisition Supply Chain Security Act of 2018 (FASCSA) to support the supply chain risk designation.
According to Title 10, the government can prevent a specific company from participating in the military’s supply chain if the action is “necessary to protect national security by reducing supply chain risk” and if “less intrusive measures are not reasonably available.” In order to exercise this authority, the DoD must provide Congress with a risk assessment outlining the justification behind the decision.
FASCSA, meanwhile, extends this type of ban to every agency in the federal government. The statute established the Federal Acquisition Security Council, an interagency body that centralizes the identification of high-risk vendors. Here, it is used as an enforcement mechanism requiring contractors to report all usage of Anthropic’s products and remove them from their systems over a six-month period.
Anthropic’s Litigation
Anthropic argues that the supply chain risk designation violated its First Amendment rights by punishing the company on ideological grounds. It asserts that its refusal to build or enable certain features, codified in its AI safety policy, is a form of protected expressive conduct.
The plaintiff leveraged 303 Creative LLC v. Elenis, a recent Supreme Court case holding that Lorie Smith, a website designer, had a First Amendment right to decline to create websites whose requested content violated her personal beliefs. Anthropic alleges that its safety guardrails are expressive works akin to Smith’s websites, and that the government therefore cannot force it to write code that violates its internal policies.
Proceedings
Judge Lin of the California court sided with Anthropic at the end of March, stating that “nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.” While the government cited Anthropic’s “hostile manner through the press” as justification for the designation, the court found no evidence that Anthropic posed a technical security risk. Lin therefore issued a preliminary injunction, pausing the Trump administration’s ban and allowing Anthropic to continue its federal contracts with certain defense-related agencies.
On April 8th, however, the D.C. court issued a contradictory ruling, denying Anthropic’s motion for an emergency stay and allowing enforcement of the supply chain risk designation for defense-related contracts. The court held that “financial harm” to Anthropic was outweighed by the military’s interest in securing AI technology during “active military conflict.”
The two rulings conflict with each other, creating a limbo in which some of Anthropic’s business with the federal government remains intact while its products are outright banned from other systems. Oral arguments to resolve the conflict will begin on May 19th in the D.C. court, focusing on whether the DoD had the statutory authority to impose the “supply-chain risk” designation on Anthropic.
Broader Implications
One of the most direct implications of this case is the unprecedented use of Title 10. Historically, the supply chain risk designation was used to stop covert sabotage, as in its prior application to Huawei, a Chinese technology company. If the ultimate ruling sides with the DoD, Title 10 could become a blacklist for domestic entities: any American company that refuses a military directive could be punished with the designation.
Regarding the First Amendment, the case will determine whether a government contract can function as a forum for free speech and whether a tech company’s internal safety rules are a protected form of expression. The fundamental question is whether an AI company’s safety policy is protected speech. Ironically, Anthropic’s position cuts against Section 230, the statute that has long shielded Big Tech companies from liability for how customers use their products, because Anthropic claims that its responsibility encompasses that usage.
In a time of sparse regulation over generative AI, Anthropic v. DoD is pioneering AI liability law and will ultimately determine whether AI’s fundamental guardrails fall within the purview of companies or the federal government.
Ethan Seiz is a sophomore concentrating in Computer Science. He is a staff writer for the Brown Undergraduate Law Review and can be contacted at ethan_seiz@brown.edu.
Alice Kovarik is a sophomore concentrating in Economics and International and Public Affairs. She is an editor for the Brown Undergraduate Law Review and can be contacted at alice_kovarik@brown.edu.
Danny Moylan is a sophomore from Massachusetts studying Political Science and International and Public Affairs. He is a Staff Editor and Blog Director for the Brown Undergraduate Law Review and can be contacted at daniel_moylan@brown.edu.