Dario Amodei, co-founder and CEO of Anthropic, speaks on stage at the 2025 New York Times Dealbook Summit at Jazz at Lincoln Center on December 3, 2025 in New York City.
Michael M. Santiago | Getty Images
A federal judge in San Francisco has granted Anthropic’s request for a preliminary injunction in its lawsuit against the Trump administration.
Judge Rita Lin issued the ruling on Thursday, two days after lawyers for the artificial intelligence startup and the U.S. government appeared in a hearing. Anthropic sued the government to overturn the Pentagon’s blacklisting and President Donald Trump’s directive barring federal agencies from using the Claude model.
Anthropic sought an injunction to suspend these actions and prevent further financial and reputational damage while the case proceeds.
Anthropic released the following statement regarding the ruling: “We are grateful for the court’s swift response and are pleased that the court agreed that Anthropic is likely to succeed on its merits. While this litigation was necessary to protect Anthropic, our customers, and partners, we remain focused on working productively with the government to ensure all Americans receive the benefits of safe and reliable AI.”
“Punishing Anthropic for bringing public scrutiny to government contracting positions is classic unlawful First Amendment retaliation,” Judge Lin wrote in her order. A final verdict in the case could still take months.
At Tuesday’s hearing, Lin pressed government lawyers about why Anthropic was blacklisted. Her language in the written order was even sharper.
“There is nothing in applicable law to support the Orwellian notion that expressing disagreement with the government could brand a U.S. company as a potential adversary or saboteur of the United States,” she wrote.
The Anthropic lawsuit capped several dramatic weeks of conflict in Washington, D.C., between the Department of Defense and one of the world’s most valuable private companies.
In a post on X in late February, Defense Secretary Pete Hegseth declared Anthropic a so-called supply chain risk, meaning that use of the company’s technology threatens U.S. national security. The Department of Defense formally notified Anthropic of the designation in a letter earlier this month.
Anthropic is the first U.S. company to be publicly designated a supply chain risk, a label historically reserved for foreign adversaries. The designation requires defense contractors, including Amazon, Microsoft, and Palantir, to certify that they do not use Claude in their work with the military.
The Trump administration’s actions rest on two different statutory authorities, 10 USC § 3252 and 41 USC § 4713, which must be challenged in separate courts. Anthropic therefore filed a separate lawsuit seeking formal review of the Pentagon’s decision in the U.S. Court of Appeals in Washington.
Shortly before Hegseth declared Anthropic a supply chain risk, President Donald Trump posted on Truth Social ordering federal agencies to “immediately cease” all use of Anthropic’s technology. He said there will be a six-month phase-out period for government agencies like the Department of Defense.
“We are the ones who decide the fate of our country, not an out-of-control radical left-wing AI company run by people who have no idea what the real world is,” Trump wrote.
The Trump administration’s actions surprised many officials in Washington, who had come to admire and trust Anthropic’s technology. The company was the first to deploy its model across the Department of Defense’s classified networks, championing its ability to integrate with existing defense contractors, including Palantir.
Anthropic signed a $200 million contract with the Department of Defense in July, but relations soured in September when the two sides began negotiating the introduction of Claude to the Pentagon’s GenAI.mil platform.
The Pentagon wanted Anthropic to give the Pentagon unfettered access to its models for all lawful purposes, but Anthropic wanted assurances that its technology would not be used for fully autonomous weapons or domestic mass surveillance.
The two sides were unable to reach an agreement, and the dispute will now be resolved in court.
“Everyone, including Anthropic, agrees that the Department of Defense is free to stop using Claude and find a more tolerant AI vendor,” Lin said during Tuesday’s hearing. “I don’t think that’s the subject matter of this case. I see the issue in this case as a completely different question: whether the government violated the law.”

