More than 30 employees from OpenAI and Google DeepMind filed a statement Monday supporting Anthropic’s lawsuit against the U.S. Department of Defense after the federal agency classified the AI company as a supply chain risk, according to court filings.
“The government’s designation of Anthropic as a supply chain risk is an inappropriate and arbitrary use of power that has serious implications for our industry,” said the brief, which also includes Google DeepMind chief scientist Jeff Dean.
Late last week, the Department of Defense classified Anthropic as a supply chain risk — a designation typically reserved for foreign adversaries — after the AI company refused to allow the agency to use its technology for mass surveillance of Americans or to autonomously fire weapons. The Pentagon had argued that AI should be usable for any "lawful" purpose and should not be restricted by private contractors.
The court brief supporting Anthropic entered the docket hours after Claude’s creator filed two lawsuits against the Department of Defense and other federal agencies. Wired was first to report the news.
Google and OpenAI employees argue in the filing that if the Department of Defense were "no longer satisfied with the agreed terms of the contract with Anthropic," the agency could "simply cancel the contract and purchase the services of another large AI company."
In fact, the Department of Defense awarded the contract to OpenAI shortly after designating Anthropic as a supply chain risk, a move many of the ChatGPT maker’s employees protested.
“If allowed to proceed, this effort to punish one of America’s leading AI companies will undoubtedly impact America’s industrial and scientific competitiveness in artificial intelligence and beyond,” the brief reads. “And that would chill open discussion in our field about the risks and benefits of today’s AI systems.”
The filing also asserts that the red lines described by Anthropic are legitimate concerns that warrant strong guardrails. In the absence of public law regulating the use of AI, the brief argues, the contractual and technical limits that developers place on their systems provide important safeguards against catastrophic misuse.
Many of the employees who signed the statement had also signed an open letter in recent weeks supporting Anthropic, calling on the Department of Defense to withdraw the designation, and urging company leaders to reject unilateral use of their AI systems.
