
Anthropic enters Friday in a no-win situation.
The artificial intelligence startup has until 5:01 p.m. ET to decide whether to allow the Department of Defense to use its models for all lawful use cases without restriction. If it doesn’t, Defense Secretary Pete Hegseth has threatened to label the company a “supply chain risk” or invoke the Defense Production Act to force compliance.
Anthropic won a $200 million contract with the Department of Defense in July, becoming the first AI lab to integrate its models into mission workflows on classified networks. The company is negotiating the terms of its contract with the agency, seeking assurances that its technology will not be used for fully autonomous weapons or for domestic mass surveillance of Americans.
“We believe that in limited cases, AI could undermine rather than protect democratic values,” Anthropic CEO Dario Amodei, who co-founded the company in 2021, said in a statement Thursday. “Some applications are simply beyond what can be done safely and reliably with today’s technology.”
The Pentagon refused to change its stance, and negotiations stalled, creating the highest-profile test yet of Anthropic’s espoused values. The company has spent years carefully building a reputation as a champion of safe and responsible AI adoption, setting itself apart from OpenAI, where Amodei worked before leaving to start Anthropic.

But Anthropic is also facing intense pressure to justify its massive $380 billion valuation with the backing of large institutional and strategic investors, while racing to stay on the cutting edge of model development and fend off competition from OpenAI and other rivals, including Google and Elon Musk’s xAI. All three companies’ models are used by the Department of Defense.
Complying with the Department of Defense’s demands could damage Anthropic’s reputation and alienate employees and customers. But if Anthropic doesn’t agree to give the military unfettered access to its models, it could lose out on meaningful revenue in the short term and be locked out of potential future opportunities with other companies that do business with the government.
“There are no winners in this,” Lauren Kahn, a senior research analyst at Georgetown’s Center for Security and Emerging Technologies, said in an interview with CNBC. “It leaves a sour taste in everyone’s mouth.”
Sean Parnell, the Pentagon’s chief spokesman, said Thursday that the Pentagon has “no interest” in using AI in fully autonomous weapons or conducting mass surveillance of Americans, which would be illegal. He said the department wants Anthropic to agree to allow its models to be used for “all lawful purposes.”
“This is a simple, common sense request to prevent Anthropic from potentially endangering critical military operations and endangering our nation’s warfighters,” Parnell wrote in a post to X on Thursday. “We will not let any company dictate the terms of how we make business decisions.”
In a separate post on Thursday, Emil Michael, the undersecretary of defense for research and engineering and a former Uber executive, wrote that Amodei was “a liar and has a God complex.” He accused Amodei of wanting “nothing more than to try to personally control the U.S. military.”
Hegseth set a Friday deadline for Anthropic in a meeting with Amodei earlier this week, warning that failure to agree could result in severe penalties. He said Anthropic could be classified as a “supply chain risk,” a designation typically limited to companies from countries considered adversaries. The label would force Department of Defense vendors and contractors to certify that they are not using Anthropic’s models.
Amodei said his company would not be threatened.
“These threats do not change our position. We cannot in good conscience comply with their demands,” he said in a statement Thursday.
“The juice isn’t worth the squeeze.”
The escalating conflict is one that other AI labs, industry experts, and government contractors are also closely monitoring. Kahn warned that the government could lose access to companies with promising products if those companies conclude that defense work isn’t worth the trouble.
“I’m really, really, honestly worried that private companies will say, ‘It’s not worth my time to work with the defense sector going forward,'” Kahn said, adding, “It’s the warfighters who are really going to suffer.”
“Personally, I don’t think the Department of Defense should threaten DPA against these companies,” OpenAI CEO Sam Altman told CNBC on Friday. He said he believed it was important for companies to choose to work with the department, as long as it adhered to legal protections and “some red lines” that the sector shares with Anthropic.
“Despite our differences with Anthropic, I pretty much trust Anthropic as a company. I think they really care about safety and I’m happy that they’re supporting our warfighters,” Altman said in an interview. “I don’t know what will happen next.”
Anthropic employees and several others in the industry have expressed their support for the company on social media in recent days.
Josh McGrath, a technical staff member at OpenAI, said in a post on X on Tuesday that he was “at a loss for words about what’s going on.”
More than 330 Google and OpenAI employees also signed an open letter titled “We Will Not Divide,” which aims to create “common understanding and unity in the face of this pressure,” according to the letter’s website.
“We hope that our nation’s leaders will put aside their differences and come together to continue rejecting the Department of Defense’s current demand for permission to use models to kill people autonomously without human oversight, or for domestic mass surveillance,” the letter reads.
U.S. Defense Secretary Pete Hegseth speaks during a visit to Sierra Space, Monday, February 23, 2026, in Louisville, Colorado.
Aaron Hontiveros | Denver Post | Getty Images
For Anthropic, it’s just the latest conflict with the Trump administration.
Venture capitalist David Sacks, who serves as the White House’s AI and crypto czar, previously accused Anthropic of supporting “woke AI” and “pursuing a sophisticated regulatory capture strategy based on fear-mongering” for its stance on regulation, after a company executive wrote an essay titled “Technological Optimism and Appropriate Fear” in October.
In contrast to other industry executives, including Altman, Apple CEO Tim Cook and Google CEO Sundar Pichai, Amodei has largely avoided contact with President Donald Trump. Notably, Amodei did not attend Trump’s inauguration last year.
In January, Hegseth released a memo titled “Accelerating America’s Military AI Advantage.” He wrote that the Pentagon must not adopt AI models with “embedded ideological ‘alignment'” and that the department “must also utilize models that are not subject to use policy constraints that may limit legitimate military applications.”
Amodei has remained steadfast in the company’s commitment to the safe use of its models, but said Thursday that Anthropic’s “strong desire” is to continue working with the Department of Defense to support U.S. national security.
“If the Department chooses to offboard Anthropic, we will seek to avoid disruption to ongoing military programs, operations, or other critical missions and allow for a smooth transition to another provider,” Amodei wrote.
— CNBC’s Kate Rooney contributed to this report
WATCH: Anthropic rejects Pentagon’s ‘final proposal’ in AI safeguard fight

