
Pentagon Chief Technology Officer Emil Michael said Thursday that Anthropic’s Claude artificial intelligence model “contaminates” the Pentagon’s supply chain because it incorporates “different policy preferences.”
“We cannot allow companies with different policy preferences that are built into their models, through their constitution and their soul and their policy preferences, to contaminate our supply chain so that our warfighters are getting ineffective weapons, ineffective body armor,” Michael said on CNBC’s “Squawk Box.” “That’s where the supply chain risk designation comes from.”
Anthropic is the first U.S. company to be publicly designated a supply chain risk, an unusual step historically reserved for foreign adversaries. The designation requires defense contractors and vendors to certify that they are not using Claude in their work with the Department of Defense.
Michael’s comments Thursday were the clearest explanation yet of why the Department of Defense believes Anthropic is a supply chain risk. The agency sent an official letter to the company earlier this month informing it of the designation, but the letter did not outline what risk Claude posed to national security.
Anthropic sued the Trump administration on Monday, calling the government’s actions “unprecedented and illegal.” Anthropic said in a filing that the company has suffered “irreparable” harm and that hundreds of millions of dollars worth of contracts are at risk.
“This is not meant to be a punishment,” Michael said Thursday.
He added that Anthropic has a “huge commercial operation,” with “a small portion” coming from the U.S. government. Michael also dismissed Anthropic’s claims that the government was actively lobbying companies and telling them not to use Anthropic, calling them “rumors.”
“The Department of Defense is not going to tell companies what to do unless it’s in our supply chain,” he said.
Anthropic was founded in 2021 by a group of researchers and executives who defected from OpenAI. The company is best known for its family of Claude models and has had early success selling to large corporations, including the Department of Defense.
The startup has drafted and published a “constitution” used to train the mainline, publicly accessible Claude model. Anthropic says on its website that the constitution plays a “vital role” in this process and that its content “directly shapes Claude’s actions.”
Anthropic shared an updated version of the Claude Constitution in January.
“It explains what it means for Claude to be broadly safe, ethical, and helpful while adhering to guidelines,” Anthropic said in a blog post. “The constitution provides Claude with information about its situation and advice on how to handle difficult situations and trade-offs, such as balancing honesty with compassion or protecting sensitive information.”
As CNBC previously reported, Anthropic’s models have been used to support U.S. military operations in Iran even after the designation. Palantir Chief Executive Officer Alex Karp told CNBC on Thursday that his company, a major defense contractor, is still using Claude.
Michael said the transition to another vendor will take time and that the Department of Defense cannot “just strip” Anthropic’s technology out of its systems overnight. The agency said it has a transition plan.
“It’s not like just removing Outlook from your desktop,” Michael said.
WATCH: CNBC’s full interview with Under Secretary of Defense Emil Michael.

