OpenAI CEO Sam Altman speaks at the AI Impact Summit gathering in New Delhi, India on February 19, 2026.
Barwika Chhabra | Reuters
OpenAI CEO Sam Altman said Monday that the company “should not have rushed” its recent agreement with the Pentagon and outlined planned amendments to it.
Altman shared what he described as a repost of an internal memo on X, saying the company plans to amend the contract to include new language stating that “AI systems may not be intentionally used for domestic surveillance of American persons and citizens.”
This comes after the ChatGPT maker announced a new deal with the Pentagon on Friday, just hours after U.S. President Donald Trump directed federal agencies to stop using tools from rival AI company Anthropic, and hours before the U.S. government carried out its attack on Iran.
He added that the Department of Defense has given assurances that OpenAI’s tools will not be used by intelligence agencies such as the NSA.
“There are still a lot of things that technology doesn’t address yet, and there are still a lot of areas where we don’t understand the trade-offs that need to be made for safety,” Altman said, adding that the company will work with the Department of Defense on technical safety measures.
The CEO also admitted he made a mistake and said he “should not have rushed” to close the deal on Friday.
“We were really trying to de-escalate the situation and avoid a worse outcome, but I think it just looked opportunistic and sloppy,” he said.
The deal comes after a public feud between Anthropic and the U.S. government over safeguards for the Claude AI system ended without an agreement. Defense Secretary Pete Hegseth said Friday that Anthropic would be designated a supply chain threat.
Following its first contract last year, Anthropic became the first AI lab to deploy its model across the Department of Defense’s classified networks.
Anthropic then sought assurances that its tools would not be used for purposes such as operating or developing autonomous weapons without human oversight or control within the United States.
The controversy began in January after it was revealed that the U.S. military had used Anthropic’s Claude in a raid to capture Venezuelan President Nicolas Maduro, although the company did not publicly object to its use.
OpenAI’s deal with the Pentagon comes shortly after talks between Anthropic and the Pentagon broke down, but Altman told employees in a Thursday memo that OpenAI shares the same “red lines” as Anthropic. He said in a post Friday that the Department of Defense agreed to the company’s restrictions.
It remains unclear why the Pentagon agreed to work with OpenAI instead of Anthropic, but government officials have criticized Anthropic in recent months as overly cautious about the safety of its AI.
The timing of OpenAI’s deal with the Department of Defense sparked an online backlash, with many users reportedly uninstalling ChatGPT and downloading Claude from app stores.
Altman further addressed the controversy in his post, saying, “During our conversations over the weekend, I reiterated that Anthropic should not be designated as a [supply chain risk] and that I hope [the Department of Defense] offers them the same terms that we agreed to.”
Anthropic was founded in 2021 by a group of former OpenAI staff and researchers, including Dario Amodei, who left OpenAI after disagreements over its direction. The company markets itself as a “safety first” alternative.
