In just over a week, negotiations over the Pentagon’s use of Anthropic’s Claude technology have collapsed, with the Trump administration designating Anthropic a supply chain risk and the AI company saying it will challenge the designation in court.
Meanwhile, OpenAI quickly announced its own deal, sparking a backlash that saw users uninstall ChatGPT and propel Anthropic’s Claude to the top of the App Store charts. And at least one OpenAI executive resigned over concerns that the announcement was rushed without proper guardrails in place.
On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what this means for other startups looking to work with the federal government, specifically the Department of Defense. Kirsten wondered whether the tone might change a little.
Sean pointed out that this is an unusual situation in many ways. One reason is that OpenAI and Anthropic are building products that “no one can shut up about.” And importantly, since this is a controversy over how their technology is or isn’t being used to kill people, it will naturally come under more scrutiny.
Still, Kirsten argued that this is a situation that should “give every startup pause.”
Read a preview of the conversation below, edited for length and clarity.
Kirsten: I’m wondering if other startups are starting to take notice of what happened between the federal government, specifically the Department of Defense, and Anthropic, that whole discussion and wrestling match, and whether they’re pausing before going after federal funds. Will the tone change a little?
Sean: I’m curious about that too. In the short term, I think the answer is largely no, because if you really think about the different companies that work with the government, whether they’re startups or more established Fortune 500 companies, especially those working with the Department of Defense, a lot of them fly under the radar.
General Motors manufactures defense vehicles for the Army and has been doing so for a very long time, including working on all-electric and autonomous versions of those vehicles. That kind of thing goes on all the time, but it’s never part of the zeitgeist. I think the problem OpenAI and Anthropic have run into over the past week is that these are companies that make products that a lot of people use and, more importantly, that no one can shut up about.
So there’s been such a spotlight on them, and it has naturally elevated their involvement to a level of scrutiny that most other companies contracting with the federal government, especially with any of its warfighting elements, don’t have to deal with.
The only caveat I’d add is that a lot of the heat in the discussion between Anthropic, OpenAI, and the Department of Defense is very specifically about how their technology is being used to kill people, or how it’s being used in missions that involve killing people. So it’s not just the attention they get and the familiarity we have with their brands; there’s an additional element that feels far more abstract when you think of General Motors as a defense contractor.
I don’t think Applied Intuition or any of the other companies that advertise dual-use technology are going to take a big hit, because there’s no spotlight on them and no common understanding of what their impact is.
Anthony: This story is very unusual, and in many ways specific to these companies and these people. So there’s a lot of really interesting thinking to be done about what the role of technology in government is, and the role of AI in government. I think those are all good questions, worth asking and considering.
However, I also think this is an odd lens through which to look at some of these things, because Anthropic and OpenAI are not really all that different in many ways, or in the stances they take. It’s not as if one company says, “We don’t want to work with the government,” and the other says, “Yes, we will.” Or one says, “You can do whatever you want,” and the other says, “No, we want to set limits.” Both companies have said, at least publicly, that they would like to see limits placed on how AI is used. Anthropic just seems to be digging in harder on the “you can’t change the terms like this” point.
And then there’s the added element of Emil Michael, whom many TechCrunch readers may remember from his Uber days and who is now the chief technology officer of the Department of Defense. Apparently, he and Anthropic’s leadership don’t really like each other. Reportedly.
Sean: Yes, there’s a very big “girls fighting” element here that shouldn’t be overlooked.
Kirsten: Yeah, just a little bit. It is that, though the connotation is a little stronger than I’d put it. But backtracking a bit, what we’re talking about here is that the Pentagon and Anthropic got into a conflict, and it looks like Anthropic lost, though it should be said that the military still very much uses its technology, which is considered important. Now OpenAI is stepping in, and this is still evolving; it may change by the time this episode is published.
The blowback is an interesting headwind for OpenAI, too. I believe ChatGPT uninstalls spiked 295% after OpenAI signed its agreement with the Department of Defense.
To me, all of this is noise around something really important, and dangerous: the Department of Defense was trying to change the terms of an existing contract. That should give every startup pause, because the political machinery at work right now, especially the Pentagon’s, looks different. This is not normal. Government contracts take forever to get locked in, and here the government was trying to change the terms after the fact.
