On Friday afternoon, just as this interview was about to begin, a news alert appeared on my computer screen. The Trump administration had cut ties with Anthropic, the San Francisco AI company founded in 2021 by Dario Amodei and other former OpenAI researchers who left that company over safety concerns. Secretary of Defense Pete Hegseth had invoked national security legislation aimed at countering foreign supply chain threats and blacklisted Anthropic from doing business with the Pentagon after Mr. Amodei refused to allow Anthropic’s technology to be used for mass surveillance of American citizens or in autonomous armed drones that can select and kill targets without human intervention.
It was a stunning sequence of events. Anthropic now stands to lose up to $200 million worth of contracts and could be barred from working with other defense contractors following President Trump’s post on Truth Social directing all federal agencies to “immediately cease use of Anthropic technology.” (Anthropic later announced that it would challenge the Department of Defense in court, arguing that the supply chain risk designation was legally unsound and had “never before been publicly applied to a U.S. company.”)
Max Tegmark has spent the better part of a decade warning that the race to build ever more powerful AI systems is outpacing the world’s ability to manage them. The Swedish-American physicist and MIT professor founded the Future of Life Institute in 2014. In 2023, he famously helped write an open letter (ultimately signed by more than 33,000 people, including Elon Musk) calling for a moratorium on advanced AI development.
His view of Anthropic’s crisis is unsparing: the company, like its rivals, sowed the seeds of its own trouble. Tegmark’s argument begins not with the Pentagon, but with a choice made years ago and shared across the industry: the decision to resist binding regulation. Companies like Anthropic, OpenAI, and Google DeepMind have long promised to govern themselves responsibly. Earlier this week, Anthropic even removed a central tenet of its safety pledge: a promise not to release increasingly powerful AI systems until it is confident they will not cause harm.
With no rules in place, there are now few protections for these players, Tegmark said. Below is that conversation, edited for length and clarity. You can listen to this week’s full interview on TechCrunch’s StrictlyVC Download podcast.
What was your first thought when you saw this news about Anthropic?
The road to hell is paved with good intentions. It’s very interesting to look back 10 years. At the time, people were excited about developing artificial intelligence to cure cancer, make America prosperous, and make America strong. And here we are, with the U.S. government furious at this company for not wanting its AI to be used for domestic mass surveillance of Americans, and for not wanting to deploy killer robots that can autonomously decide who gets killed without any human input.
Anthropic stakes its entire identity on being a safety-first AI company, yet it has also worked with defense and intelligence agencies (going back at least to 2024). Do you think that’s contradictory at all?
If you take a slightly cynical view of it, then yes. Anthropic has been very good at advertising that it’s all about safety. But if you look at the facts rather than the claims, you’ll see that Anthropic, OpenAI, Google DeepMind, and xAI all have a lot to say about how much they value safety, yet none of them supports binding safety regulation the way other industries have it. And all four of these companies have broken their own promises. First there was Google, whose big slogan was “Don’t be evil.” They dropped it. Then they walked back another longstanding pledge to basically do no harm with AI, so that they could sell AI for surveillance and weapons. OpenAI has removed the word safety from its mission statement. xAI shut down its entire safety team. And now, earlier this week, Anthropic is backtracking on its most important safety commitment: the promise not to release powerful AI systems until it is confident they will not cause harm.
How did a company with such a remarkable commitment to safety end up in this position?
All of these companies, especially OpenAI and Google DeepMind, and to some extent Anthropic as well, have lobbied relentlessly against regulating AI, saying, “Trust us, we’re going to regulate ourselves.” And they lobbied successfully. So right now, AI systems are less regulated in the U.S. than sandwiches. If you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, he won’t let you sell sandwiches until you fix it. But if you say, “Don’t worry, I’m not going to sell sandwiches. I’m going to sell AI girlfriends to 11-year-old kids, the kind that have already been linked to suicides. And I’m going to build something called superintelligence that could overthrow the U.S. government, but I have a good feeling about mine,” the inspector has to say, “Okay, go ahead, just don’t sell sandwiches.”
There are food safety regulations, but no AI regulations.
And I feel like all these companies share responsibility for this. Because if they had come together, taken all the safety promises they had already made, gone to the government, and said, “Take our voluntary commitments and make them American law, so that they bind even our sloppiest competitors,” none of this would have happened. Instead, we are in a complete regulatory vacuum. And we know what happens when companies are allowed to operate with impunity: we got thalidomide, we got tobacco companies pushing cigarettes on our children, we got asbestos, which causes lung cancer. So it’s rather ironic that their own resistance to laws dictating what they can and can’t do with AI is now coming back to haunt them.
There is currently no law prohibiting the development of AI to kill Americans, so the government can demand whatever it wants. If the companies themselves had come forward earlier and said, “We want this law,” they wouldn’t be in this situation. They really shot themselves in the foot.
The corporate counterargument has always been competition with China, and if American companies don’t do it, the Chinese government will. Does that argument hold?
Let’s analyze it. The favorite talking point of AI company lobbyists (who are now better funded and more numerous than those of the fossil fuel industry, the pharmaceutical industry, and the military-industrial complex combined) is that every time someone proposes some kind of regulation, they say, “But China.” So let’s actually look at China. China is moving toward a total ban on AI girlfriends. Beyond age restrictions, it is even considering a ban on all anthropomorphic AI. Why? Not because they want to please America, but because they feel this is ruining China’s youth and weakening China. Obviously, it’s weakening America’s youth, too.
And think about what it means when people say we have to race to build superintelligence to beat China, when we don’t actually know how to control superintelligence, so the default outcome is humanity losing control of Earth to alien machines. The Chinese Communist Party really likes control. Who in their right mind thinks Xi Jinping would let Chinese AI companies develop something that would overthrow the Chinese government? No way. And obviously, it would be very bad for the U.S. government to be overthrown in a coup by the first American company to build superintelligence. This is a national security threat.
That’s a compelling reframing: superintelligence not as an asset, but as a threat to national security. Do you think that view is gaining traction in Washington?
When people in the national security community hear Dario Amodei explain his vision (he famously said there will soon be the equivalent of a nation of geniuses in a data center), they might start thinking, “Wait, did Dario just use the word ‘nation’? Maybe that nation of data-center geniuses should be on the same list of threats I’m monitoring.” That sounds threatening to the U.S. government. And I think soon enough, people in the U.S. national security community will realize that out-of-control superintelligence is a threat, not a tool. This is exactly like the Cold War era. There was competition with the Soviet Union for economic and military supremacy. We Americans won that competition without getting into a second competition over who could put the most nuclear craters in the other superpower’s territory. People realized that would just be mutual suicide. No one wins. The same logic applies here.
What does all this mean for the broader pace of AI development? How close do you think we are to the system you’re describing?
Six years ago, almost every AI expert I knew predicted that AI capable of human-level language and knowledge was decades away, perhaps arriving in 2040 or 2050. They were all wrong, because we already have it. In some fields, AI has progressed very quickly from high school level to college level to PhD level to professor level. Last year, AI won a gold medal at the International Mathematical Olympiad, solving the same problems under the same conditions as the human contestants. Just a few months ago, I wrote a paper with Yoshua Bengio, Dan Hendrycks, and other top AI researchers that gives a rigorous definition of AGI. By that definition, GPT-4 scored 27% and GPT-5 scored 57%. So we’re not there yet, but the jump from 27% to 57% suggests it might not be that long.
When I lectured to students at MIT yesterday, I told them that even if this takes four more years, by the time you graduate, you may not be able to find a job. It’s certainly never too early to start preparing.
Anthropic is now blacklisted, and I’m curious what happens next. Will the other AI giants back them up and say, “We won’t do this work either”? Or will someone like xAI raise a hand and say, “Anthropic didn’t want the contract; we’ll take it”? (Editor’s note: Hours after this interview, OpenAI announced its own contract with the Department of Defense.)
Sam Altman came out last night and said he stands with Anthropic and has the same red line. I respect his courage in saying that. As of the start of this interview, Google hadn’t said anything. I think it would be very embarrassing for the company to just stay silent, and I think many of its own employees would feel the same way. I haven’t heard anything from xAI yet either. It will be interesting to watch. Basically, everyone has a moment when they have to show their true colors.
Is there a version of this story that turns out better?
Yes, and this is why, in a weird way, I’m actually optimistic. There’s an obvious alternative here: start treating AI companies like every other company. If we end this regulatory exemption, they’ll obviously have to do something like clinical trials, proving to independent experts that they know how to control a system before they release something this powerful. Then we get a golden age with all the great benefits of AI and without the existential anxiety. That’s not the path we’re on right now. But it could be.
