At the heart of every empire is an ideology, a belief system that drives its advance and justifies its expansion.
For the European colonial powers, it was Christianity and the promise of saving souls while extracting resources. For today’s AI empire, it is artificial general intelligence that will “benefit all humanity.” And OpenAI is its chief evangelist, spreading zeal across the industry in ways that are reshaping how AI is built.
“I was interviewing people whose voices were shaking from the fervor of their beliefs in AGI,” journalist Karen Hao, bestselling author of “Empire of AI,” told TechCrunch on a recent episode of Equity.
In her book, Hao compares the AI industry broadly, and OpenAI in particular, to an empire.
“The only way to really understand the scope and scale of OpenAI’s behavior is to recognize that it has already grown more powerful than pretty much any nation in the world, consolidating an extraordinary amount of not just economic power but also political power,” Hao said. “They are terraforming the Earth. They are rewiring our geopolitics, all of our lives. And so you can only describe it as an empire.”
OpenAI has defined AGI as “highly autonomous systems that outperform humans at most economically valuable work,” which it says will “elevate humanity” by “aiding in the discovery of new scientific knowledge that changes the limits of possibility.”
These fuzzy promises have fueled the industry’s exponential growth: its massive resource demands, its oceans of scraped data, its strain on energy grids, and its willingness to release untested systems into the world, all in pursuit of a future that many experts say may never arrive.
Hao argues this path was never inevitable, and that scaling is not the only way to wring more progress out of AI.
“You can also develop new techniques in algorithms,” she said. “You can improve the existing algorithms to reduce the amount of data and compute they need.”
But that approach would have meant sacrificing speed.
“When you define the quest to build beneficial AGI as a winner-take-all race, which is what OpenAI did, then the most important thing is speed over anything else,” Hao said. “Speed over efficiency, speed over safety, speed over exploratory research.”

For OpenAI, the best way to guarantee speed was to take existing techniques and “do the intellectually cheap thing,” pumping more data and more supercomputers into them, she said.
OpenAI set the stage, and rather than fall behind, the rest of the tech industry decided to fall in line.
“And because the AI industry has successfully captured most of the world’s top AI researchers, and those researchers no longer exist in academia, the entire field is now being shaped by the agendas of these companies rather than by real scientific exploration,” Hao said.
The expenses are astronomical and climbing. Last week, OpenAI said it expects to burn through $115 billion in cash by 2029. Meta said in July that it would spend up to $72 billion building AI infrastructure this year. Google expects capital expenditures of up to $85 billion in 2025, most of it going toward expanding AI and cloud infrastructure.
Meanwhile, the goalposts keep moving, and the promised “good for humanity” has yet to materialize even as the harms pile up: job losses, wealth concentration, and AI chatbots that fuel delusions and mental illness. In her book, Hao also documents workers in developing countries like Kenya and Venezuela who were exposed to disturbing content, including child sexual abuse material, and paid extremely low wages, around $1 to $2 an hour, for content moderation and data labeling work.
Hao said it’s a false trade-off to accept today’s harms in exchange for AI’s future advancement, especially when other forms of AI are already delivering real benefits.
She pointed to Google DeepMind’s Nobel Prize-winning AlphaFold, which was trained on amino acid sequence data and the complex folded structures of proteins, and can accurately predict the 3D structure of a protein from its amino acids.
“Those are the types of AI systems that we need,” Hao said. “AlphaFold does not create mental health crises in people. AlphaFold does not lead to colossal environmental harms, because it is trained on a fairly small amount of infrastructure. And it does not create content moderation harms.”
Alongside the quasi-religious commitment to AGI sits another narrative: the importance of racing to beat China in AI, so that Silicon Valley can have a liberalizing effect on the world.
“Quite literally, the opposite has happened,” Hao said. “The gap between the US and China has continued to close, and Silicon Valley has had an illiberal effect on the world.”
Of course, many argue that OpenAI and the other AI companies are benefiting humanity by releasing ChatGPT and other large language models, which promise big productivity gains by automating tasks like coding, writing, research, customer support, and other knowledge work.
But OpenAI’s structure, part nonprofit and part for-profit, complicates how it defines and measures its impact on humanity. And that grew even more complicated with this week’s news that OpenAI reached an agreement with Microsoft that moves it closer to eventually going public.
Two former OpenAI safety researchers told TechCrunch they fear that AI labs have begun conflating their for-profit ambitions with their nonprofit missions.
Hao echoed those concerns, describing the danger of being so consumed by the mission that the reality in front of you gets ignored.
“Even as there’s mounting evidence that what they’re building is actually harming a significant number of people, the mission papers all of that over,” Hao said. “There is something really dangerous and dark about that, being so wrapped up in a belief system that you’ve constructed that you lose touch with reality.”