
Sam Altman never intended to compete with Nvidia.
OpenAI started with a simple bet: that better ideas, not better infrastructure, held the key to artificial general intelligence. That view changed a few years ago, as Altman realized that more computing power meant more capability and, ultimately, an edge.
On Monday morning, he announced his latest big deal, one that puts OpenAI directly into the custom chip business and into competition with the hyperscalers.
OpenAI is partnering with Broadcom to co-develop racks of custom AI accelerators built specifically for its own models. This is a big change for a company that once believed intelligence came from smarter algorithms rather than bigger machines.
“What we discovered in 2017 was that we were getting the best results from scale,” OpenAI’s CEO said on a company podcast on Monday. “It wasn’t something we set out to prove. It was something we discovered empirically, because pretty much everything else didn’t work.”
The insight that scale, not cleverness, is what matters has fundamentally reshaped OpenAI.
Now, the company is extending that logic even further, collaborating with Broadcom to design and deploy racks of custom silicon optimized for OpenAI workloads.
The deal gives OpenAI deeper control over the stack, from training frontier models to owning the infrastructure, distribution, and developer ecosystem that turns those models into a durable platform.
Altman’s rapid series of deals and product launches is assembling a complete AI ecosystem spanning infrastructure, hardware, and developers: what Apple did for the smartphone, and Microsoft for the PC.

Hardware
Through its partnership with Broadcom, OpenAI is co-developing custom AI accelerators that are optimized for inference and tailored specifically to its own models.
Unlike Nvidia and AMD chips designed for broader commercial use, the new silicon is built for vertically integrated systems, tightly coupling compute, memory, and networking into complete rack-level infrastructure. OpenAI plans to begin deploying these racks in the second half of 2026.
The deal with Broadcom echoes what Apple did with its M-series chips: control the chip, control the experience.
But OpenAI goes further, engineering every layer of the hardware stack, not just the chip.
The Broadcom systems are built on an Ethernet networking stack and designed to accelerate OpenAI’s core workloads, giving the company hardware advantages deeply intertwined with its software edge.
At the same time, OpenAI is moving into consumer hardware, an unusual move for a model-first company.
It acquired Jony Ive’s startup io for $6.4 billion, bringing the legendary Apple designer into its inner circle, a sign that OpenAI wants not only to power the AI experience but to own it.
Ive and his team are exploring new kinds of AI-native devices designed to reimagine the way people interact with intelligence and move beyond screens and keyboards toward more intuitive and engaging experiences.
Early concept reports describe screenless wearable devices that use voice input and subtle haptics, envisioned more as ambient companions than traditional gadgets.
With these two bets, custom silicon and emotionally resonant consumer hardware, OpenAI adds two more branches of the stack under its direct control.

Big deals
OpenAI’s chips, data centers, and power are being combined into one coordinated program called Stargate, which will provide the physical backbone for its AI.
Over the past three weeks, that effort has accelerated with a series of big deals.
OpenAI and Nvidia have agreed on a framework to deploy 10 gigawatts of Nvidia systems, backed by a proposed $100 billion investment. AMD will supply multiple generations of Instinct GPUs under a 6-gigawatt agreement, with warrants allowing OpenAI to acquire up to 10% of AMD if certain deployment milestones are met. And Broadcom’s custom inference chips and racks are scheduled to begin deployment in late 2026 as part of Stargate’s first 10-gigawatt phase.
In short, OpenAI is committing to rooting the future of AI in infrastructure it can call its own.
“We can design the entire system, thinking from the transistor etch all the way to the token that comes out when you ask ChatGPT a question,” Altman said. “You get huge efficiency gains that lead to much better performance: faster models, cheaper models, all of that.”
Regardless of whether OpenAI delivers on all its promises, Stargate’s scale and speed are already reshaping the market, adding hundreds of billions of dollars to partners’ market capitalizations and establishing OpenAI as the de facto leader in AI infrastructure.
No rival seems to be able to match its pace or ambition. And that recognition alone has proven to be a powerful advantage.
Developers
OpenAI’s DevDay revealed that the company isn’t just focused on building the best models, it’s betting on the people who build with them.
“OpenAI is trying to compete on several fronts,” said Gil Luria, head of technology research at DA Davidson, citing the company’s frontier models, consumer chat product, and enterprise API platform. “The company competes in one or more of these markets with virtually every major technology company.”
He said DevDay was aimed at helping companies incorporate OpenAI models into their tools.
“The tools they presented were very impressive. OpenAI is great at commercializing products in an attractive and easy-to-use way,” he added. “That being said, at least for now, they are struggling because the companies they are competing with have far more resources.”
Luria said the main competitors are primarily Microsoft Azure, AWS and Google Cloud.
DevDay also showed how aggressively OpenAI is moving.
The company rolled out AgentKit for developers, new API bundles for enterprises, and an app store offering direct distribution inside ChatGPT, which OpenAI says now has 800 million weekly active users.
“This is Apple’s strategy: to own the ecosystem and become the platform,” said Deedy Das, partner at Menlo Ventures.

Until now, most companies have treated OpenAI as a tool in their stack. But with new features to publish, monetize, and deploy apps directly within ChatGPT, OpenAI is pushing deeper integration, making it difficult for developers to leave.
Microsoft CEO Satya Nadella pursued a similar strategy after taking over from Steve Ballmer.
To build trust with developers, Nadella turned to open source and acquired GitHub for $7.5 billion, a move that signaled Microsoft’s return to the developer community.
GitHub then became the launching pad for tools like Copilot, which cemented Microsoft at the center of the modern developer stack.
“OpenAI and all the big hyperscalers are looking to be vertically integrated,” said Ben van Roo, CEO of Legion, a startup building secure agent frameworks for defense and intelligence use cases.
“Use our models and our compute, and build the next generation of agents and workflows with our tools. The market is huge. We’re talking about SaaS, big systems of record, and literally recreating parts of the workforce,” van Roo said.
SaaS, short for software as a service, refers to companies that sell enterprise software by subscription, including Salesforce, Oracle, and Adobe.
Legion’s strategy is to focus on model-agnostic, secure, and interoperable agent workflows across multiple systems. The company has already deployed within classified environments at the Department of Defense and is embedded in platforms such as NetSuite and Salesforce.
But those same changes also pose risks for model makers.
“Agents and workflows could make some of the larger LLMs more powerful and less necessary at the same time,” he said. “You can build inference agents with smaller, more specific workflows without GPT-5.”
Tools and agents built with leading LLMs have the potential to replace traditional software products from companies like Microsoft and Salesforce.
That’s why OpenAI is working so hard to build infrastructure around its models: it not only makes them more powerful, but also harder to replace.
The real bet is not that the best model will win, but that the companies with the most complete developer loop will define the next platform era.
And that is the emerging vision for ChatGPT: not just a chatbot, but an operating system for AI.

