Google has just made a big move in the AI infrastructure arms race, promoting Amin Vahdat to chief engineer for AI infrastructure, a newly created position that reports directly to CEO Sundar Pichai, according to an internal memo first reported by Semafor. It's a sign of how important the effort has become as Google plans up to $93 billion in capital expenditures in 2025, a figure that parent company Alphabet expects to grow even more next year.
Vahdat is no stranger to this game. The computer scientist, who holds a PhD from the University of California, Berkeley, started out as a research intern at Xerox PARC in the early ’90s and has been quietly building Google’s AI backbone for the past 15 years. Before joining Google in 2010 as an engineering fellow and vice president, he was an associate professor at Duke University and later a professor and SAIC chair at the University of California, San Diego. His academic record is substantial, with roughly 395 published papers, and his research has consistently focused on making computers work more efficiently at large scale.
Vahdat already has a high profile at Google. Just eight months ago at Google Cloud Next, he announced the company’s seventh-generation TPU, called Ironwood, in his role as VP and general manager of ML, Systems and Cloud AI. The specs he showed off were striking: more than 9,000 chips per pod delivering 42.5 exaflops of computing power, more than 24 times that of the world’s No. 1 supercomputer at the time. “The demand for AI computing has increased 100 million times in just eight years,” he told the audience.
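Those headline numbers can be sanity-checked with some back-of-envelope arithmetic. The sketch below assumes Ironwood's published pod size of 9,216 chips (the article rounds this to "more than 9,000"); the implied top-supercomputer figure is simply derived from the "24 times" claim, not an independent benchmark.

```python
# Back-of-envelope check on the Ironwood pod figures announced at Google Cloud Next.

POD_EXAFLOPS = 42.5       # announced pod-level compute
CHIPS_PER_POD = 9216      # assumed: Ironwood's published pod size
SPEEDUP_VS_TOP = 24       # "more than 24 times" the No. 1 supercomputer

# Per-chip compute: convert exaflops to petaflops, divide across the pod.
per_chip_pflops = POD_EXAFLOPS * 1e3 / CHIPS_PER_POD

# Implied compute of the then-No. 1 supercomputer, per the 24x claim.
implied_top_exaflops = POD_EXAFLOPS / SPEEDUP_VS_TOP

print(f"~{per_chip_pflops:.2f} petaflops per chip")        # ~4.61
print(f"~{implied_top_exaflops:.2f} exaflops for the No. 1 system")  # ~1.77
```

The implied ~1.8 exaflops for the leading supercomputer is in the right range for the top systems on the public rankings at the time of the announcement, which makes the "24 times" comparison internally consistent.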
As Semafor pointed out, behind the scenes, Vahdat orchestrates the unglamorous but essential work that keeps Google competitive. That includes the custom TPU chips for AI training and inference that give Google an edge over rivals like OpenAI, as well as Jupiter, an ultra-fast internal network that lets all of its servers communicate and move large amounts of data. (In a blog post late last year, Vahdat explained that Jupiter currently scales to 13 petabits per second, enough bandwidth to theoretically support video calls for all 8 billion people on the planet at the same time.) It’s the invisible plumbing that connects everything from YouTube and search to Google’s massive AI training operations across hundreds of data center fabrics around the world.
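The "video calls for all 8 billion people" claim is easy to verify with a quick division. The sketch below uses only the figures from the blog post quoted above; the ~1.5 Mbps video-call bitrate used for comparison is a rough, assumed figure for a standard-quality call.

```python
# Does 13 Pb/s really cover a video call for everyone on Earth?

PETABIT = 1e15                      # bits in a petabit
jupiter_bps = 13 * PETABIT          # Jupiter's stated bisection bandwidth
world_population = 8e9

# Bandwidth available per person, in megabits per second.
per_person_mbps = jupiter_bps / world_population / 1e6

print(f"~{per_person_mbps:.3f} Mbps per person")  # ~1.625 Mbps
```

At roughly 1.6 Mbps per person, the math works out: that is in the ballpark of what a standard-quality video call consumes, so the claim is a fair illustration of the network's scale.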
Vahdat is also deeply involved in the ongoing development of Borg, Google’s cluster management system, which acts as the brain coordinating all the work done across a data center: it decides which servers should perform which tasks, when they should do them, and for how long. He also oversaw the development of Axion, Google’s first custom Arm-based general-purpose CPU for data centers, which the company announced last year and continues to develop.
In other words, Vahdat is central to Google’s AI story.
In fact, in a market where top AI talent commands astronomical pay and constant hiring, Google’s decision to promote Vahdat to the C-suite may also be an exercise in retention. When you spend 15 years putting someone at the heart of your AI strategy, you ensure that that person stays.