Marvell Technology Group Ltd. Headquarters, Santa Clara, California, September 6, 2024.
Shares of Marvell Technology rose nearly 6% on Monday after reports that Google would tap the chip design firm for two new chips that power artificial intelligence workloads.
Until now, Google has relied on Marvell rival Broadcom to design its Tensor Processing Units (TPUs). Broadcom shares fell nearly 2% on Monday following The Information’s report.
A potential deal between Google and Marvell could include TPUs as well as memory processing units, The Information reported on Sunday. Google and Marvell did not immediately respond to requests for comment.
Marvell and Broadcom both help customers translate chip designs into silicon and provide back-end support before the processors are manufactured at giant foundries such as Taiwan Semiconductor Manufacturing Company.
The role is a driver of growth for Marvell and Broadcom as more tech giants design in-house accelerators for AI.
In the rush to produce enough silicon to power AI, it wouldn’t be surprising to see Google diversify its chip deals beyond Broadcom. The partnership between Google and Broadcom is alive and well, having just been extended through 2031 in an expansion deal announced earlier this month.
Meta also made a big deal with Broadcom last week, pledging to deploy 1 gigawatt of its own custom MTIA chips using Broadcom technology.
Marvell stock rose more than 20% in March after the company reported strong fourth-quarter earnings and guidance amid soaring demand for AI. The rally has continued into April, with the stock up nearly 50% so far.
Nvidia also announced a $2 billion investment in Marvell in March. The partnership will give Nvidia customers easy access to application-specific integrated circuits (ASICs) developed by hyperscalers such as Google.
Google was the first hyperscaler to start developing its own custom ASICs to accelerate AI workloads, releasing its first TPU in 2015. As Big Tech scrambles for enough computing and low-cost alternatives to Nvidia’s AI chips, major companies like Amazon, Meta, Microsoft, and OpenAI have all followed suit.
Google released its latest seventh-generation “Ironwood” TPU in November, and could release its next chip at its annual AI conference, Google Cloud Next, later this week.
Google’s custom chips were originally used for internal workloads and have been available to cloud customers since 2018. Meta, Anthropic, and Apple all now use TPUs, as Google increasingly encroaches on a market cornered by Nvidia’s graphics processing units.
Memory has been one of several bottlenecks facing AI chip makers in recent months, with output from makers such as Micron, SK Hynix and Samsung in short supply.
CNBC’s Kristina Partinevelos contributed to this report.
