Morgan Stanley expects Nvidia and Broadcom to reap further gains as momentum in artificial intelligence continues.

The bank reiterated its overweight rating on Nvidia, the leading name in AI, and raised its price target to $250 from $235, implying roughly 41% upside from Nvidia's Friday closing price. "We continue to see NVIDIA maintaining dominant market share, as the competitive threat is exaggerated, but we don't know exactly what will turn things around," analyst Joseph Moore wrote. "Customers' biggest concern over the next 12 months is whether they will be able to procure enough NVIDIA products in general and Vera Rubin in particular."

Moore said his multiple assumptions still leave Nvidia cheaper than peer Broadcom but expensive relative to the broader semiconductor group, "albeit shrinking as the absolute level of margins and revenues makes it more difficult for multiples to grow."

The analyst also affirmed his overweight rating on Broadcom and raised his price target to $443 from $409, implying about 10% upside from Broadcom's Friday closing price. Broadcom stock is up 74% this year.

Moore highlighted Broadcom's large AI exposure as a positive and praised the company's growth potential, pointing to the tensor processing units (TPUs) Broadcom designs for Google as a tailwind. "The supply chain for Google's Tensor processors, designed and sold by AVGO, is being revised upward, albeit at the slight expense of other Broadcom customers, with homegrown versions also becoming a major focus," he wrote. "Even before the positive reviews for Gemini created the current wave of enthusiasm, we were hearing from multiple sources in the TPU supply chain, including analog companies, memory companies, and ODM partners, that build plans were being revised upwards." However, he cautioned that this TPU strength comes partly at the expense of other chip expectations for Broadcom.
"Specifically, we believe that the MTIA build for customer Meta (with volumes still expected in 2H 2026) has been slightly delayed and replaced in the outlook by the expected TPU usage," he added. "Unconfirmed reports suggest that Meta and OpenAI are using TPUs, at least in part, to get accustomed to using ASICs in order to eventually move to internal ASICs."
