Oracle’s staggering forecast has reignited enthusiasm for artificial intelligence, particularly for one specialized chipmaker. The knockout revenue forecast made it clear that demand for AI workloads is set to continue, reassuring investors about long-term returns beyond the resource-intensive “training” phase of AI models. In other words, the shift to the “inference” phase — where a trained model makes predictions and draws real-world conclusions from new data — has arrived. The market has long expected a higher payout from the inference phase of the AI buildout cycle, and many see Broadcom as a huge beneficiary. It is the leading maker of chips considered better-suited and cheaper for inference, an attractive proposition for hyperscalers looking to cut their soaring AI costs.

Shares of Broadcom jumped about 8% on Wednesday, riding the high of Oracle’s roughly 40% pop. Broadcom is a direct beneficiary of Oracle’s forecast, Stephanie Link, chief investment strategist and portfolio manager at Hightower Advisors, said Wednesday. “Inference is obviously going to be the next big driver, and Broadcom is certainly the number one player,” Link told CNBC. “That’s not to say that Nvidia and AMD and many other companies won’t benefit, but I just don’t think that Broadcom is as widely owned or understood as Nvidia … In the past year, Broadcom actually has outperformed Nvidia, and I think it is because we are starting to see slow respect and appreciation for what they’re doing.”

Broadcom is Link’s largest position, accounting for 7% of her portfolio. She has owned the stock for about four years and recently added to her position after its quarterly results.

[Chart: Broadcom (AVGO) stock performance over the past year.]

Broadcom’s shares are up 56% year to date and have jumped more than 145% over the past year.
The company is the leader in merchant ASICs — or application-specific integrated circuits — which are custom processors primarily used for AI inference and networking. For inference workloads, ASICs are generally cheaper than general-purpose GPUs and CPUs: their low power draw in data centers and highly optimized performance for specific tasks translate into a lower cost per inference request.

Link highlighted Broadcom’s high-margin software business and recent acquisitions as attractive aspects of the stock. Broadcom’s revenue is split between semiconductor hardware — its ASICs, networking chips and storage controllers — and software, with about 41% of total revenue coming from infrastructure software. Software tends to be more profitable than hardware, given its lower development costs and potential for recurring revenue. “Why I think Broadcom, for me, makes more sense, is because it’s more diversified, and they had every single segment beat expectations on AI, semis, infrastructure and software. Better EBITDA, better operating income,” Link said, adding that Broadcom also has industry-leading gross margins at about 78.4%.

The pile-in on Broadcom comes after Oracle nearly turned into a hyperscaler overnight. The cloud infrastructure provider on Tuesday surprised investors by reporting that its remaining performance obligations — a measure of contracted revenue that has not yet been recognized — jumped 359% from a year earlier to $455 billion. Oracle now expects $18 billion in cloud infrastructure revenue in the 2026 fiscal year and is calling for the annual figure to reach $144 billion in the 2030 fiscal year.

Oracle chairman and Chief Technology Officer Larry Ellison said during the company’s earnings call that the AI inferencing market will be “much, much larger” than the AI training market. “A lot of people are looking for inferencing capacity.
I mean people are running out of inferencing capacity,” Ellison said during the call, recalling a client that previously asked for “all the capacity you have that’s currently not being used anywhere in the world.” The Oracle co-founder added that, “in the end, all this money we’re spending on training is going to have to be translated into products that are sold, which is all inferencing.”