Alphabet's Google is reportedly in negotiations with Marvell Technology to jointly develop two new chips aimed at improving the efficiency of AI model inference, according to a report citing people familiar with the discussions.
Per the report, the planned hardware includes a memory processing unit, conceived to operate alongside Google's existing tensor processing unit (TPU), and a distinct TPU variant engineered specifically for running AI models. The collaboration would thus pair a memory-focused component with Google's TPU stack while also producing a fresh TPU design optimized for inference workloads.
The discussions come as Google pursues a strategy to make its TPUs a commercial alternative to Nvidia's widely used graphics processing units (GPUs). The report notes that TPU sales have become an important contributor to growth within Google's cloud revenue, supporting the company's effort to demonstrate to investors that its investments in artificial intelligence are producing returns.
According to the report, the two companies aim to finalize the memory processing unit design as soon as next year, after which they would hand the design off for test production. The timeline described in the report indicates an intent to move from design to prototype testing within a relatively short window, subject to the outcome of ongoing talks.
Reuters could not immediately verify the report. Neither Google nor Marvell immediately responded to requests for comment, the report said.
Context and implications
If the reported talks progress, the partnership would center on two chip designs intended to enhance inference efficiency and to integrate memory processing closely with Google's TPU architecture. The account of timing and next steps is limited to the information contained in the report.