Marvell may be preparing for a bigger role in Google's in-house AI hardware plans, according to recent market chatter. The company is reportedly in discussions with Google about building application-specific integrated circuits (ASICs) designed for Google's growing AI needs. If these talks turn into formal agreements, they could influence how Google sources future Tensor Processing Unit (TPU) components and related silicon used in its data centers.
One of the most interesting details is the mention of memory processing units (MPUs). These MPUs are described as potential companion chips that would work alongside Google's existing TPUs. In practical terms, that signals a focus on improving how AI accelerators handle memory and data movement, often a major bottleneck in modern AI workloads. Better memory-side performance can translate into faster throughput, improved efficiency, and smoother scaling for large AI models.
The conversations also reportedly include TPU chips aimed at AI inference. While training massive models grabs plenty of attention, inference is where many real-world AI services live, powering everything from search and recommendations to generative AI responses and enterprise automation. Purpose-built inference TPUs could help reduce cost per query, boost performance per watt, and let Google deploy AI features at enormous scale.
Naturally, the possibility of Marvell stepping in adds competitive pressure to current players connected to Google’s TPU supply chain. Broadcom and MediaTek are widely seen as important partners in this space, and any shift toward additional ASIC suppliers would intensify competition around pricing, capacity, and technical differentiation. For Google, expanding its supplier options could mean more flexibility, improved negotiating leverage, and faster iteration on custom AI silicon.
For readers tracking AI chips, data center hardware, and custom silicon trends, this is another sign that the race to control the AI stack is accelerating. Major cloud platforms increasingly want tailored ASIC designs, whether TPU-related hardware, companion memory processors, or inference-focused accelerators, to optimize performance and operating costs at scale.
As of now, these are still discussions rather than confirmed product announcements. But if Marvell and Google move forward, it could shape the next wave of custom AI processors and raise the stakes for every company competing in the AI accelerator and ASIC market.