OpenAI reportedly strikes $10 billion deal with Broadcom for custom AI chips to ease GPU crunch
Broadcom has disclosed a one-time $10 billion order for custom AI server racks from what it calls a “fourth major AI developer,” and multiple reports point to OpenAI as the buyer. If confirmed, the partnership would be one of the largest bets yet on purpose-built AI hardware, aiming to reduce reliance on scarce GPUs and secure long-term training capacity.
According to Broadcom, revenue from the order is expected to start contributing in the summer quarter of 2025. The purchase centers on custom accelerators the company calls XPUs—application-specific chips designed to speed up workloads such as model training and inference. Broadcom’s chief executive, Hock Tan, said the new customer meaningfully improves the company’s outlook for 2026, a sign of how significant the deal could be for its AI infrastructure business.
This move isn’t about replacing Nvidia. It’s about hedging against bottlenecks and building a more predictable supply of compute. OpenAI’s CEO, Sam Altman, has openly acknowledged that GPU shortages have slowed releases, including the rollout of the GPT-4.5 model. He has also outlined plans to add tens to hundreds of thousands of GPUs, underscoring the steep logistics and multi-month lead times that continue to challenge the industry.
Custom silicon offers an additional path. By working directly with a chipmaker, OpenAI can lock in specialized hardware tailored to its training stack, potentially improving throughput, reducing latency, and lowering total cost of ownership over time. It also helps diversify the compute mix—pairing Nvidia GPUs with dedicated accelerators and networking to keep massive clusters fed with data.
Networking is a major part of the equation. In August, Broadcom introduced its Jericho technology, designed to link data centers as far as 60 miles apart. That kind of long-reach, high-bandwidth connectivity can make large AI workloads more efficient by turning multiple sites into a cohesive, high-speed fabric—exactly the kind of capability hyperscalers need to train ever-larger models without wasting cycles.
What this could mean for OpenAI
– More predictable access to training compute, reducing exposure to GPU shortages
– Tailor-made accelerators optimized for its models and software stack
– Stronger long-term footing to scale next-generation AI systems
What this could mean for Broadcom
– A deeper push into AI infrastructure with a marquee customer
– A multi-year revenue boost starting in mid-2025 and shaping its 2026 outlook
– Validation for its XPU strategy and advanced data center networking portfolio
The broader takeaway: AI leaders are racing to secure compute at scale, and custom silicon is becoming a strategic lever alongside GPUs. With training runs growing in size and frequency, organizations are mixing and matching accelerators, interconnects, and network fabrics to squeeze out every bit of performance. If OpenAI is indeed behind this $10 billion order, it’s a clear signal that the next wave of AI progress will be built on diversified, tightly integrated hardware—designed not just to run models, but to relentlessly scale them.