OpenAI is teaming up with semiconductor leader Broadcom to co-develop custom AI chips built for the next wave of large language models. This move isn’t just about making models faster—it’s about taking control of the hardware stack, easing reliance on Nvidia, and laying the groundwork for the next generation of AI.
The collaboration signals a major shift from purely software-driven innovation to a tight integration of hardware and software. OpenAI and Broadcom are designing AI accelerators and high-performance networking tailored for training and inference at massive scale. The goal is clear: handle enormous workloads more efficiently, boost throughput, and cut power consumption as models continue to grow in size and complexity.
Broadcom’s role goes beyond raw compute. The company will deliver advanced networking, optical interconnects, and other data center hardware that keeps systems running fast and reliably under heavy AI training loads. Initial systems are targeted for 2026, with broader deployment expected by 2029. The timing aligns with recent comments from Sam Altman urging the industry to expand global chip manufacturing capacity.
Unlike some competitors that design and manufacture everything in-house, OpenAI’s strategy is to co-develop with an established partner. This approach aims to accelerate time-to-market, control costs, and preserve design control over the chips themselves, while leveraging Broadcom’s production expertise and infrastructure capabilities.
What this means in practice:
– Reduced dependence on Nvidia GPUs for AI training and inference
– Improved efficiency, translating to lower power consumption and reduced training costs
– Faster scaling to support larger models and bigger datasets with greater reliability
Building custom silicon is a heavy lift. It demands years of R&D, substantial capital, and deep collaboration between AI researchers, systems engineers, and hardware teams to ensure seamless integration with current and future models. But the payoff can be transformative.
By creating its own hardware foundation, OpenAI is aiming for long-term sustainability and performance leadership. The company says custom chips will let it embed learnings from frontier model development directly into the silicon, unlocking new levels of capability and intelligence. The plan also includes deploying up to 10 gigawatts of custom AI accelerators—an indicator of the scale OpenAI is targeting.
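To put the 10-gigawatt figure in perspective, here is a rough back-of-envelope estimate of how many accelerators a deployment of that size might contain. The per-chip power draw and overhead factor below are illustrative assumptions, not OpenAI or Broadcom specifications:

```python
# Back-of-envelope: accelerators in a 10 GW deployment.
# Per-chip power and overhead are assumed, not announced figures.

TOTAL_POWER_W = 10e9   # 10 gigawatts, the scale cited for the rollout
CHIP_POWER_W = 1_000   # assumed ~1 kW per accelerator (typical for high-end AI chips)
OVERHEAD = 1.3         # assumed PUE-style overhead for cooling, networking, etc.

chips = TOTAL_POWER_W / (CHIP_POWER_W * OVERHEAD)
print(f"~{chips:,.0f} accelerators")  # on the order of millions of chips
```

Under these assumptions the deployment works out to several million accelerators, which illustrates why efficiency gains per chip compound so dramatically at this scale.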
Will this partnership keep OpenAI ahead in the AI race? It certainly strengthens its hand. With purpose-built chips, advanced networking, and a clear path to large-scale rollout, OpenAI is positioning itself to train bigger models faster, more efficiently, and at lower cost, key advantages as AI competition heats up.