OpenAI Eyes $20B Cerebras Chip Deal to Cut Dependence on Nvidia

OpenAI is reportedly making a massive new bet on AI hardware, with a multi-year agreement that could see the company pay chip startup Cerebras Systems more than US$20 billion for AI server capacity. According to details shared in a recent report, the arrangement signals a major push by the ChatGPT maker to secure long-term computing power as demand for advanced AI models keeps accelerating.

The reported Cerebras deal stands out not just for its size, but for what it suggests about OpenAI’s strategy moving forward. Training and running large language models requires enormous amounts of specialized compute, and that market has long centered on Nvidia’s GPUs. By bringing Cerebras into the mix, OpenAI would be taking a clear step toward diversifying its AI hardware supply chain—an approach that can reduce exposure to shortages, rising prices, and the uncertainty of depending too heavily on a single dominant supplier.

Cerebras is known for building high-performance chips and systems designed specifically for AI workloads, positioning itself as an alternative for companies hungry for scalable AI server capacity. If the report is accurate, the agreement reflects how intensely competitive the race for AI infrastructure has become, where long-term access to compute can be just as critical as model breakthroughs.

For OpenAI, locking in large-scale AI server capacity could help ensure smoother development cycles, faster training runs, and more reliable deployment for widely used AI services. More broadly, it highlights a growing industry trend: leading AI firms are increasingly looking beyond the usual hardware options to secure computing resources and keep pace in the rapidly evolving AI landscape.