OpenAI is racing to lock down more computing power after delays hit its ambitious Stargate data center project, a slowdown that’s now forcing the company to look elsewhere to keep its AI development on track.
According to a report from The Information, progress on Stargate has stalled enough that OpenAI is actively pursuing backup options to cover the shortfall. That means leaning more heavily on existing cloud partners while also exploring alternative hardware routes that could help supply the massive compute required to train and run advanced AI models.
The urgency is easy to understand. As demand for AI tools continues to surge, competition for high-performance chips and data center capacity has become intense. Even a temporary delay to a major infrastructure plan like Stargate can ripple across product timelines, research schedules, and the ability to scale AI services reliably.
To stay ahead, OpenAI’s strategy appears to center on diversifying where its compute comes from. Relying on multiple cloud providers and considering different hardware options can help reduce dependence on any single project or supplier. It’s also a practical way to maintain momentum while long-term infrastructure efforts work through construction, supply chain, and deployment hurdles.
For anyone watching the AI industry, this highlights a bigger trend: the next phase of AI isn’t just about smarter models, it’s about who can secure enough computing capacity to build and deliver them at scale. OpenAI’s push for cloud compute alternatives shows just how critical infrastructure has become in the race to power the future of artificial intelligence.