OpenAI Eyes Massive 30GW AI Computing Expansion by 2030, Leaving Rivals Far Behind

OpenAI is ramping up for one of the biggest computing expansions the AI industry has seen so far, with a stated goal of reaching 30 gigawatts (GW) of AI compute capacity by 2030. If that target holds, it would represent a massive leap in the infrastructure needed to train and run large-scale generative AI systems, and it raises an important question: can the semiconductor supply chain actually keep up?

The company says its growth since the launch of ChatGPT has been driven by rapidly increasing demand, with both revenue and compute usage scaling quickly as more users and businesses adopt AI tools. OpenAI’s takeaway is clear: interest in AI isn’t slowing down, and the next wave of adoption will require far more capacity than today’s data centers can provide.

To meet that demand, OpenAI plans to grow its compute capacity nearly 16x by 2030 relative to its 2025 baseline of roughly 1.9GW. The company had previously committed to building out 10GW of AI compute, with more than 8GW of that already identified. The new 30GW ambition pushes well beyond those earlier targets.
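The "nearly 16x" figure follows directly from the two numbers cited above, as a quick back-of-the-envelope check shows (the 1.9GW baseline is the article's approximate figure, not an exact measurement):

```python
# Sanity-check the "nearly 16x" expansion multiple using the
# figures cited in the article.
baseline_gw = 1.9   # approximate 2025 AI compute capacity
target_gw = 30.0    # stated 2030 goal

multiple = target_gw / baseline_gw
print(f"{multiple:.1f}x")  # ~15.8x, i.e. "nearly 16x"
```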

A buildout of this scale would ripple far beyond OpenAI itself. More compute means more AI accelerators, more supporting hardware, more data centers, and significantly more electricity. It also implies a wider build cycle across the tech and energy sectors—new manufacturing capacity, new facilities, upgraded infrastructure, and more jobs tied to semiconductor production and data center construction.

OpenAI’s long-term compute plan also connects to growing industry talk about custom AI hardware. The company has outlined ideas for a bespoke AI chip design that could incorporate multiple stacks of high-bandwidth memory (HBM), a component that has become critical for modern AI performance. But HBM is also one of the biggest bottlenecks right now. Supply remains tight, and memory makers are racing to expand production capacity to keep pace with AI-driven demand.

There’s also a broader downside that’s hard to ignore: the current AI boom has already put pressure on the entire tech market. As more chipmaking capacity gets pulled toward AI accelerators and data center parts, everyday consumer products can feel the impact through higher prices and constrained availability—whether that’s smartphones, PCs, or gaming hardware. That’s why the next phase of expansion can’t just be “more factories.” It needs to be smarter, more efficient semiconductor manufacturing that can serve both AI growth and mainstream consumer demand at the same time.

If OpenAI reaches 30GW by 2030, it won’t just be a milestone for one company—it could become a defining stress test for the global semiconductor industry, the HBM supply pipeline, and the power infrastructure required to fuel the AI era.