Nscale is ramping up its AI data center ambitions in a big way. The company confirmed it will add 30,000 more NVIDIA Vera Rubin GPUs on top of the 100,000 Rubin chips already planned for deployment by 2027. That brings Nscale’s total Rubin GPU roadmap to 130,000, highlighting just how quickly demand for next-generation AI compute is accelerating across Europe.
So why the surge in interest around NVIDIA’s Vera Rubin platform? These chips are being positioned as a major leap forward for AI inference performance, a key ingredient in the industry’s push toward agentic AI—systems designed to plan, reason, and act with far more autonomy than today’s tools. As AI workloads grow beyond training into always-on inference at scale, data center operators are racing to build infrastructure that can deliver high throughput efficiently.
Nscale says the additional capacity is being driven by strong demand from partners and customers, including Microsoft. With this expansion, the company expects its Narvik campus in Norway to reach 230 MW of AI compute capacity, which it describes as the largest onshore infrastructure project in Norway. In practical terms, that scale is meant to support large, power-hungry AI deployments while giving enterprise and developer customers access to cutting-edge GPU resources closer to where they operate in Europe.
The plan isn’t just about adding more chips—it’s also about deploying the newest architectures at rack scale. Nscale intends to use NVIDIA’s Vera Rubin NVL72 systems, which integrate 72 Rubin GPUs per system along with Vera CPUs, creating dense, data center-ready building blocks designed for demanding inference workloads. Nscale has also signaled it will blend multiple NVIDIA platforms over time, including Grace Blackwell and Vera Rubin, to serve a wider range of customer needs as AI models and real-world applications evolve.
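To put the rack-scale framing in perspective, here is a rough back-of-the-envelope sketch of how many NVL72 systems the full 130,000-GPU roadmap would imply. This assumes, purely for illustration, that every Rubin GPU ships inside an NVL72 system at 72 GPUs each; in practice Nscale has said it will blend multiple NVIDIA platforms, so the real mix will differ.

```python
import math

# Figures from the announcement: 100,000 Rubin GPUs already planned
# plus 30,000 newly added, for a 130,000-GPU roadmap.
RUBIN_GPUS_TOTAL = 130_000

# Vera Rubin NVL72: 72 Rubin GPUs per rack-scale system.
GPUS_PER_NVL72 = 72

# Illustrative assumption: all GPUs deployed as NVL72 systems.
racks = math.ceil(RUBIN_GPUS_TOTAL / GPUS_PER_NVL72)
print(racks)  # → 1806
```

Roughly 1,800 rack-scale systems, in other words, which gives a sense of why deployments on this scale are planned years in advance.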
As for where all these GPUs will go, the newly announced 30,000 Rubin GPUs are targeted for Narvik, while the broader 100,000-chip deployment is planned across Nscale’s sites in Europe, including the UK, Norway, and additional locations. All of it is slated to come online by 2027, underscoring the long lead times and massive coordination required to build AI-first data center capacity at this level.
For businesses and developers watching Europe’s AI infrastructure race, Nscale’s move is another clear signal: demand for high-performance NVIDIA AI GPUs—and the data centers required to power them—is only growing, and large-scale inference is becoming one of the most important battlegrounds in the next phase of AI.