NVIDIA Announces GB200 NVL4 With Quad Blackwell GPUs & Dual Grace CPUs, H200 NVL Now Generally Available

NVIDIA is making waves in the world of high-performance computing and artificial intelligence with its latest innovations: the Blackwell GB200 NVL4 solution and the Hopper H200 NVL. These cutting-edge hardware platforms promise to revolutionize enterprise servers and elevate the capabilities of AI and HPC workloads.

The NVIDIA H200 NVL is now generally available, bringing PCIe-based Hopper cards that can link up to four GPUs within an NVLink domain, delivering seven times the bandwidth of a standard PCIe connection. This remarkable increase in speed makes the H200 NVL a versatile option for data centers looking to optimize for both hybrid HPC and AI applications. Compared with its predecessor, the card boasts 1.5 times more high-bandwidth memory, 1.7 times the LLM inference performance, and an impressive 1.3 times the HPC performance. Under the hood, it features 114 streaming multiprocessors, totaling 14,592 CUDA cores and 456 tensor cores, with peak FP8 throughput of roughly 3.3 petaflops (with sparsity). This powerhouse comes equipped with 141 GB of HBM3e memory delivering 4.8 TB/s of bandwidth, with a TDP configurable up to 600 Watts.
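The core counts above follow directly from Hopper's SM layout, where each streaming multiprocessor carries 128 FP32 CUDA cores and 4 fourth-generation tensor cores. A quick back-of-the-envelope check in Python (the per-SM constants are the standard Hopper figures, not values stated in the announcement):

```python
# Sanity-check the published H200 NVL shader configuration.
# Hopper SM layout: 128 FP32 CUDA cores and 4 tensor cores per SM.
SMS = 114
CUDA_CORES_PER_SM = 128
TENSOR_CORES_PER_SM = 4

cuda_cores = SMS * CUDA_CORES_PER_SM      # 114 * 128
tensor_cores = SMS * TENSOR_CORES_PER_SM  # 114 * 4

print(cuda_cores, tensor_cores)  # 14592 456
```

The multiplication reproduces exactly the 14,592 CUDA cores and 456 tensor cores quoted for the card.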

On the Blackwell front, NVIDIA introduces the GB200 NVL4 module, a significant leap from the original GB200 Grace Blackwell Superchip. The NVL4 module doubles that configuration to two Grace CPUs and four Blackwell GPUs, forming a 4-GPU NVLink domain with 1.3 terabytes of coherent memory. This module promises a 2.2-fold improvement in simulation and a 1.8-fold boost in training and inference performance, making it a formidable choice for cutting-edge AI development. That performance comes at a price: users can expect a power draw reaching close to 6 kW, in line with the demanding specifications of advanced AI workloads.
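To put the quoted speedup factors in concrete terms, the sketch below projects wall-clock times from the 2.2x (simulation) and 1.8x (training/inference) figures in the announcement. The baseline hours are hypothetical placeholders for illustration, not measured data:

```python
# Illustrate what the quoted GB200 NVL4 speedups mean for wall-clock time.
# Speedup factors are from NVIDIA's announcement; baselines are hypothetical.
SIM_SPEEDUP = 2.2
TRAIN_SPEEDUP = 1.8

def projected_hours(baseline_hours: float, speedup: float) -> float:
    """Wall-clock time after applying a quoted speedup factor."""
    return baseline_hours / speedup

sim_hours = projected_hours(22.0, SIM_SPEEDUP)      # ~10 h instead of 22 h
train_hours = projected_hours(18.0, TRAIN_SPEEDUP)  # ~10 h instead of 18 h

print(round(sim_hours, 2), round(train_hours, 2))
```

In other words, a hypothetical 22-hour simulation or 18-hour training run would both compress to roughly 10 hours under these factors.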

NVIDIA’s concerted efforts to push the boundaries of AI computing are further evidenced by its recent world-record results in MLPerf v4.1 for both training and inference. These results are not only a testament to the power of the Blackwell architecture but also highlight the continued enhancements of the Hopper platform. Looking ahead, NVIDIA is accelerating its AI roadmap to a one-year cadence, with upcoming architectures such as Blackwell Ultra and Rubin on the way.

With these groundbreaking developments, NVIDIA positions itself as a key player propelling AI technology into the future, showcasing hardware solutions that are set to transform the landscape of high-performance computing.