A server rack filled with NVIDIA GPUs is connected to a digital representation of a neural network.

OpenAI Trains GPT-5.2 on NVIDIA AI GPUs as Blackwell and Blackwell Ultra Race Ahead on Performance and Value

OpenAI has unveiled GPT-5.2, calling it its most advanced frontier model yet, built end to end on NVIDIA’s latest silicon. Training and deployment run on a mix of Hopper and Blackwell GPUs across Azure data centers, including H100, H200, and the GB200 NVL72 platform. The goal is clear: smarter models delivered faster, at scale. OpenAI says the upgrade translates into real productivity gains, with enterprise users saving 40–60 minutes per day and power users reclaiming more than 10 hours a week.

Early results show sharp improvements across a wide range of benchmarks compared to the previous generation:
– Knowledge work tasks: 70.9% wins or ties, up from 38.8%
– SWE-Bench Pro (public): 55.6% vs 50.8%
– SWE-bench Verified: 80.0% vs 76.3%
– GPQA Diamond (no tools): 92.4% vs 88.1%
– CharXiv Reasoning (with Python): 88.7% vs 80.3%
– AIME 2025 (no tools): 100.0% vs 94.0%
– FrontierMath Tier 1–3: 40.3% vs 31.0%
– FrontierMath Tier 4: 14.6% vs 12.5%
– ARC-AGI-1 (Verified): 86.2% vs 72.8%
– ARC-AGI-2 (Verified): 52.9% vs 17.6%
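The gains above can be summarized as absolute and relative improvements; a minimal sketch using a subset of the scores listed (the `gains` helper is illustrative, not an official evaluation tool):

```python
# Scores from the benchmark list above: (GPT-5.2, previous generation), in percent.
RESULTS = {
    "SWE-Bench Pro (public)": (55.6, 50.8),
    "SWE-bench Verified": (80.0, 76.3),
    "GPQA Diamond (no tools)": (92.4, 88.1),
    "AIME 2025 (no tools)": (100.0, 94.0),
    "ARC-AGI-2 (Verified)": (52.9, 17.6),
}

def gains(results):
    """Compute absolute (points) and relative (%) improvement per benchmark."""
    out = {}
    for name, (new, old) in results.items():
        out[name] = {
            "absolute_pts": round(new - old, 1),
            "relative_pct": round(100 * (new - old) / old, 1),
        }
    return out

for name, g in gains(RESULTS).items():
    print(f"{name}: +{g['absolute_pts']} pts ({g['relative_pct']}% relative)")
```

The relative column makes clear why ARC-AGI-2 stands out: a 35-point absolute jump from a low baseline is roughly a tripling of the score, while the coding benchmarks move by single-digit relative margins.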

Under the hood, NVIDIA’s new-generation platforms are pushing the pace. With NVFP4 precision and fresh software optimizations, the GB200 NVL72 sees a 45% performance jump in MLPerf Training v5.1 versus v5.0. Blackwell Ultra raises the bar further, delivering 1.9x the speed of GB200 NVL72 and up to 4.2x faster performance than Hopper H100 systems. These results were measured while training Llama 3.1 405B at a 512-GPU scale.
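To see what these multipliers mean in wall-clock terms, note that a speedup factor S cuts training time to 1/S of the baseline. A minimal sketch (the 100-hour baseline is an illustrative assumption, not a figure from the article):

```python
def time_at_speedup(baseline_hours, speedup):
    """Wall-clock time after applying a speedup factor: t = t0 / S."""
    return baseline_hours / speedup

# Illustrative assumption: a hypothetical 100-hour run on the baseline system.
baseline = 100.0
for label, s in [
    ("GB200 NVL72, MLPerf v5.1 vs v5.0 (+45%)", 1.45),
    ("Blackwell Ultra vs GB200 NVL72 (1.9x)", 1.9),
    ("Blackwell Ultra vs Hopper H100 (4.2x)", 4.2),
]:
    print(f"{label}: {time_at_speedup(baseline, s):.1f} h instead of {baseline:.0f} h")
```

Under these assumptions, the 4.2x figure shrinks the hypothetical 100-hour Hopper run to under 24 hours.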

It’s not just about speed; value is improving too. The GB200 NVL72 provides roughly 90% better training performance per dollar than H100-based setups, alongside a 3.2x overall training throughput boost. In the latest MLPerf Training benchmarks, GB200 NVL72 systems came in about 3x faster than Hopper on the largest model tested, with nearly 2x better performance per dollar. Looking ahead, GB300 NVL72 platforms push performance even further, with more than a 4x speedup versus Hopper.
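"90% better performance per dollar" means each dollar buys 1.9x the training throughput, so the cost of completing the same workload drops to 1/1.9, or roughly 53%, of the baseline. A minimal sketch (the $1M baseline cost is a hypothetical assumption for illustration, not a figure from the article):

```python
def cost_for_same_workload(baseline_cost, perf_per_dollar_gain):
    """Cost to finish an identical training workload when performance
    per dollar improves by the given multiplier (1.9 means +90%)."""
    return baseline_cost / perf_per_dollar_gain

# Illustrative assumption: a hypothetical $1,000,000 H100-based training run.
h100_cost = 1_000_000
gb200_cost = cost_for_same_workload(h100_cost, 1.9)  # ~90% better perf/$
print(f"Same workload on GB200 NVL72: ${gb200_cost:,.0f}")
```

The same function applies to the "nearly 2x better performance per dollar" MLPerf result; only the multiplier changes.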

Availability is expanding quickly. Blackwell and Blackwell Ultra GPUs are rolling out broadly across major cloud providers, emerging “neo-cloud” platforms, and server makers. Instances powered by Blackwell are already accessible through leading clouds, while Blackwell Ultra is now shipping from both server manufacturers and cloud partners.

The takeaway for enterprises and developers is straightforward: GPT-5.2 offers stronger reasoning, coding, and math capabilities; NVIDIA’s Blackwell-era systems cut training time and cost; and the combined stack is arriving in the cloud so teams can adopt it sooner. Faster models, lower cost per unit of performance, and better outcomes on real-world tasks make this a meaningful step forward for AI productivity at scale.