Delta Unveils 800V DC Power and Liquid Cooling to Fuel the Next Wave of AI Data Centers

At Nvidia GTC 2026, CEO Jensen Huang delivered a clear message for the future of AI infrastructure: power delivery and liquid cooling are no longer “support systems” you add after the fact. They’re becoming central design pillars that must be engineered together from day one.

As AI accelerators grow more powerful and workloads become more demanding, the traditional approach of scaling data centers by simply adding more servers is hitting hard limits. Heat density is rising, energy demands are surging, and inefficiencies in power conversion and distribution are harder to ignore. Huang’s takeaway was straightforward: the next generation of AI racks and full-scale AI factories will need tightly integrated power and thermal management, designed as a single system rather than separate parts.

A major theme at the conference was the growing role of liquid cooling. Air cooling has served data centers for decades, but next-gen AI hardware is pushing beyond what fans and airflow can handle efficiently at scale. Liquid cooling is increasingly viewed as essential for maintaining performance, controlling operating costs, and keeping dense AI clusters stable under constant load.
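A back-of-envelope calculation makes the case for liquid concrete. Using the standard relation Q = ṁ·cp·ΔT, the sketch below compares the volumetric flow of air versus water needed to carry away the same rack heat load. The numbers are illustrative assumptions (a 100 kW rack, a 10 K coolant temperature rise, textbook room-temperature fluid properties), not figures from the conference:

```python
# Back-of-envelope coolant comparison: volumetric flow needed to remove a
# given heat load, from Q = mdot * cp * dT (mdot = mass flow rate, kg/s).
# Fluid properties are rough room-temperature textbook values.

AIR = {"cp": 1005.0, "density": 1.2}      # J/(kg*K), kg/m^3
WATER = {"cp": 4186.0, "density": 997.0}  # J/(kg*K), kg/m^3

def volumetric_flow(heat_w, fluid, delta_t_k):
    """Volumetric flow (m^3/s) needed to absorb heat_w watts
    while the coolant temperature rises by delta_t_k kelvin."""
    mass_flow = heat_w / (fluid["cp"] * delta_t_k)  # kg/s
    return mass_flow / fluid["density"]             # m^3/s

if __name__ == "__main__":
    heat = 100_000.0  # hypothetical 100 kW rack
    dt = 10.0         # allowed coolant temperature rise, K
    air = volumetric_flow(heat, AIR, dt)
    water = volumetric_flow(heat, WATER, dt)
    print(f"air:   {air:.3f} m^3/s")    # roughly 8.3 m^3/s of air
    print(f"water: {water:.5f} m^3/s")  # roughly 0.0024 m^3/s of water
    print(f"ratio: {air / water:.0f}x less volume with water")
```

The three-orders-of-magnitude gap in required flow volume is why dense AI clusters outgrow air movers: water's far higher volumetric heat capacity lets a thin loop do the work of a wind tunnel.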

Just as important is how power gets delivered to these new AI environments. Instead of relying solely on conventional power architectures, the industry is moving toward higher-voltage direct current approaches that can reduce losses, improve efficiency, and better support the extreme power needs of AI racks. This push toward more advanced power delivery is tightly linked to cooling, because every watt lost in conversion or distribution becomes heat that must be removed. That’s why Huang stressed co-design: energy and thermal decisions now directly shape each other.
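The efficiency argument for higher-voltage DC is plain Ohm's-law arithmetic: at fixed power, raising the bus voltage cuts the current proportionally, and resistive loss falls with the square of the current. The sketch below uses hypothetical numbers (a 100 kW rack and a 1 mΩ distribution path; neither figure is from the article) to show the effect:

```python
# Resistive distribution loss at fixed load: P_loss = I^2 * R, with I = P / V.
# Raising the bus voltage lowers the current, and loss falls quadratically.

def distribution_loss_w(rack_power_w, bus_voltage_v, path_resistance_ohm):
    """I^2 * R loss in the distribution path for a given rack load."""
    current_a = rack_power_w / bus_voltage_v
    return current_a ** 2 * path_resistance_ohm

if __name__ == "__main__":
    power = 100_000.0  # hypothetical 100 kW rack
    r = 0.001          # assumed 1 milliohm busbar/cable resistance
    for v in (54.0, 400.0, 800.0):
        loss = distribution_loss_w(power, v, r)
        print(f"{v:>5.0f} V bus: {power / v:7.1f} A, {loss:8.1f} W lost in distribution")
```

With these assumed numbers, moving from a 54 V to an 800 V bus cuts distribution loss by a factor of (800/54)², roughly 220x, and every watt saved is a watt the cooling system never has to remove.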

The broader implication is that building AI-ready data centers is becoming a multidisciplinary engineering challenge. It’s not just about choosing GPUs or building out networking anymore. Leaders planning next-gen AI data centers will need to think about the rack as a complete platform—compute, networking, power, and cooling all engineered to work together. This integrated approach is quickly becoming a competitive advantage, especially as organizations race to deploy larger AI clusters and expand into full AI factory-scale operations.

Huang’s GTC 2026 comments highlight where the market is headed: the future of high-performance AI infrastructure will be defined as much by electrical and thermal design as by raw compute. For anyone investing in AI data centers, the message is timely—success will depend on power-efficient delivery, liquid cooling readiness, and system-wide design choices made early, not patched in later.