NVIDIA Feynman GPUs Push Power Semi Content To $191,000, A 17x Increase Over Blackwell, As The Industry Moves To 800V DC Architectures

As AI data centers race to train larger models and serve more users, the conversation is shifting from raw performance to a tougher constraint: power. New estimates suggest that by the time NVIDIA’s Feynman-era AI racks arrive, power delivery hardware alone could become one of the biggest cost drivers inside the rack—scaling far faster than many people expect.

A research breakdown from Morgan Stanley maps how the “power semiconductor content” inside NVIDIA-style AI racks is projected to grow across multiple GPU generations. In simple terms, it’s an estimate of how many dollars’ worth of power-related chips and components each rack needs in order to safely and efficiently deliver electricity to hundreds of power-hungry GPUs and supporting hardware.

From Blackwell to Feynman, the jump is massive. Using Blackwell’s B200 as the baseline, the estimated power semiconductor content per rack sits around $11,234. As the Blackwell platform scales up, GB200 adds roughly another $3,000, and GB300 adds about $3,500 more. That puts the Blackwell generation at around $17,761 in power semiconductor costs per rack.

But the curve steepens dramatically with the next platforms.

Rubin, expected to arrive after Blackwell, is projected to push power semiconductor content past $33,000 per rack—roughly triple the baseline B200 level. Then comes Rubin Ultra, which is estimated to multiply the power system cost again, landing near $95,000 per rack.

Feynman, planned for later in the roadmap (after Rubin), is where the numbers become eye-opening. The projection shows Feynman racks doubling Rubin Ultra’s power semiconductor content, reaching approximately $191,000 or more per rack. Compared with the Blackwell baseline, that’s an estimated 17x increase—just for the power-related semiconductor content.
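
For readers who want to sanity-check that progression, here is a minimal Python sketch that tallies the per-rack estimates quoted above and the multiple over the B200 baseline. The figures are the report's projections as cited in this article, not official pricing.

```python
# Per-rack power semiconductor content estimates quoted in this article (USD).
# These are projections attributed to Morgan Stanley, not official pricing.
estimates = {
    "B200 (baseline)": 11_234,
    "Blackwell generation (B200 + GB200 + GB300)": 17_761,
    "Rubin": 33_000,
    "Rubin Ultra": 95_000,
    "Feynman": 191_000,
}

baseline = estimates["B200 (baseline)"]
for platform, content in estimates.items():
    print(f"{platform:45} ${content:>9,}  ({content / baseline:4.1f}x the B200 baseline)")
```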

Why power delivery is becoming so expensive

Modern AI racks aren’t just “servers with GPUs.” They’re effectively power-dense computing factories. As rack-level power targets climb toward megawatt-class designs, delivering that power efficiently, safely, and within physical space limits becomes extremely difficult using older distribution methods. The cost surge isn’t only because racks consume more electricity; it’s also because the supporting power electronics must become more advanced, more dense, and more robust.

In the projected cost makeup, the largest slices come from the systems that convert and regulate power at multiple stages:

1) Power Conversion Systems (PCS) are estimated to represent about 27% of the power semiconductor content.
2) Second-stage Voltage Regulation Modules (VRM, including VPD/SiVR designs) account for roughly 26%.
3) Power Supply Units (PSUs) contribute around 19%.
4) Lateral VRMs are estimated around 15%.
5) Intermediate Bus Converters (IBC, first-stage) and battery backup or UPS components land in the 4–5% range each.
6) The remaining small shares come from items like switches, network interface components, and protection parts such as eFuses.

This breakdown highlights a key point: the “hidden” infrastructure inside the rack—conversion, regulation, protection, and delivery—becomes increasingly central as GPU density and rack power rise.
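
As a rough illustration of what those shares mean in dollars, the sketch below applies the approximate percentages to the ~$191,000 Feynman-era estimate. The split and the lumped "other" remainder are assumptions based on the rounded figures above, so treat the output as illustrative only.

```python
# Approximate cost-mix shares from the breakdown above, applied to the ~$191,000
# Feynman-era per-rack estimate. Shares are rounded projections; illustrative only.
FEYNMAN_RACK_ESTIMATE = 191_000  # USD of power semiconductor content per rack

shares = {
    "Power Conversion Systems (PCS)": 0.27,
    "Second-stage VRM (VPD/SiVR)": 0.26,
    "Power Supply Units (PSU)": 0.19,
    "Lateral VRM": 0.15,
    "Intermediate Bus Converters (IBC)": 0.045,
    "Battery backup / UPS": 0.045,
}
shares["Other (switches, NICs, eFuses, etc.)"] = 1.0 - sum(shares.values())

for item, share in shares.items():
    print(f"{item:40} {share:6.1%}  ~${share * FEYNMAN_RACK_ESTIMATE:>9,.0f}")
```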

The shift to 800 VDC: NVIDIA’s answer to the power wall

To keep scaling, NVIDIA has already outlined a move toward 800 VDC power architectures for future AI data centers. This is expected to replace legacy 48V/54V approaches, which run into serious barriers at megawatt rack levels.

The motivation is practical. Higher voltage reduces current for the same power, which helps cut losses and reduces the amount of copper and bulk cabling required. It also allows more compact power distribution hardware, freeing up rack space for compute.
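
As a quick back-of-the-envelope check, the snippet below assumes a hypothetical 1 MW rack and ignores conversion losses; it simply shows how much current a 54 VDC bus versus an 800 VDC bus would have to carry for the same power.

```python
# For a fixed power draw, current scales as I = P / V. Less current means thinner
# conductors (for the same current density) and lower I^2 * R losses in the busbars.
RACK_POWER_W = 1_000_000  # hypothetical 1 MW (megawatt-class) rack

for bus_voltage in (54, 800):
    current_a = RACK_POWER_W / bus_voltage
    print(f"{bus_voltage:>4} VDC bus: {current_a:>8,.0f} A")

# Roughly a ~15x reduction in current when moving from 54 VDC to 800 VDC.
print(f"Current ratio: {(RACK_POWER_W / 54) / (RACK_POWER_W / 800):.1f}x")
```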

The biggest bottlenecks with existing lower-voltage designs include:

Space constraints: Current rack designs can require multiple power shelves. At megawatt scale, sticking with 54 VDC could consume so much rack space that there’s little room left for actual compute hardware. One workaround would be dedicating an entire additional rack just for power supplies—an expensive and space-inefficient solution.

Copper overload: Delivering around 1 MW through 54 VDC requires extremely high current, on the order of 18,000 amps, which in turn demands heavy copper busbars. At data-center scale, that copper requirement becomes enormous and increasingly impractical.

Inefficient conversions: Multiple AC/DC conversion steps across the power chain increase energy losses and add more potential failure points.
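
To illustrate why fewer conversion stages matter, here is a minimal sketch using hypothetical per-stage efficiencies (assumptions for illustration, not figures from the report): end-to-end efficiency is simply the product of the stages, so every extra step compounds the loss.

```python
from math import prod

# Hypothetical per-stage efficiencies, chosen for illustration only.
legacy_chain = [0.96, 0.97, 0.975, 0.96, 0.94]  # e.g. UPS, AC-DC PSU, IBC, VRM stages
hvdc_chain = [0.975, 0.98, 0.96]                # a shorter 800 VDC-style chain

print(f"Legacy chain : {prod(legacy_chain):.1%} delivered end-to-end")
print(f"800 VDC chain: {prod(hvdc_chain):.1%} delivered end-to-end")
```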

Why 800 VDC is attractive for next-gen AI racks

An 800 VDC design addresses several of those pain points:

Higher efficiency and lower losses: Fewer conversion stages and lower current can reduce wasted energy, especially when power is stepped down closer to where it’s used.

Smaller infrastructure footprint: Thinner cables and smaller power components can return valuable rack space to compute density.

Enabled by newer power semiconductors: High-voltage, high-efficiency switching increasingly depends on wide-bandgap semiconductors like Gallium Nitride (GaN) and Silicon Carbide (SiC), which handle these voltages and switching frequencies better than traditional silicon devices.

Better fit for megawatt-class AI “factories”: As racks move toward ultra-dense GPU deployments, this power approach is designed to support the physical and electrical realities of megawatt-level operation.

Safety and stability features: Operating at higher voltage requires additional safety engineering, and 800 VDC designs build this in through specialized relays, hot-swap designs, and isolation and sensing hardware.

When 800 VDC is expected to show up

The first major introduction of 800 VDC is expected with NVIDIA’s Kyber rack era, targeted around 2027. These racks are anticipated to align with the Rubin Ultra family in extremely dense configurations, including discussions of designs hosting hundreds of GPUs in a single rack and using liquid cooling to handle very high total rack power.

What this means for the AI data center market

If these projections hold, the industry is heading toward a world where scaling AI isn’t just about buying more GPUs. It’s equally about whether the power delivery ecosystem—VRM suppliers, power conversion vendors, advanced semiconductor manufacturers, and data center infrastructure providers—can ramp fast enough to meet demand.

The headline number is hard to ignore: an estimated rise from roughly $11K in power semiconductor content per rack in the Blackwell baseline to around $191K+ in the Feynman era. That 17x leap is a signal that the next big battleground in AI infrastructure won’t only be compute—it will be power, efficiency, and the hardware that makes extreme rack densities possible.