NVIDIA Says Five-Year-Old GPUs Are Aging Like 'Fine Wine,' As Prices Climb Due To Growing AI Demand

GPU demand is exploding in the middle of the ongoing AI supercycle, and the effects are showing up in a surprising place: the prices of GPUs that are already four to five years old. Instead of becoming cheaper with age, many older NVIDIA data center chips are getting more expensive, pushed up by relentless demand from AI training and inference workloads worldwide.

NVIDIA CEO Jensen Huang recently compared the situation to “good wine,” noting that GPU prices aren’t behaving the way most buyers expect. Tech hardware traditionally depreciates quickly as new generations arrive, but today’s market is different: data centers building or expanding AI capacity still need massive amounts of GPU compute, and even older accelerators remain useful, so they’re being bought, deployed, and traded aggressively rather than quietly phased out.
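As a toy illustration of the contrast described above (every rate and price here is a hypothetical placeholder, not a figure from NVIDIA or the resale market), compounding a yearly price change shows how quickly the two trajectories diverge:

```python
# Toy model: normal hardware depreciation vs. the appreciation the
# article describes. All numbers are illustrative assumptions only.

def project_price(start_price: float, annual_rate: float, years: int) -> list[float]:
    """Compound a yearly price change; a negative rate means depreciation."""
    prices = [start_price]
    for _ in range(years):
        prices.append(round(prices[-1] * (1 + annual_rate), 2))
    return prices

# A hypothetical accelerator losing 30% of its value per year (the usual pattern):
print(project_price(10_000, -0.30, 4))  # → [10000, 7000.0, 4900.0, 3430.0, 2401.0]

# The same hypothetical part gaining 15% per year under sustained AI demand:
print(project_price(10_000, 0.15, 4))
```

After four years the normal curve leaves the part worth roughly a quarter of its launch price, while even a modest sustained gain pushes it well above it, which is why "aging like fine wine" reads as such an inversion of the usual lifecycle.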

This “fine wine” idea used to mean something else in the GPU world: performance gains over time thanks to better drivers and software optimizations. In this case, the phrase is about pricing. The twist is that older GPUs aren’t just holding their value; in some cases, they’re climbing in price because supply is tight and demand keeps accelerating.

A major reason is that GPUs remain the primary workhorse for AI compute. CPUs are also seeing stronger demand, but GPUs are still the core component for most modern AI pipelines. With AI adoption surging across industries, nearly every data center pursuing AI initiatives is competing for the same limited pool of hardware. That competition has contributed to shortages and constraints throughout the semiconductor supply chain.

Those constraints don’t stop at GPUs. Wafer availability, GPU and CPU manufacturing capacity, and even memory components like DRAM are under pressure. With bottlenecks across production and packaging, the result has been price increases across a wide range of technology components—especially the parts critical for AI infrastructure.

What’s especially notable is that this demand is spilling over into older generations of NVIDIA data center GPUs. Hardware that might normally be considered “previous cycle” is still highly viable for AI and compute work, helped by continued software stack improvements, platform support, and ongoing optimization. That keeps these accelerators relevant and productive, even years after launch.

Industry chatter echoes the same theme: demand for older GPUs is accelerating, and pricing for parts such as H100, H200, L40S, and even A100-class accelerators has risen compared with the previous quarter. Capacity remains tight across many fleets, with providers reporting they’re largely sold out as customers race to secure compute for training larger models and serving more AI workloads.

The takeaway is clear: the AI boom is reshaping the normal lifecycle of GPU pricing. Instead of older chips steadily falling in value, scarcity and real-world usefulness are turning them into sought-after assets—hardware that, like “good wine,” is getting more expensive with age.