NVIDIA’s Huang: The AI Boom Isn’t a Bubble—It’s Driven by Real Demand

Is artificial intelligence headed for a dot-com-style implosion? NVIDIA’s Jensen Huang doesn’t think so. He argues that the surge in AI isn’t a speculative frenzy disconnected from reality, but a wave powered by genuine, growing demand for computing power.

Huang’s comparison centers on how infrastructure is being used. During the internet boom of the late 1990s, telecom companies laid far more fiber-optic cable than the market needed. Much of it remained unused—“dark fiber”—a symbol of the era’s overbuilding and misaligned expectations. Today’s AI landscape, he says, looks very different: virtually every GPU that can be deployed is running real workloads.

If you’re not familiar with the term, dark fiber describes excess fiber-optic cables installed during the early internet buildout. Companies assumed demand would skyrocket and over-provisioned lines to “future-proof” their networks. When usage failed to catch up, those cables sat idle, generating little or no return. It was a classic case of infrastructure outpacing actual need.

AI, Huang contends, is on the opposite trajectory. Many consumers still equate AI with chatbots or image generators, but behind the scenes, the technology is advancing quickly—systems are improving at reasoning, grounding outputs in research, and tackling enterprise-grade tasks. As organizations experiment and scale, two curves are rising in tandem: the number of AI queries being made and the compute required to serve them. That pairing, he suggests, is evidence of real utilization rather than artificial demand.

Whether or not you buy the bubble rebuttal, it’s clear AI still has significant headroom. More industries are testing and deploying models, and that expansion will require immense data center capacity and specialized chips—whether they come from NVIDIA, AMD, or Intel. The opportunity is huge, but it’s not without constraints.

Power is a major one. Training and running advanced models consumes substantial energy, and the grid, cooling, and facility upgrades needed to support large-scale clusters won’t materialize overnight. Another challenge lies in deployment: even if chips are available, cloud providers and enterprises must integrate them efficiently into their stacks to unlock full value. That means software optimization, network design, and system-level engineering—areas that can slow rollouts if not executed well.
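To get a feel for why power is such a binding constraint, a rough back-of-envelope estimate helps. The figures below are illustrative assumptions, not vendor specifications: a per-accelerator draw of roughly 700 W and a 1.5× multiplier for cooling, networking, and conversion losses (in the spirit of a PUE-style overhead factor).

```python
# Back-of-envelope estimate of the electrical power a large GPU
# cluster demands. All numbers are illustrative assumptions.

def cluster_power_mw(num_gpus: int,
                     watts_per_gpu: float = 700.0,   # assumed accelerator draw (W)
                     overhead_factor: float = 1.5,   # assumed cooling/networking/PUE-style overhead
                     ) -> float:
    """Return estimated total facility draw in megawatts.

    overhead_factor folds in everything beyond the raw accelerator
    draw: cooling, networking gear, and power-conversion losses.
    """
    watts = num_gpus * watts_per_gpu * overhead_factor
    return watts / 1_000_000  # W -> MW

# A hypothetical 100,000-GPU training cluster:
print(f"{cluster_power_mw(100_000):.0f} MW")  # → 105 MW
```

Even under these conservative assumptions, a single hypothetical 100,000-GPU cluster lands in the range of a mid-sized power plant's output—which is why grid and facility buildout, not chip supply alone, sets the pace.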

The takeaway: investors may see echoes of the dot-com era in soaring valuations, but the underlying dynamics are different. Unlike the overbuilt, underused infrastructure of the 1990s, today’s AI ecosystem is soaking up compute at an accelerating pace. The bigger questions aren’t about whether demand exists—they’re about how fast the industry can build the power, integration, and operational foundations to sustain it.