Jensen Huang Warns TSMC Must Double Output in 10 Years to Keep Up With NVIDIA as the AI Boom Accelerates

TSMC’s chipmaking capacity is already stretched thin, and new comments from NVIDIA CEO Jensen Huang underline just how intense the pressure has become. According to Huang, the world’s leading contract chipmaker may need to more than double its manufacturing output over the next decade, and even that level of expansion would be needed simply to keep pace with NVIDIA’s demand.

That statement does more than highlight NVIDIA’s current momentum. It signals how long the company expects the AI boom to last, and how massive the next wave of AI infrastructure could become. Huang described the scale of TSMC’s planned ramp as an unprecedented buildout, framing it as one of the largest infrastructure investments ever contemplated. In other words, this isn’t a short-term capacity crunch. It’s a multi-year transformation of global semiconductor manufacturing, driven largely by AI data centers and high-performance computing.

TSMC has already been moving aggressively in that direction. Over recent quarters, the company has accelerated fab expansion plans and increased capital expenditures, reflecting the industry’s belief that demand for advanced chips will remain strong for years. Geopolitical risk is another key driver: TSMC is pushing deeper into manufacturing and supply chain investments outside Taiwan, with major projects under way across the US, Japan, and parts of Europe.

A central piece of that global strategy is the company’s effort to build a more complete supply chain footprint in America. The buildout is massive in scope, spanning advanced packaging, semiconductor production, and research and development facilities. Meanwhile, TSMC’s Arizona operations are progressing toward advanced nodes, including 3nm production, with future transitions targeting even more advanced technologies such as A16, subject to industry policies on node deployment and timing.

What’s driving much of this urgency is NVIDIA’s accelerating product roadmap and the sheer scale at which it sells AI hardware. With platforms such as Grace Blackwell and the next-generation Vera Rubin, NVIDIA consumes a significant share of TSMC’s advanced production capacity. That demand has been strong enough that NVIDIA has reportedly become TSMC’s largest customer, surpassing Apple in only a few years—a major shift in the foundry landscape that shows how AI chips have become the biggest priority for cutting-edge manufacturing lines.

Prepayment is another important factor shaping how future capacity is allocated. With foundries increasingly offering mechanisms for customers to secure wafer supply early, companies with the scale and financial muscle to commit ahead of time can lock in a larger share of production. That dynamic strongly favors major AI and HPC buyers, reinforcing the expectation that a meaningful portion of the new capacity coming online will go toward AI accelerators and data center hardware.

All of this helps explain why NVIDIA remains difficult to challenge at the high end. It’s not only competing on chip performance or software ecosystems—it’s competing on scale and supply access. Being able to secure manufacturing capacity early, especially at leading-edge nodes and advanced packaging, can be just as decisive as the silicon itself. And with the AI infrastructure race accelerating worldwide, manufacturing capacity has effectively become one of the most valuable resources in the technology industry.

If Huang’s outlook is even close to accurate, the next decade won’t just bring faster chips—it will bring a historic expansion in semiconductor production, led by TSMC and fueled by surging demand for AI computing.