NVIDIA Poised to Pioneer TSMC’s A16 1.6nm Node as AMD Intensifies the AI GPU Arms Race

NVIDIA is reportedly lining up as the first customer for TSMC’s A16 process node, signaling a notable strategy change for the AI chip leader after years of favoring mature manufacturing technologies. If accurate, this move would put NVIDIA at the front of the line for TSMC’s next major leap in silicon scaling—an uncommon position for the company—and underscores how intense competition in AI compute has become.

Historically, NVIDIA has focused on architectural innovation to drive performance while letting others adopt TSMC’s newest nodes first. Companies like Apple, MediaTek, and Qualcomm have usually taken early slots on fresh process technologies, while NVIDIA opted to extract more from proven nodes. Reports now suggest that pattern is about to change, with NVIDIA targeting A16 to power its next-generation AI platforms as it chases higher performance and efficiency.

A16 is set to be a milestone node for TSMC. It is expected to pair Gate-All-Around (GAAFET) nanosheet transistors with a backside power delivery scheme TSMC calls Super Power Rail. Together, these technologies aim to reduce power losses, boost frequencies, and pack more transistors into the same area, pushing performance-per-watt forward and keeping Moore's Law-style scaling alive for advanced AI workloads. Among these changes, backside power delivery is especially significant: by routing power through the back of the wafer rather than alongside the signal layers, it can meaningfully lower resistance on critical power paths and improve signal integrity.

The timeline under discussion points to TSMC starting high-volume A16 manufacturing around late 2026, with NVIDIA's first A16-based products likely arriving in late 2027 or early 2028. On NVIDIA's roadmap, that window could align with Rubin Ultra or, more plausibly, the Feynman generation, which is expected to bring multiple architectural and platform-level upgrades alongside the process transition.

For TSMC, securing NVIDIA as a lead A16 customer would be a strong win. AI hardware makers are converging on the foundry's most advanced nodes to meet soaring compute demand, which should translate into robust utilization and revenue for this process generation from day one.

If this pivot holds, expect NVIDIA’s future AI GPUs to deliver notable gains in performance, energy efficiency, and density—key advantages for training and inference at massive scale. It would also mark a rare moment where NVIDIA steps onto a brand-new manufacturing node ahead of the pack, reflecting how crucial leading-edge silicon has become in the race for AI dominance.