Tesla Pulls 2nm AI Chip Production Onto US Soil, Splitting AI6 and AI6.5 Between Samsung Texas and TSMC Arizona

Elon Musk has shared fresh details about Tesla’s next steps in building a full in-house AI ecosystem, and the biggest takeaway is clear: Tesla is lining up two separate 2nm chip programs—one with Samsung and another with TSMC—to power future AI workloads across its products and supercomputing efforts.

The update follows Tesla’s recent milestone: the successful tape-out of its AI5 chip. AI5 is being manufactured with Samsung and is part of a broader push by Musk-led companies to design custom silicon instead of relying solely on off-the-shelf solutions. Musk noted that the AI5 team finished tape-out 45 days ahead of schedule, but the speed came with tradeoffs. To hit the aggressive timeline, Tesla had to accept several design concessions—choices that helped move quickly but left room for improvement in the next generation.

That’s where AI6 comes in. According to Musk, Tesla’s AI6 chip is planned to be produced on Samsung’s 2nm process in Texas and is expected to deliver roughly double the performance of AI5. Another notable shift is the memory upgrade: while AI5 uses LPDDR5X, AI6 is set to adopt LPDDR6. This move is positioned as one of the ways Tesla will correct the compromises made during AI5’s rapid development, improving the platform’s overall capability and efficiency.
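The memory upgrade matters because peak DRAM bandwidth scales with per-pin data rate and bus width. As a back-of-envelope illustration (the per-pin rates and the 128-bit bus width below are assumed for illustration only, not confirmed figures for Tesla's designs):

```python
# Back-of-envelope peak-bandwidth estimate for an LPDDR interface.
# Formula: bandwidth (GB/s) = data_rate (Gbit/s per pin) * bus_width (bits) / 8
# The per-pin rates and the 128-bit bus width are illustrative assumptions,
# NOT confirmed specs for Tesla's AI5 or AI6 chips.

def lpddr_peak_bandwidth_gbs(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s for a DRAM interface."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

# Assumed per-pin rates for illustration: LPDDR5X ~8.533 Gbit/s, LPDDR6 higher.
bw_lpddr5x = lpddr_peak_bandwidth_gbs(8.533, 128)
bw_lpddr6 = lpddr_peak_bandwidth_gbs(10.667, 128)
print(f"LPDDR5X (assumed rate): {bw_lpddr5x:.1f} GB/s")
print(f"LPDDR6  (assumed rate): {bw_lpddr6:.1f} GB/s")
```

The point of the sketch is only the scaling relationship: any per-pin rate increase from a newer LPDDR generation lifts peak bandwidth proportionally for the same bus width.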

After AI6, Tesla intends to roll out an enhanced variant called AI6.5. This chip is described as a further-optimized design with additional performance gains, and it’s slated to be manufactured using TSMC’s 2nm technology in Arizona. In other words, Tesla is not betting everything on a single foundry for its future AI roadmap—it’s leveraging both Samsung and TSMC at the cutting edge of 2nm manufacturing.

Beyond raw process-node improvements, Tesla’s upcoming chip designs also target architectural changes that could significantly boost real-world AI throughput. One of the most discussed upgrades involves the TRIP AI compute accelerators tied to on-chip SRAM. The plan is to halve the number of these SRAM-dedicated accelerators, a change expected to make effective memory bandwidth within the SRAM cache dramatically higher: described as an “order of magnitude” greater than DRAM bandwidth for computations that stay inside the cache. For AI inference and other bandwidth-sensitive workloads, that kind of on-chip efficiency can be just as important as headline performance.
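To see why an on-chip bandwidth gap of that size matters, consider a simple bandwidth-bound latency model: for a memory-bound kernel, time is roughly bytes moved divided by bandwidth, so a 10x bandwidth advantage for SRAM-resident data translates directly into a 10x latency advantage. A minimal sketch (all bandwidth and working-set numbers are illustrative assumptions, not Tesla figures):

```python
# Toy bandwidth-bound latency model: time = bytes_moved / bandwidth.
# An "order of magnitude" SRAM-vs-DRAM bandwidth gap becomes an
# order-of-magnitude latency gap for memory-bound kernels.
# All figures below are illustrative assumptions, not Tesla specs.

def streaming_time_us(bytes_moved: int, bandwidth_gb_s: float) -> float:
    """Time in microseconds to move `bytes_moved` at the given bandwidth."""
    return bytes_moved / (bandwidth_gb_s * 1e9) * 1e6

working_set = 8 * 1024 * 1024        # 8 MiB of weights/activations (assumed)
dram_bw, sram_bw = 200.0, 2000.0     # GB/s; a 10x gap, per the article's claim

t_dram = streaming_time_us(working_set, dram_bw)
t_sram = streaming_time_us(working_set, sram_bw)
print(f"DRAM-resident: {t_dram:.1f} us, SRAM-resident: {t_sram:.1f} us")
print(f"Speedup when the working set stays on-chip: {t_dram / t_sram:.0f}x")
```

This is the logic behind keeping inference working sets inside the SRAM cache: compute units stall far less waiting on data, independent of any increase in raw compute.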

Tesla’s AI hardware and inference leadership also indicated the team has been removing legacy blocks carried over from older internal IP, a cleanup effort that should help streamline future designs and make the silicon more purpose-built for upcoming needs. The end goal is a more efficient and scalable chip platform that can support Tesla’s broader AI ambitions, including projects tied to Tesla, SpaceX, and xAI.

On timelines, AI5 is expected to enter volume production in the 2026 to 2027 window, while the next-generation AI6 and AI6.5 chips are targeted for a broader 2027 to 2029 timeframe. Musk has also pointed to longer-range plans tied to Tesla’s Terafab concept, under which the company would scale more of its custom AI chip production in-house once that initiative is completed. With Terafab still years away, however, Tesla is expected to remain an important customer for major chip manufacturers as it ramps its AI compute strategy.

With AI5 now taped out and AI6/AI6.5 mapped to advanced 2nm nodes, Tesla’s roadmap signals a more aggressive push into custom AI hardware—one that prioritizes faster iteration, higher performance per watt, and tighter integration between silicon, memory, and inference software.