Tesla is turning its custom chip ambitions up to maximum. Elon Musk has signaled that the company isn’t just building silicon to support its cars and AI projects—it wants to go head-to-head with NVIDIA-level performance while producing what he describes as the highest-volume chips in the world.
In recent posts, Musk said Tesla’s next-generation AI chip, called AI5, is now in good shape. With that milestone reached, he claims the company will restart work on Dojo3, its in-house supercomputer effort designed to power large-scale AI training. That’s a notable shift because Tesla previously appeared to be stepping back from Dojo, leaning more heavily on external compute hardware. The reversal suggests Tesla believes it needs far more AI capacity than it can comfortably rent or buy—and that owning the silicon roadmap could become a strategic advantage.
While Tesla hasn’t shared detailed specifications for Dojo3, Musk’s comments point toward a system built around AI5-based clusters. The implication is that Tesla wants a unified silicon platform that can scale across multiple products: vehicles, AI training infrastructure, and potentially robotics like Optimus. If Tesla can standardize on a common architecture, it could simplify development while lowering costs across everything from onboard inference to data-center training.
Musk also outlined an aggressive chip cadence. He suggested Tesla intends to push forward through multiple future generations—up to AI9—on roughly a nine-month cycle. That kind of rapid iteration is more typical of top-tier chip roadmaps than traditional automotive timelines, and it underscores how serious Tesla is about making AI hardware a core pillar of the business.
On performance, Musk claimed AI5 targets “Hopper-class” capability in a single system-on-chip configuration, with “Blackwell-like” performance when using a dual-die setup. He also emphasized cost, describing AI5 pricing as far below what comparable compute would cost from outside suppliers. The message is clear: Tesla wants high-end AI performance without paying high-end AI prices, and it believes custom silicon is the path to that advantage, especially if Full Self-Driving (FSD) becomes mainstream and compute demand skyrockets across the fleet.
Musk framed AI5 as critical to Tesla’s future, going so far as to call solving it “existential” and saying he spent many weekends working personally with the teams to get it across the finish line. That level of leadership focus reflects how much Tesla sees hardware control as a competitive moat, not just a technical side project.
Still, becoming a major chip player isn’t something a company achieves on ambition alone. Designing competitive silicon is only part of the challenge. Verifying designs, managing power and thermals, ensuring long-term stability, and executing reliable manufacturing at scale are all difficult—and they typically take years of refinement. Tesla may be aiming for the front of the pack, but execution will determine whether it can truly rival established AI chip giants.
For now, Tesla’s direction is unmistakable: more in-house compute, faster chip generations, and a push to control both performance and cost at the silicon level. If the company delivers on AI5 and successfully brings Dojo3 back online, it could reshape how Tesla competes—not just as an automaker, but as an AI and computing platform company.