China’s push to build a self-reliant AI hardware supply chain is accelerating, and high-bandwidth memory (HBM) has become one of the most important battlegrounds. Domestic memory makers are racing to bring HBM chips to market so they can pair them with China’s rapidly expanding lineup of AI accelerators. But despite fast progress across the industry, a major hurdle has emerged: China’s top DRAM manufacturer is reportedly struggling to get HBM3 ready for mass production on schedule.
China’s AI market continues to expand quickly, driven by companies like Huawei and other local chip designers working around restrictions on advanced chip manufacturing. As AI models grow larger and datacenter workloads become more demanding, HBM has become essential. Unlike conventional memory, HBM delivers extremely high bandwidth by stacking memory dies and using advanced packaging to keep data moving quickly between memory and AI processors. In modern AI training and inference, access to HBM directly shapes competitiveness.
At Semicon China 2026, Chinese companies highlighted new DRAM and HBM-related progress. JCET, a major player in semiconductor packaging, showcased an HBM3E packaging solution using 2.5D stacking technology. The company claims the approach can reach up to 960 GB/s of bandwidth per stack and improve interconnect density by about 20% compared with earlier generations. Those are the kinds of performance targets needed for next-generation AI systems, where memory bandwidth can be just as critical as raw compute capability.
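Per-stack bandwidth figures like these can be sanity-checked with simple arithmetic: peak bandwidth is interface width times per-pin data rate. A minimal sketch follows; the 1024-bit interface and 6.4 Gb/s pin rate are assumptions taken from the public JEDEC HBM3 baseline, not figures from the article itself.

```python
def stack_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s (gigabytes per second)."""
    return bus_width_bits * pin_rate_gbps / 8  # divide by 8: bits -> bytes

# JEDEC HBM3 baseline: 1024-bit interface at up to 6.4 Gb/s per pin.
print(stack_bandwidth_gbs(1024, 6.4))  # 819.2 GB/s per stack

# Working backwards from JCET's claimed 960 GB/s, and assuming the same
# 1024-bit interface (an assumption, not stated in the article), the
# implied per-pin rate would be:
print(960 * 8 / 1024)  # 7.5 Gb/s per pin
```

By this rough math, the claimed 960 GB/s sits between the HBM3 baseline and the faster pin rates associated with HBM3E-class parts, which is consistent with JCET positioning it as an HBM3E solution.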
However, the key challenge is not simply having a promising design or packaging concept. Manufacturing capacity and mature production capability are what determine whether these solutions can scale beyond demos and sampling. Industry reports indicate JCET’s limitation isn’t the architecture itself but the ability to manufacture at the required scale, meaning the company may need to rely on outside manufacturing resources to bring its HBM3E efforts to market.
Even more concerning for China’s domestic AI supply chain is the status of CXMT, widely considered China’s leading DRAM maker. CXMT’s HBM3 project—described as its fourth-generation HBM effort—was initially targeted for a first-half 2026 launch. But the timetable now appears to be slipping. Industry sources suggest the company has not yet placed orders associated with mass production, and the project may be delayed into the second half of 2026.
Insiders also claim CXMT’s HBM3 remains in the testing phase, with materials and supply readiness still aligned more with limited sample production than with full-scale output. In practical terms, that implies the memory may not be available in the volumes domestic AI chipmakers need to support new product launches, large deployments, or sustained datacenter demand.
This delay matters because the global HBM roadmap is moving quickly. Major international memory vendors are already pushing forward with HBM3E ramp-ups for next-generation AI datacenters, while HBM4 development is advancing toward mass production. HBM4 is expected to be a foundational component for upcoming datacenter platforms, including next-wave accelerators from NVIDIA and AMD expected later this year. As the rest of the market moves on to newer memory generations and higher production volumes, any slip in HBM3 availability increases the gap China must close.
For domestic AI chipmakers, the short-term impact could be a bottleneck. If local HBM3 production can’t ramp in time, companies such as Huawei may face difficult choices: depend more heavily on external memory solutions where available, redesign around alternative memory configurations, or delay certain next-generation products until domestic HBM supply stabilizes. In an AI market where launch timing and datacenter rollout schedules are critical, memory supply can become a decisive constraint.
The bigger picture is clear: China’s HBM ambitions are real and progressing, but HBM is one of the hardest semiconductor products to industrialize at scale. Advanced stacking, packaging, yield management, and supply chain readiness all have to align. Until HBM3 reaches reliable mass production, China’s domestic AI ecosystem may continue to face pressure at the exact point where bandwidth-hungry models and accelerators demand it most.