Huawei and Cambricon on Track to Break the Million-Unit AI Chip Barrier by 2026

China’s race to build a homegrown AI chip ecosystem is gathering speed. Fresh forecasts point to a significant ramp in domestic AI compute in 2025, with JPMorgan expecting Huawei to ship roughly 600,000–650,000 AI chips. Cambricon Technologies is also set to scale, with projected shipments of 125,000–150,000 units. Combined, that puts local suppliers on track to deliver around 725,000 to 800,000 AI accelerators in 2025—enough to meaningfully bolster data center capacity and reduce reliance on foreign hardware.
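As a quick sanity check, the combined figure quoted above is just the element-wise sum of the two vendors' forecast ranges. A minimal sketch, using only the JPMorgan estimates cited in this article:

```python
# Forecast 2025 shipment ranges (units), per the JPMorgan estimates above.
huawei = (600_000, 650_000)
cambricon = (125_000, 150_000)

# Combined range: sum the lower bounds together and the upper bounds together.
combined = (huawei[0] + cambricon[0], huawei[1] + cambricon[1])
print(combined)  # (725000, 800000)
```

The lower and upper bounds add independently because the two forecasts are separate estimates, so the combined total spans the full 725,000–800,000 range.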

Why this matters goes well beyond the numbers. A larger pipeline of domestically produced AI chips gives Chinese cloud providers, research labs, and enterprises the headroom to train and deploy larger models, roll out more reliable inference services, and support fast-growing generative AI workloads. It also strengthens supply-chain resilience, helping mitigate the risk of disruptions and export constraints while aligning performance with local software frameworks and use cases.

For Huawei, the forecasted volumes suggest an aggressive push to meet demand from hyperscale data centers and enterprise AI projects. Cambricon’s expected shipments reinforce its role as a key player in specialized accelerators designed for AI training and inference. While these figures don’t capture every vendor participating in China’s AI hardware landscape, they highlight a clear, concerted move to scale up domestic compute.

Several forces are driving this acceleration. AI adoption is expanding across industries like finance, e-commerce, manufacturing, healthcare, logistics, and telecom. As models grow more complex, the need for high-performance, energy-efficient accelerators increases. Data sovereignty and latency-sensitive applications also favor local infrastructure, encouraging investment in domestic data centers and on-premise AI systems. Meanwhile, a maturing ecosystem of developer tools, frameworks, and optimization libraries is making it easier to extract value from locally produced chips.

There are challenges ahead. Manufacturing capacity and yields must keep pace with demand, especially for advanced nodes and high-bandwidth memory integration. Software compatibility and ecosystem maturity will remain critical—developers need robust compilers, kernels, and toolchains to fully utilize the hardware. Power efficiency and thermal management are increasingly important as operators seek to contain energy costs while scaling AI clusters. Finally, real-world performance, cost per token for inference, and total cost of ownership will shape buyer decisions as competition intensifies.
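To make the cost-per-token point concrete, one common way buyers frame it is to amortize hardware cost and energy over the tokens an accelerator can serve during its lifetime. A minimal sketch of that model follows; every input value is a hypothetical placeholder for illustration, not a figure from this article or any vendor:

```python
# Illustrative cost-per-token model. All numbers passed in below are
# hypothetical placeholders, not real vendor or market figures.
def cost_per_million_tokens(
    accelerator_price: float,        # purchase price per accelerator (USD)
    lifetime_years: float,           # amortization period (years)
    power_kw: float,                 # average power draw per accelerator (kW)
    electricity_usd_per_kwh: float,  # energy price (USD per kWh)
    tokens_per_second: float,        # sustained inference throughput
    utilization: float,              # fraction of time actually serving traffic
) -> float:
    seconds = lifetime_years * 365 * 24 * 3600
    # Total tokens served over the accelerator's lifetime.
    tokens = tokens_per_second * utilization * seconds
    # Capital cost plus lifetime energy cost.
    capex = accelerator_price
    energy = power_kw * (seconds / 3600) * electricity_usd_per_kwh
    return (capex + energy) / tokens * 1_000_000

# Example with made-up inputs: a $15,000 accelerator, 3-year life, 0.7 kW,
# $0.08/kWh power, 2,000 tokens/s sustained, 50% utilization.
print(round(cost_per_million_tokens(15_000, 3, 0.7, 0.08, 2_000, 0.5), 4))
# → 0.1741 (USD per million tokens)
```

The model deliberately ignores cooling overhead, networking, staffing, and depreciation schedules; a real TCO comparison would fold those in, but even this simplified form shows why throughput and power efficiency dominate the per-token cost at scale.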

What to watch over the next year:
– The pace of ramp from pilot to mass production for both vendors
– Adoption by major cloud providers and enterprise customers in priority sectors
– Benchmarks and case studies that demonstrate training and inference efficiency
– Progress in software stacks, developer tooling, and model optimization
– Supply chain stability, including packaging, memory, and networking components
– Energy efficiency gains that enable larger, greener AI deployments

The bottom line: If the current projections hold, 2025 will mark a pivotal year for China’s AI semiconductor ambitions. With Huawei and Cambricon together targeting as many as 800,000 units, domestic AI compute capacity is set for a substantial lift. That momentum could reshape purchasing patterns, accelerate local AI innovation, and intensify competition in the global market for data center accelerators.