China’s leading homegrown chipmaker, Hygon, has revealed an ambitious roadmap aimed squarely at strengthening the domestic computing and AI ecosystem. The company says it is developing six new chips for China’s tech market, headlined by its next-generation C86 CPU, a general-purpose processor designed to push server and enterprise performance much closer to the world’s top platforms.
Hygon has been steadily expanding its footprint in China with earlier C86 processors that attracted strong interest from local customers. Now it’s preparing the jump from today’s C86-4G series (used across mainstream and enterprise deployments) to a more aggressive C86-5G generation that targets modern data center workloads and high-concurrency business needs.
The big story is the CPU core itself. Hygon says the next-gen C86 will introduce a new microarchitecture with an IPC (instructions per cycle) increase of more than 15%; the company has previously cited a 17% uplift for the same family. That kind of per-core efficiency improvement is especially important for servers, where performance-per-watt, licensing costs, and density matter as much as raw peak numbers.
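To see why IPC matters, note that per-core throughput is roughly IPC times clock speed, so a 17% IPC gain at the same frequency translates directly into 17% more work per core. A quick back-of-the-envelope sketch (the IPC and clock values below are illustrative assumptions, not Hygon specifications):

```python
# Per-core throughput scales with IPC at a fixed clock.
# All concrete numbers here are hypothetical, for illustration only.

def per_core_perf(ipc: float, clock_ghz: float) -> float:
    """Billions of instructions per second = IPC x clock (GHz)."""
    return ipc * clock_ghz

baseline = per_core_perf(ipc=2.0, clock_ghz=3.0)         # hypothetical current-gen core
uplifted = per_core_perf(ipc=2.0 * 1.17, clock_ghz=3.0)  # +17% IPC, same clock

print(f"baseline: {baseline:.2f} GIPS")
print(f"next-gen: {uplifted:.2f} GIPS ({uplifted / baseline - 1:.0%} faster)")
```

The point of the sketch is that an IPC gain needs no extra frequency (and thus little extra power) to deliver its speedup, which is why it is the headline metric for server parts.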
Another key upgrade is SMT4 support. Instead of the common SMT2 approach, where each CPU core runs two hardware threads, SMT4 allows four concurrent threads per core. In practical terms, that can boost throughput in highly parallel workloads such as virtualization, cloud services, databases, and other multi-user enterprise applications where thread-level concurrency is critical.
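The practical effect is on the logical CPU count the operating system sees: hardware threads equal physical cores times the SMT width. A minimal sketch, assuming a hypothetical 64-core part (Hygon has not disclosed core counts here):

```python
# Logical CPUs exposed to the OS = physical cores x SMT width.
# The 64-core figure is a hypothetical example, not a disclosed Hygon spec.

def hardware_threads(cores: int, smt_width: int) -> int:
    return cores * smt_width

cores = 64
print(f"SMT2: {hardware_threads(cores, 2)} logical CPUs")  # 128
print(f"SMT4: {hardware_threads(cores, 4)} logical CPUs")  # 256
```

Note that SMT improves aggregate throughput by keeping execution units busy, not single-thread speed; the four threads on a core share its caches and pipelines.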
Instruction set support is also getting a meaningful update. The C86-5G plan includes AVX-512 compatibility, which strengthens vector compute performance for workloads such as scientific computing, analytics, media processing, and various AI-adjacent tasks. On top of that, Hygon is adding new AI acceleration instructions with INT8 and BF16 support, formats widely used to speed up inference and training while keeping efficiency high.
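BF16 is worth a quick illustration: it keeps FP32's 8-bit exponent (so it covers the same numeric range) but truncates the mantissa to 7 bits, which is why converting a value amounts to dropping the low 16 bits of its FP32 encoding. The sketch below demonstrates the format itself; it is generic Python and has nothing to do with Hygon's actual instructions:

```python
# BF16 = FP32 with the low 16 mantissa bits dropped (same exponent range,
# ~2-3 significant decimal digits). Generic format demo, not Hygon-specific.
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Convert an FP32 value to its 16-bit BF16 encoding (round-to-nearest-even)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    rounding = 0x7FFF + ((bits >> 16) & 1)  # round to nearest, ties to even
    return (bits + rounding) >> 16

def bf16_bits_to_fp32(b: int) -> float:
    """Expand a BF16 encoding back to FP32 by zero-filling the low 16 bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

x = 3.14159
print(bf16_bits_to_fp32(fp32_to_bf16_bits(x)))  # 3.140625: pi with reduced precision
```

The trade-off shown here (full FP32 range, much coarser precision) is exactly why BF16 works well for training and inference, where range matters more than the last few mantissa bits.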
On performance positioning, Hygon claims its next-gen C86 processors are intended to rival Intel’s latest Xeon 6 series. If execution matches the goal, it signals a major step forward for China’s domestically targeted server CPU competitiveness, particularly in organizations aiming to scale modern data centers with locally sourced hardware.
But the CPU is only one piece of a broader play. Hygon is also working on a dedicated GPU accelerator called DCU (designed for AI training and compute). According to the disclosed details, DCU will use a full-precision GPGPU architecture, support multi-precision computing (including FP64, FP16, and BF16), and pair with high-speed HBM memory. The company also highlights an ultra-high-speed inter-chip interconnect, a crucial ingredient for scaling performance across multiple accelerators in AI training environments. Hygon positions this accelerator as comparable to NVIDIA’s A100-class capabilities, underscoring its focus on serious data center AI workloads rather than entry-level acceleration.
The remaining four chips aim to build the infrastructure around CPUs and accelerators—exactly what’s needed for high-performance computing clusters and AI data centers:
One is a PCIe 5.0 switch built for high-speed I/O expansion, featuring 104 lanes and targeting the same class of high-end PCIe switching solutions widely used in servers and enterprise platforms.
Another is a scale-up interconnect switch intended for ultra-high-speed communication across multiple GPUs and CPUs, positioned in the same conceptual space as technologies used to link accelerators together for large-scale training.
Hygon also lists a 400G network interface chip under its ScaleFabric branding. It’s designed for 400Gb/s port speed with native RDMA support, credit-based flow control, and characteristics aimed at lossless low-latency networking. Hygon claims network card communication latency of 0.93 microseconds and support for up to 256K queue pairs, while also emphasizing plug-and-play deployment and a pathway to 800G.
Finally, the ScaleFabric 400/800G switch is intended to support native RDMA at 400Gb/s and 800Gb/s. Hygon cites high switching density (including an 80 x 400Gb/s configuration), 64Tb/s switching capacity, and 260ns switching latency, along with fast failover for link failures. The company also notes a self-developed 112G SerDes IP, highlighting signal integrity and high-speed connectivity as part of its platform strategy.
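The quoted figures are internally consistent if, as is common in switch marketing, capacity is counted full-duplex (both directions of every port). A quick sanity check under that assumption:

```python
# Sanity-check of the quoted switch numbers. Assumption: the 64Tb figure
# counts full-duplex capacity, i.e. both directions of every port.
ports = 80
port_speed_gbps = 400

one_direction_tbps = ports * port_speed_gbps / 1000  # 32.0 Tb/s one way
full_duplex_tbps = 2 * one_direction_tbps            # 64.0 Tb/s both ways

print(full_duplex_tbps)  # 64.0, matching the quoted capacity
```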
Taken together, the roadmap reads like a full-stack data center plan: a new CPU architecture, a compute-focused GPU accelerator, and the switching and networking components required to connect everything at modern bandwidth and latency targets.
Hygon expects these chips to begin entering production between 2026 and 2027, meaning the next couple of years will be critical as the company moves from roadmaps to real-world deployments and measurable performance. If the stated goals land, Hygon’s C86-5G and its companion AI and networking silicon could become a cornerstone of China’s next wave of domestically focused server, AI training, and high-performance computing infrastructure.