AMD has hit a major milestone in the data center market. Thanks largely to the continued rise of its EPYC server processors, AMD’s data center revenue climbed to a new peak and, for the first time, moved ahead of Intel’s in the first quarter of the year.
This shift comes as “agentic AI” and large-scale AI inference workloads reshape what data centers need most. For years, GPUs were viewed as the primary engine of AI computing. Now the balance is changing fast as companies realize that high-performance CPUs are just as critical for feeding accelerators, managing complex workloads, and keeping inference pipelines running efficiently. As a result, demand for server-class processors has surged, lifting both AMD and Intel—but AMD is currently gaining more ground.
According to a report cited by DigiTimes, AMD’s data center revenue exceeded Intel’s during Q1, marking a first for the company. AMD had already been showing stronger momentum in the data center segment since the third quarter of 2025, but this is the first time that momentum translated into a clear revenue lead for a full quarter.
One of the biggest drivers behind this shift is the changing CPU-to-GPU balance inside modern AI systems. The industry has been moving away from configurations where a small number of CPUs support many GPUs. The ratio has tightened rapidly from roughly 1 CPU for every 8 GPUs, to 1:4, and now it’s trending toward 1:1 in some deployments. In practical terms, the more GPUs companies install to meet AI demand, the more CPU capacity they also need to avoid bottlenecks and keep those accelerators fully utilized.
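The scaling effect of those tightening ratios can be sketched with some simple arithmetic. The ratios (1:8, 1:4, 1:1) come from the report above; the cluster size in this example is hypothetical, chosen only to illustrate how CPU demand multiplies as the ratio tightens:

```python
import math

def cpus_needed(gpu_count: int, gpus_per_cpu: int) -> int:
    """Minimum number of CPUs to host gpu_count GPUs at a given GPU-per-CPU ratio."""
    return math.ceil(gpu_count / gpus_per_cpu)

# Hypothetical 4,096-GPU cluster at each of the cited ratios.
for ratio in (8, 4, 1):
    print(f"1 CPU : {ratio} GPUs -> {cpus_needed(4096, ratio):>5} CPUs")
```

At a fixed GPU count, moving from 1:8 to 1:1 multiplies the required CPU sockets eightfold, which is why accelerator build-outs now pull server-CPU demand up with them rather than leaving it flat.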
That CPU surge is now so intense that many organizations are looking for any way to secure supply. Some firms are requesting additional server CPU capacity, while others are exploring in-house processor designs to better match their AI infrastructure needs. However, it’s not simply a matter of choosing AMD or Intel—the biggest limiter is manufacturing and the broader supply chain.
The supply situation has become a central issue across the entire industry. Leading foundries are facing constraints, and chipmakers that depend on external manufacturing are feeling the pressure. AMD, in particular, relies heavily on TSMC to produce its advanced chips, and that dependence can limit how quickly it can satisfy demand. To reduce risk and increase available output, AMD is also reportedly looking at Samsung as an additional source of manufacturing capacity.
The takeaway is clear: demand for data center CPUs is strong enough that, with fewer supply constraints, both AMD and Intel could potentially be reporting even higher numbers than they are today. The amount of compute capacity being planned and reserved for the rest of the decade suggests the market still has significant room to expand—if production can keep pace.
At the same time, the competitive landscape isn’t limited to traditional x86 server processors. Arm-based contenders are also building momentum. NVIDIA’s upcoming Vera is expected to play an important role in the Rubin platform, pairing Vera CPUs with Rubin GPUs for large-scale AI deployments. Arm is also seeing major interest in its AGI CPU efforts, even raising revenue expectations tied to that roadmap—another sign that data centers are actively exploring alternatives to meet long-term AI compute needs.
Looking ahead, the next few years will bring fresh server platforms from both major x86 players. AMD is preparing Venice and the AI-focused Verano, while Intel is working toward its 18A-based “Diamond Rapids” and a follow-up platform known as Coral Rapids with SMT (simultaneous multithreading) enabled. With AI inference continuing to grow and CPU demand rising alongside accelerator deployments, the data center CPU battle is far from over—but AMD’s latest revenue milestone shows just how quickly the market is changing.