Arm recently pulled the curtain back on its newest AGI-focused CPU, and CEO Rene Haas didn’t mince words about the company’s ambitions. The message was clear: Arm believes it’s ready to challenge x86 in the data center and take meaningful market share as “agentic AI” becomes a bigger pillar of modern computing. Intel, however, isn’t buying the premise that Arm’s new chip represents a serious threat—at least not in the way Arm is framing it.
The server CPU landscape has shifted quickly over the past few years. Cloud providers, enterprises, and AI infrastructure builders are no longer thinking only about traditional web services and virtualization. They’re increasingly planning for agentic AI workloads—systems that coordinate tasks, tools, and multi-step workflows—which can elevate the importance of general-purpose CPU performance alongside accelerators. That shift has created fresh openings for every major CPU vendor, and it’s why Arm’s AGI CPU announcement landed with so much noise.
Arm’s pitch is that typical x86 CPUs weren’t built with agentic AI efficiency as a first-class goal. One of Arm’s executives, Mohamed Awad, positioned the AGI CPU as an answer to the constraints of running these workloads on conventional x86 server chips, arguing that legacy approaches make it harder to sustain efficiency. He also took aim at Intel’s long-running simultaneous multithreading approach—best known as Hyper-Threading—calling it outdated and implying it can introduce bottlenecks rather than relieve them.
Arm’s argument is straightforward: multithreading can increase the number of threads a core can work on, but it doesn’t magically double key resources like I/O capacity or memory bandwidth. In other words, if the system is already limited elsewhere, adding more threads may just shift the choke point.
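That bottleneck argument can be sketched with a simple roofline-style model. All of the numbers below are hypothetical, chosen only to show the shape of the effect: extra threads raise throughput while the workload is compute-bound, but once the shared memory bandwidth is saturated, more threads stop helping.

```python
# Toy roofline-style model of SMT scaling on a shared-bandwidth workload.
# All figures are illustrative assumptions, not measurements of any real CPU.

def throughput_gflops(threads, per_thread_gflops, mem_bw_gbs, bytes_per_flop):
    """Achievable throughput is capped by whichever runs out first:
    aggregate compute across threads, or the shared memory bandwidth."""
    compute_roof = threads * per_thread_gflops   # scales with thread count
    memory_roof = mem_bw_gbs / bytes_per_flop    # fixed: a shared resource
    return min(compute_roof, memory_roof)

# Hypothetical core: 30 GFLOP/s per thread, 240 GB/s memory, 4 bytes per FLOP.
for t in (1, 2, 4):
    print(t, "threads ->", throughput_gflops(t, 30.0, 240.0, 4.0), "GFLOP/s")
# 1 -> 30, 2 -> 60, 4 -> 60: going past 2 threads hits the 240/4 = 60 memory roof
```

In this toy model, the second thread doubles throughput, but the fourth buys nothing—which is exactly the "shifting the choke point" claim in prose form.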
Intel’s Data Center Group leadership pushed back hard on that framing. Intel’s EVP and GM for the Data Center Group, Kevork Kechichian, suggested the anti-SMT messaging is more of a marketing narrative than a decisive architectural advantage. His implication: Arm is criticizing SMT because its cores don’t support it, not because SMT is inherently the wrong direction for data center CPUs.
To reinforce the point that SMT still matters in modern server design, Kechichian pointed to upcoming server CPU designs in the market that continue to include SMT support—signaling that the industry still views the feature as valuable for throughput, utilization, and certain classes of workloads.
Intel also highlighted that it isn’t standing still on the “agentic AI-ready” conversation. Kechichian referenced Intel’s Clearwater Forest Xeon lineup as the closest comparison to Arm’s AGI ambitions. Clearwater Forest is positioned around extremely high core counts—reportedly reaching 288 cores per socket—aiming to deliver strong compute density for data center deployments. The tradeoff is that memory bandwidth per core may lag behind, an important consideration since AI-adjacent workloads often stress memory access patterns and data movement.
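The bandwidth-per-core tradeoff is simple arithmetic: at a fixed socket bandwidth, per-core bandwidth falls as core counts climb. The 288-core count comes from the report above; the bandwidth figure below is a hypothetical assumption used only for illustration.

```python
# Per-core memory bandwidth shrinks as core count grows at fixed socket bandwidth.
# The 288-core count is from the article; the bandwidth value is hypothetical.

def bw_per_core_gbs(socket_bw_gbs, cores):
    return socket_bw_gbs / cores

socket_bw = 600.0  # hypothetical aggregate socket bandwidth, GB/s
for cores in (64, 128, 288):
    print(f"{cores:3d} cores -> {bw_per_core_gbs(socket_bw, cores):.2f} GB/s per core")
# At 288 cores, each core gets roughly a quarter of what it would at 64 cores
```

Whether that per-core figure matters depends on the workload: throughput-oriented services may not notice, while bandwidth-hungry AI-adjacent tasks will.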
While the technical debate over SMT, core counts, and compute density will continue, Arm has another advantage that’s harder to quantify on a spec sheet: ecosystem momentum and platform influence. Arm-based server efforts have already gained visibility through deployments and partnerships in the broader industry, and Arm cores have built a strong reputation for power-efficient performance across multiple platforms. That kind of ecosystem pull can matter when hyperscalers and infrastructure buyers are deciding what to standardize on for the next generation of deployments.
For Intel, current hyperscaler demand is described as more concentrated in networking-oriented use cases, but the bigger story may be the market itself expanding. As agentic AI grows and more workloads like orchestration, coordination, and general AI infrastructure services become commonplace, the total addressable market for general-purpose CPUs could rise. If that happens, it won’t necessarily be a winner-takes-all outcome—more CPU demand could lift multiple vendors at once, including Intel, Arm-based platforms, and others competing for slices of the rapidly evolving data center stack.
In the near term, Arm is framing its AGI CPU as a purpose-built answer to where AI infrastructure is going next. Intel’s response is that the claims of clear superiority over x86 are overstated, and that the real-world data center still values features like SMT—along with dense, scalable Xeon platforms designed for high-throughput environments. The next phase of competition will likely be decided less by bold announcements and more by adoption, performance-per-watt in production, and how well each platform handles the messy reality of modern AI-driven data center workloads.