NVIDIA has officially sold off the last of its stake in Arm, closing the chapter on a relationship that once looked like it could turn into a full acquisition. According to recent SEC filings, the remaining shares NVIDIA held were valued at roughly $140 million. While the sale is being described as a financial move rather than a shift in product direction, the timing has still sparked fresh debate about what kind of CPUs will matter most as AI infrastructure evolves.
Arm’s CPU architecture has played a major role in NVIDIA’s rise as the defining force in modern AI hardware. It has supported key NVIDIA platforms built for training and inference, and Arm remains central to NVIDIA’s current roadmap, including the upcoming Vera CPU line. But at the same time, the broader AI market is changing quickly, and CPUs are becoming more important in a way that’s easy to underestimate if you only think about AI as “GPU work.”
Why CPUs are suddenly a bigger deal in AI, especially agentic AI
A major driver is inference, and more specifically agent-based AI systems in which models don’t just generate text but also perform actions. In these “agentic” workflows, the system is constantly juggling tool calls, API requests, memory lookups, retrieval steps, scheduling, and orchestration logic. That means the CPU ends up handling a large share of the work that keeps the overall pipeline moving. If the CPU can’t feed work to the accelerators quickly enough, expensive GPUs sit idle waiting, an outcome hyperscalers are eager to avoid.
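To make the CPU’s share of that work concrete, here is a minimal sketch of a single agent iteration. Every name here (`retrieve_context`, `call_model`, `run_tool`) is a hypothetical stand-in, not a real framework API; the point is simply that in each loop, only one step is accelerator-bound and everything else runs on the host CPU.

```python
# Hypothetical sketch of one agentic-inference iteration.
# Only call_model() would touch the GPU; the rest is CPU-side orchestration.

def retrieve_context(query):
    # CPU: memory lookup / retrieval step (stubbed for illustration)
    return f"context for: {query}"

def call_model(prompt):
    # GPU: the only accelerator-bound step; stubbed here with a trivial rule
    if "?" in prompt:
        return {"action": "search", "args": prompt}
    return {"action": "finish", "args": prompt}

def run_tool(action, args):
    # CPU: tool call / API request (stubbed for illustration)
    return f"result of {action}({args})"

def agent_step(task):
    """One iteration: retrieval, inference, then tool dispatch."""
    context = retrieve_context(task)               # CPU
    decision = call_model(f"{task} | {context}")   # GPU
    if decision["action"] == "finish":
        return decision["args"]
    return run_tool(decision["action"], decision["args"])  # CPU

print(agent_step("What changed in the filing?"))
```

In a real deployment the CPU-side steps also include scheduling, batching, and serialization, so the host’s share of each iteration is typically larger than this toy loop suggests.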
This shift helps explain why both Intel and AMD have been reporting strong hyperscaler demand for data center CPUs. The total addressable market for server CPUs is expanding as AI deployments scale, not only for training clusters but also for inference fleets that need tight coordination and fast responsiveness.
x86 vs Arm: why some believe x86 has an edge for agentic workloads
Analysts have pointed to a specific challenge for Arm-based CPUs in AI servers: weaker momentum in certain deployments, partially attributed to lower GPU scheduling efficiency compared with x86 platforms. The idea is that when workloads become highly latency-sensitive and fragmented into countless microtasks, responsiveness can matter as much as raw throughput.
One frequently cited advantage is single-threaded burst performance. In many agentic environments, what matters is how quickly the CPU can execute short, critical bursts of work to keep the GPU pipeline busy. When millions of micro-operations are being triggered, even small delays can create bottlenecks that ripple through the system.
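The bottleneck argument reduces to simple arithmetic. With illustrative numbers (not benchmarks): if each GPU burst must wait on a stretch of serial CPU work before the next one can launch, best-case utilization is the GPU time divided by the total time per burst.

```python
# Back-of-envelope model, an illustration rather than a benchmark:
# each GPU burst of gpu_ms is separated by host_ms of serial CPU work.

def gpu_utilization(gpu_ms: float, host_ms: float) -> float:
    """Best-case GPU utilization when host work serializes launches."""
    return gpu_ms / (gpu_ms + host_ms)

# 2 ms GPU bursts with 0.5 ms of host-side orchestration per burst:
print(f"{gpu_utilization(2.0, 0.5):.0%}")   # 80%
# Halving host latency (faster single-threaded bursts) recovers most of it:
print(f"{gpu_utilization(2.0, 0.25):.0%}")  # 89%
```

This is why single-threaded responsiveness, rather than core count, is the metric cited for orchestration-heavy inference: host latency sits directly in the denominator.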
There’s also a less glamorous but equally powerful factor: enterprise inertia. Many data centers are deeply standardized around x86, including firmware, virtualization, management tooling, and years of software that has been built, validated, and optimized for x86 servers. For hyperscalers in the middle of a major upgrade cycle, staying within a familiar x86 ecosystem can reduce friction, speed deployment, and simplify operations at scale.
NVIDIA’s x86 exploration and what it could mean
Even with Arm still positioned as important to NVIDIA’s CPU roadmap, NVIDIA is reportedly exploring x86 options as well. Its recent work with Intel points toward enabling x86-class CPU solutions inside NVIDIA’s NVLink-connected server rack designs—an approach that could make a lot of sense if customers want NVIDIA’s accelerated infrastructure but prefer x86 compatibility in the host CPU layer.
That doesn’t mean NVIDIA is walking away from Arm. The Vera CPUs are described as fully Arm-based today. But if demand continues to grow for x86-centric AI server designs—especially for agentic inference and orchestration-heavy deployments—NVIDIA could eventually broaden its CPU strategy in future generations beyond Vera.
What the Arm stake sale likely signals
Despite the chatter, the stake sale itself is currently framed as a straightforward financial decision rather than a signal that NVIDIA is downgrading Arm’s importance. NVIDIA can continue to partner closely with Arm while holding no equity position at all. Still, with AI server priorities shifting toward orchestration-heavy inference, the market is paying close attention to whether x86 or Arm will dominate the CPU side of next-generation AI platforms.
If the industry continues moving toward agentic AI at scale, the winning CPU strategy may be less about ideology and more about practical performance, scheduling efficiency, software compatibility, and the ability to keep GPUs working at full utilization.