Intel Xeon 6900P "Granite Rapids P-Core" Launched: Scaling To 128 Cores, Up To 2.1x In HPC & 5.5x In AI Versus AMD EPYC, Much Faster Than 128 Core Turin "Zen 5" In AI

Scaling to 128 Cores: Dominating HPC and AI with Up to 5.5x Performance Over AMD’s EPYC

Intel is unleashing a powerhouse with its latest Xeon 6900P “Granite Rapids” CPUs, packing a punch with up to 128 high-performance cores. These next-gen processors promise to rival AMD’s EPYC in the high-performance computing (HPC) and artificial intelligence (AI) sectors.

For a while, Intel’s Xeon line has trailed behind AMD’s EPYC, which has led the market in both performance and efficiency. But with the Xeon 6900P, Intel aims to reclaim its dominance. This new lineup breaks fresh ground with advanced core technologies and features that bring it toe-to-toe with AMD’s finest offerings.

Earlier this year, Intel introduced the Xeon 6700E “Sierra Forest,” boasting up to 144 cores, and the Xeon 6900E series will push this further to 288 cores by early 2025. The Xeon 6900P lineup, built on performance cores, is set to tackle AMD’s upcoming EPYC Turin while delivering significant gains over both Intel’s previous-gen Emerald Rapids and AMD’s Genoa.

Key features of the Xeon 6900P series include:

– Support for DDR5 at up to 6400 MT/s
– Support for MRDIMM at up to 8800 MT/s
– Up to 128 performance cores
– Six UPI 2.0 links, reaching speeds of up to 24 GT/s
– Up to 96 PCIe 5.0/CXL 2.0 lanes
– Up to 504 MB L3 cache
– Intel Advanced Matrix Extensions (AMX) with FP16 support

These CPUs are built using a chiplet-heavy design, incorporating up to five chiplets for “Granite Rapids” P-Core CPUs. Utilizing the “Intel 3” process node, the compute dies feature Redwood Cove P-Cores and integrated memory controllers, while the I/O die leverages the “Intel 7” process node, offering various I/O controllers and accelerator engines.

Xeon SKUs are set to launch in various configurations such as:

– Xeon 6900P: Up to 128 cores
– Xeon 6700P: Up to 86 cores
– Xeon 6500P: Up to 48 cores
– Xeon 6300P: Up to 16 cores
– Xeon 6900E: Up to 288 cores
– Xeon 6700E: Up to 144 cores

The modular compute die architecture provides several advantages:

– Direct socket-agent access through a monolithic mesh
– Flexible routing and modularity for die-specific row and column definitions
– Partitioning of cache into sub-NUMA clusters, shared among all cores
– Efficient distribution of I/O traffic across columns to reduce congestion
– Hierarchical global infrastructure
– High-speed fabric extension across dies using EMIB technology
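
The software side of sub-NUMA clustering is concrete: with SNC enabled, each compute die shows up to the OS as its own NUMA node, and pinning a worker to one node's CPUs keeps its cache and memory traffic local to that die's mesh. A minimal Linux sketch of that pinning, assuming a hypothetical CPU range for the first node (the real ranges are discoverable under /sys/devices/system/node/):

```python
import os

def pin_to_cpus(cpus):
    """Pin the calling process to the given set of CPU IDs.

    With sub-NUMA clustering enabled, each compute die appears as a
    separate NUMA node; restricting a worker to one node's CPU range
    keeps its cache and memory accesses on the local mesh, which is
    the behavior the SNC partitioning above is designed to exploit.
    """
    os.sched_setaffinity(0, cpus)      # pid 0 = the current process
    return os.sched_getaffinity(0)

# Hypothetical example: suppose CPUs 0-31 belong to the first
# sub-NUMA node on this system.
first_node_cpus = set(range(32))
```

Memory placement would be handled the same way (e.g. via `numactl --membind` or libnuma); affinity alone only covers the compute side.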

Intel’s Xeon 6900 “Sierra Forest” and “Granite Rapids” processors will be supported by the LGA 7529 socket platform, also known as Birch Stream, and the Avenue City reference platform. This platform accommodates 1S/2S configurations with CPUs featuring up to a 500W TDP, 12 memory channels, 96 PCIe Gen 5.0/CXL 2.0 lanes, and six UPI 2.0 links, delivering speeds up to 24 GT/s.

High-end configurations include:

– Intel Xeon 6900P with up to 128 cores
– Intel Xeon 6900E with up to 288 cores

These CPUs also introduce support for Multiplexed Rank DIMMs (MRDIMMs) at speeds up to 8800 MT/s, lifting performance by up to 32% in HPC workloads and up to 33% in AI tasks, for an average uplift of around 21% across workloads.
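
As a back-of-envelope check (my arithmetic, not Intel's), the raw bandwidth headroom MRDIMMs add follows directly from the transfer rates: each DDR5 channel moves 8 bytes per transfer, and the platform has 12 channels:

```python
def peak_bandwidth_gbs(channels: int, mt_per_s: int, bytes_per_transfer: int = 8) -> float:
    """Theoretical peak memory bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return channels * mt_per_s * 1e6 * bytes_per_transfer / 1e9

ddr5   = peak_bandwidth_gbs(12, 6400)   # 614.4 GB/s with standard DDR5-6400
mrdimm = peak_bandwidth_gbs(12, 8800)   # 844.8 GB/s with MRDIMM-8800
uplift = mrdimm / ddr5 - 1              # 0.375 -> 37.5% more raw bandwidth
```

The quoted 32-33% workload gains landing just under the 37.5% raw-bandwidth headroom is what you would expect for bandwidth-bound HPC and AI kernels.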

Intel is also partnering with NVIDIA on AI system integrations and is positioning its own Gaudi 3 accelerator in a range of solutions. For these AI systems, Xeon 6 processors will scale from 64 up to 72 cores, with customizable frequencies for higher performance on demand.

In terms of performance, Intel claims significant gains over AMD’s EPYC, citing up to 5.5 times higher AI inferencing performance and 2.1 times higher HPC performance. Across various workloads including General Compute, Data & Web Services, HPC, and AI, the Xeon 6900P demonstrates an average performance improvement of 2.28 times and a 60% boost in efficiency compared to 5th Gen Emerald Rapids.

When comparing against AMD’s EPYC, Intel boasts:

– 2x cores per socket
– 1.2x higher average performance per core
– 1.6x higher average performance per watt
– 30% lower average total cost of ownership (TCO) for similar performance levels
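
The first two figures compound into a per-socket claim that Intel leaves implicit, so it is worth making the arithmetic explicit (these multipliers are Intel's own averages, and real results will vary by workload):

```python
core_count_ratio = 2.0   # 2x cores per socket vs. the compared EPYC
perf_per_core    = 1.2   # 1.2x average performance per core (Intel's figure)

# Taken at face value, the two ratios multiply into the per-socket gap:
perf_per_socket  = core_count_ratio * perf_per_core   # 2.0 * 1.2 = 2.4x
```

That implied ~2.4x per-socket throughput is also what drives the TCO claim: fewer sockets for the same aggregate performance.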

Intel also indicates that, under the right circumstances, its Emerald Rapids chips can outperform comparable chips from AMD, and the Granite Rapids versus Turin comparison shows even more pronounced performance advantages.

With these advancements, Intel aims to make a strong comeback in the server CPU segment, challenging AMD and setting new standards in both performance and efficiency.

Intel has just raised the stakes in the high-performance computing arena with its latest Xeon 6900P “Granite Rapids” processors, which are geared up to take on AMD’s heavyweights. When pitted against AMD’s top-tier EPYC Genoa 96-core and even the formidable Bergamo 128-core CPUs, Intel’s Xeon 6980P showcases some eye-popping performance gains across a wide array of workloads.

In the realm of Vector Databases, Intel’s advanced AMX instructions propel the Xeon ahead, delivering up to a 2.71x performance boost. And that’s not all—Scalable Vector Search (SVS) technology yields a staggering 7.34x gain, highlighting the significant leaps Intel has made.
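
Those AMX gains, including the FP16 extension these chips add, are visible to software through CPU feature flags, which is how frameworks gate their fast paths. A minimal detection sketch (the flag names are the ones the Linux kernel publishes in /proc/cpuinfo; the helper functions themselves are hypothetical):

```python
def supports_amx_fp16(flags: str) -> bool:
    """Return True if a /proc/cpuinfo 'flags' line advertises AMX FP16.

    Linux exposes AMX via the amx_tile, amx_int8, and amx_bf16 flags,
    with amx_fp16 added for CPUs such as Granite Rapids.
    """
    feats = set(flags.split())
    return "amx_tile" in feats and "amx_fp16" in feats

def read_cpu_flags(path: str = "/proc/cpuinfo") -> str:
    """Grab the first 'flags' line from cpuinfo (Linux only)."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return line.split(":", 1)[1]
    return ""
```

A vector-search library, for instance, could call `supports_amx_fp16(read_cpu_flags())` once at startup and fall back to an AVX-512 path on older hardware.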

But the comparison doesn’t stop there. Intel has put its flagship Xeon 6980P against AMD’s EPYC Genoa and Bergamo CPUs across various general computing and data center tasks. The results? An impressive 3.25x performance boost, solidifying Intel’s position as a powerhouse in processing performance.

Intel’s Xeon 6900P lineup is introduced with five potent SKUs:
– **Xeon 6980P:** Leading the charge with 128 P-Cores and 256 threads, this chip runs a base clock of 2.0 GHz, an all-core boost of 3.2 GHz, and a single-core boost up to 3.9 GHz. A hefty 504 MB of L3 cache and a 500W TDP make it a titan in the CPU world.
– **Xeon 6979P:** Right on its heels with 120 cores, the same boost clock at 3.9 GHz, and matching memory support.
– **Xeon 6972P:** Offering 96 cores, with slightly scaled-down speeds, yet still pushing the boundaries of performance.
– **Xeon 6952P:** This 96-core variant, with a 400W TDP and 480 MB of L3 cache, positions itself as a highly efficient option.
– **Xeon 6960P:** The entry point of the lineup boasts 72 cores, with the highest base clock at 2.7 GHz and an all-core boost hitting 3.8 GHz, making it a robust choice for less demanding high-performance tasks.

The Xeon 6900P series provides impressive configuration options, all supporting 12-channel DDR5-6400 memory (up to 8800 MT/s with MRDIMMs) and extensive PCIe lanes, ensuring remarkable scalability for data centers.

Here’s a brief summary of the specs:

– Xeon 6980P: 128 Cores / 256 Threads, 2.0 / 3.9 GHz, 504 MB L3 Cache, 500 W TDP
– Xeon 6979P: 120 Cores / 240 Threads, 2.1 / 3.9 GHz, 504 MB L3 Cache, 500 W TDP
– Xeon 6972P: 96 Cores / 192 Threads, 2.4 / 3.9 GHz, 480 MB L3 Cache, 500 W TDP
– Xeon 6952P: 96 Cores / 192 Threads, 2.1 / 3.9 GHz, 480 MB L3 Cache, 400 W TDP
– Xeon 6960P: 72 Cores / 144 Threads, 2.7 / 3.9 GHz, 432 MB L3 Cache, 500 W TDP

The new Intel Xeon 6900P series marks a significant milestone for the company, promising a level of performance that aligns well with the escalating demands of modern data centers. As AMD prepares to roll out its next-gen Turin CPUs, we are set for an exciting clash of the titans in the server market, with both giants vying for supremacy. Stay tuned for real-world benchmarks that will reveal how this battle plays out!