Fresh clues from recent Linux kernel patches are shining a brighter light on Intel’s next-generation Diamond Rapids Xeon CPUs, and they point to a major shift in how the company is building its upcoming data center processors. The new information suggests Intel is doubling down on a modular, tile-based design that separates compute from key I/O and memory functions, a move aimed at scaling performance and improving platform flexibility for future server workloads.
At the heart of Diamond Rapids are two newly referenced tiles with distinct roles: CBB and IMH.
The CBB, short for “Core Building Block,” is described as the compute tile. This is where the main CPU cores live and where the raw processing horsepower comes from. What’s especially notable is the break from the prior approach: instead of keeping the integrated memory controller on the same tile as the compute cores, Diamond Rapids is expected to split it out.
That’s where the second tile comes in. The IMH, or “Integrated I/O & Memory Hub,” is the tile expected to handle the integrated memory controller (IMC) along with I/O responsibilities. According to the patch details, Diamond Rapids may include up to two IMH dies, separate from the CBB compute dies. There’s also a suggestion that the IMH could sit on a base tile, echoing other recent multi-tile designs Intel has used elsewhere.
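To make the split concrete, here’s a minimal sketch in C of how a topology enumerator might model such a package. Everything here, from the type names to the die counts in the example, is an illustrative assumption based on the patch descriptions, not actual kernel code:

```c
#include <stdio.h>

/* Illustrative model of the tile layout described in the patches:
 * compute on CBB dies, memory controller + I/O on up to two IMH dies.
 * All names and counts here are assumptions, not kernel definitions. */
enum dmr_die_type { DMR_DIE_CBB, DMR_DIE_IMH };

struct dmr_die {
    enum dmr_die_type type;
    int id;
};

int main(void)
{
    /* Hypothetical package: two CBB compute dies plus two IMH dies
     * (the patches suggest up to two IMH dies per package). */
    struct dmr_die package[] = {
        { DMR_DIE_CBB, 0 }, { DMR_DIE_CBB, 1 },
        { DMR_DIE_IMH, 0 }, { DMR_DIE_IMH, 1 },
    };

    for (size_t i = 0; i < sizeof(package) / sizeof(package[0]); i++)
        printf("die %d: %s\n", package[i].id,
               package[i].type == DMR_DIE_CBB
                       ? "CBB (compute)"
                       : "IMH (memory + I/O)");
    return 0;
}
```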
The patches also indicate Diamond Rapids continues relying on discovery tables for uncore enumeration, echoing the general idea used in prior Xeon generations, but with some meaningful updates. Each CBB die and each IMH die reportedly has its own dedicated discovery table, which hints at a more granular, scalable approach as Intel increases core counts and expands platform complexity.
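Discovery tables are self-describing structures the uncore driver walks to find PMON units instead of hard-coding them per SKU (the real definitions live in arch/x86/events/intel/uncore_discovery.h in the kernel tree). The simplified sketch below shows what a per-die walk could look like; the structure layout and the sample values are assumptions for illustration only:

```c
#include <stdint.h>
#include <stdio.h>

/* Simplified, illustrative shape of a discovery-table entry; the real
 * layout lives in arch/x86/events/intel/uncore_discovery.h. */
struct pmon_unit_entry {
    uint8_t  unit_type;     /* e.g. an IMC unit, or a new type like SCA */
    uint8_t  num_counters;
    uint64_t ctrl_base;     /* base address of the unit's control regs */
};

/* Per the patches, every CBB die and every IMH die carries its own
 * dedicated table, so enumeration happens die by die. */
struct die_discovery_table {
    int die_id;
    int is_imh;
    int num_units;
    const struct pmon_unit_entry *units;
};

static void enumerate_die(const struct die_discovery_table *t)
{
    for (int i = 0; i < t->num_units; i++)
        printf("die %d (%s): unit type %u, %u counters @ 0x%llx\n",
               t->die_id, t->is_imh ? "IMH" : "CBB",
               (unsigned)t->units[i].unit_type,
               (unsigned)t->units[i].num_counters,
               (unsigned long long)t->units[i].ctrl_base);
}

int main(void)
{
    /* Sample data only: one made-up unit on a single IMH die. */
    static const struct pmon_unit_entry imh_units[] = {
        { 1, 4, 0x1000 },
    };
    struct die_discovery_table imh0 = { 0, 1, 1, imh_units };

    enumerate_die(&imh0);
    return 0;
}
```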
There are also changes in how performance monitoring discovery is handled. Instead of using only one method to retrieve the global discovery portal, Diamond Rapids appears to use PCI for IMH PMON discovery and MSR for CBB PMON discovery. In addition, several new PMON types are introduced, including SCA, HAMVF, D2D_ULA, UBR, PCIE4, CRS, CPC, ITC, OTC, CMS, and PCIE6. Another difference called out is that IIO free-running counters are MMIO-based in Diamond Rapids rather than the MSR-based approach used in earlier generations.
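The two-path discovery can be illustrated with a small user-space sketch using Linux’s standard MSR and PCI config-space interfaces. The MSR address and PCI device below are placeholders (hence the HYPOTHETICAL_ prefixes); the actual Diamond Rapids portal locations come from the patches and aren’t reproduced here:

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Placeholder addresses -- the real Diamond Rapids discovery portals
 * are defined by the patches and are not reproduced here. */
#define HYPOTHETICAL_CBB_DISCOVERY_MSR 0x700
static const char *HYPOTHETICAL_IMH_PCI_CFG =
    "/sys/bus/pci/devices/0000:00:00.0/config";

/* CBB dies: discovery portal reached over MSR (needs the msr driver
 * loaded and root privileges). */
static int read_cbb_portal(int cpu, uint64_t *val)
{
    char path[64];
    snprintf(path, sizeof(path), "/dev/cpu/%d/msr", cpu);
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t n = pread(fd, val, sizeof(*val), HYPOTHETICAL_CBB_DISCOVERY_MSR);
    close(fd);
    return n == sizeof(*val) ? 0 : -1;
}

/* IMH dies: discovery portal reached through PCI config space. */
static int read_imh_portal(uint32_t offset, uint32_t *val)
{
    int fd = open(HYPOTHETICAL_IMH_PCI_CFG, O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t n = pread(fd, val, sizeof(*val), offset);
    close(fd);
    return n == sizeof(*val) ? 0 : -1;
}

int main(void)
{
    uint64_t msr_val;
    uint32_t pci_val;

    if (read_cbb_portal(0, &msr_val) == 0)
        printf("CBB portal (MSR): 0x%llx\n", (unsigned long long)msr_val);
    if (read_imh_portal(0x0, &pci_val) == 0)
        printf("IMH portal (PCI cfg): 0x%x\n", pci_val);
    return 0;
}
```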
For data center buyers and platform watchers, one of the biggest forward-looking signals is PCIe Gen6 support. With PCIe Gen6 positioned as the next major interconnect leap for servers, storage, and accelerators, its appearance in these Diamond Rapids details aligns with expectations that next-gen CPU platforms will prioritize bandwidth and connectivity—especially as AI and high-performance computing continue to drive demand for faster CPU-to-GPU, CPU-to-NIC, and CPU-to-storage communication.
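The bandwidth math behind that leap is simple: per-direction link bandwidth is roughly the transfer rate times the lane count divided by eight, so a Gen6 x16 link at 64 GT/s moves about 128 GB/s each way, double Gen5, before protocol overhead. A quick back-of-the-envelope check:

```c
#include <stdio.h>

/* Per-direction raw link bandwidth: GT/s x lanes / 8 bits. This ignores
 * protocol overhead (128b/130b encoding on Gen5, FLIT/FEC on Gen6), so
 * real-world throughput lands a few percent lower. */
static double raw_gbps(double gt_per_s, int lanes)
{
    return gt_per_s * lanes / 8.0;
}

int main(void)
{
    int lanes = 16;
    printf("PCIe Gen5 x%d: %.0f GB/s per direction\n",
           lanes, raw_gbps(32.0, lanes));
    printf("PCIe Gen6 x%d: %.0f GB/s per direction\n",
           lanes, raw_gbps(64.0, lanes));
    return 0;
}
```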
Outside of the tile architecture and platform capabilities, prior reports and rumors around Diamond Rapids are already setting expectations high. Intel’s Diamond Rapids Xeon lineup is widely expected to scale up to 192 cores, with some speculation reaching as high as 256 cores, though nothing has been confirmed on that upper limit. The chips are expected to use Intel’s 18A process node and feature Panther Cove P-cores as the core architecture.
Early platform chatter also points to extremely high power ceilings—up to around 650W TDP—on an LGA 9324 platform, with multi-socket configurations expected for enterprise and hyperscale deployments. If these details hold, Diamond Rapids is shaping up to be a heavyweight server CPU aimed squarely at dense compute, large memory footprints, and next-generation I/O requirements.
As for timing, Intel is expected to introduce Diamond Rapids sometime around mid-2026 or the second half of 2026. If the kernel patch discoveries are any indication, the architectural groundwork is already taking shape, and the move toward clearly separated compute and I/O plus memory tiles could be one of the defining design choices that helps Intel scale Xeon further in the years ahead.