Rambus has unveiled details about its groundbreaking HBM4 Memory Controller, promising substantial advancements over the current HBM3 and HBM3E technologies. This new development is set to propel the AI and Data Center industries into a new era, delivering significantly faster memory speeds and higher capacities within each stack.
As JEDEC inches closer to finalizing the HBM4 memory specifications, we’re gaining insight into the capabilities of this next-gen solution. Primarily targeting AI and Data Center applications, HBM4 is designed to enhance the existing HBM architecture in remarkable ways.
Rambus has announced that its HBM4 Memory Controller will deliver speeds exceeding 6.4 Gb/s per pin, outstripping the first HBM3 generation. Because HBM4 doubles the interface width to 2048 bits per stack, it offers greater bandwidth than HBM3E solutions even at that starting speed, while retaining support for 16-Hi stacks and a 64 GB maximum capacity. The initial bandwidth rating for HBM4 is an impressive 1,638 GB/s, which is 33% higher than HBM3E and double the bandwidth of HBM3.
Currently, HBM3E solutions operate at maximum speeds of 9.6 Gb/s with bandwidths reaching up to 1.229 TB/s per stack. In comparison, HBM4 is poised to reach speeds of up to 10 Gb/s and bandwidths of up to 2.56 TB/s per HBM interface, more than double the per-stack bandwidth of HBM3E. However, the full potential of HBM4 will unfold gradually as production yields improve. Notable features of HBM4 include ECC (error correction), RMW (Read-Modify-Write), and Error Scrubbing, among others.
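The headline figures above fall straight out of per-pin speed and interface width. A minimal sketch of the arithmetic, assuming the commonly cited 1024-bit interface for HBM3E and the doubled 2048-bit interface for HBM4:

```python
def peak_bandwidth_gbps(pin_speed_gbps: float, interface_bits: int) -> float:
    """Peak per-stack bandwidth in GB/s: per-pin data rate (Gb/s)
    times interface width (bits), divided by 8 bits per byte."""
    return pin_speed_gbps * interface_bits / 8

# HBM3E: 9.6 Gb/s pins on a 1024-bit interface
print(peak_bandwidth_gbps(9.6, 1024))   # 1228.8 GB/s (~1.23 TB/s per stack)

# HBM4 at its initial 6.4 Gb/s, but on a 2048-bit interface
print(peak_bandwidth_gbps(6.4, 2048))   # 1638.4 GB/s, 33% above HBM3E

# HBM4 at its projected 10 Gb/s ceiling
print(peak_bandwidth_gbps(10, 2048))    # 2560.0 GB/s (2.56 TB/s)
```

This also shows why HBM4 can beat 9.6 Gb/s HBM3E while starting from a slower 6.4 Gb/s pin speed: the wider interface does the heavy lifting.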
In terms of production, SK Hynix has reportedly started mass-producing its 12-layer HBM3E memory, offering capacities of up to 36 GB at 9.6 Gb/s, and is also expected to finalize HBM4 this month. Meanwhile, Samsung plans to begin mass production of HBM4 memory by late 2025, with tape-out anticipated this quarter.
On the application front, NVIDIA’s Rubin GPUs, set for a 2026 release, will be among the first AI platforms to harness the power of HBM4 memory. AMD’s Instinct MI400 is also expected to adopt this next-gen design, although official confirmation is still pending.
Here’s a quick rundown comparing the different HBM memory specifications over the years:
– HBM1: 128 GB/s bandwidth, 4 GB max capacity, 4 DRAM ICs per stack
– HBM2: 256 GB/s bandwidth, 8 GB max capacity, 4-8 DRAM ICs per stack
– HBM2e: 460.8 GB/s bandwidth, 16 GB max capacity, 4-8 DRAM ICs per stack
– HBM3: 819.2 GB/s bandwidth, 24 GB max capacity, 8-16 DRAM ICs per stack
– HBM3E: Up to 1.23 TB/s bandwidth, 24-36 GB max capacity, 8-16 DRAM ICs per stack
– HBM4: Expected to offer 1.64 – 2.56 TB/s bandwidth, 36-64 GB max capacity, 8-16 DRAM ICs per stack
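The max-capacity column is simply die density times stack height. A quick sanity check of the top-end entries, assuming die densities based on shipping parts (24 Gb dies for 12-Hi HBM3E, 32 Gb dies for 16-Hi HBM4; these densities are assumptions, not from the spec list above):

```python
def stack_capacity_gb(die_density_gbit: int, stack_height: int) -> float:
    """Stack capacity in GB: per-die density (Gbit) times the number
    of stacked DRAM dies, divided by 8 bits per byte."""
    return die_density_gbit * stack_height / 8

print(stack_capacity_gb(24, 12))  # 36.0 GB, matching 12-Hi HBM3E
print(stack_capacity_gb(32, 16))  # 64.0 GB, the 16-Hi HBM4 ceiling
```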
With these cutting-edge advancements, HBM4 is poised to significantly elevate the performance and capabilities of AI and Data Center technologies, setting new benchmarks for speed and capacity in the industry.