A close-up of a Samsung semiconductor chip featuring a grid of golden connectors.

Samsung’s HBM4 Lands in NVIDIA’s Vera Rubin GPUs, Claiming the Crown as the Fastest AI Memory Yet

Samsung is making a serious push back into the high-bandwidth memory (HBM) spotlight, and its latest update suggests the comeback is already underway. The company says its next-generation HBM4 memory is now commercially deployed, signaling that Samsung isn’t just showcasing prototypes: it’s shipping technology designed for the world’s most demanding AI and data center workloads.

At the heart of Samsung’s new HBM4 is a sixth-generation 1c DRAM process paired with a 4nm logic die, with both components sourced internally. That end-to-end control over key manufacturing steps matters in a market where performance, yield, and supply stability can decide who wins major AI accelerator contracts.

What’s drawing the most attention is speed. Samsung rates its HBM4 at an 11.7 Gbps pin speed, a sizable leap that the company positions as a major step up from the 8 Gbps level associated with earlier solutions. Samsung also says the memory can reach up to 13 Gbps when overclocked, an important detail given that next-generation AI platforms are hungry for more bandwidth to keep massive GPUs fully fed with data.
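To put those pin speeds in perspective, a quick back-of-envelope calculation shows the per-stack bandwidth they imply. This sketch assumes a 2048-bit interface per stack, in line with HBM4’s direction of doubling HBM3’s 1024-bit bus; the bus width is an assumption here, not a figure from Samsung’s announcement.

```python
# Rough per-stack bandwidth estimate for HBM4.
# Assumption: 2048-bit interface per stack (HBM4 doubles HBM3's
# 1024-bit bus); not a figure from Samsung's announcement.
def stack_bandwidth_gbs(pin_speed_gbps: float, bus_width_bits: int = 2048) -> float:
    """Peak bandwidth in GB/s for one HBM stack."""
    return pin_speed_gbps * bus_width_bits / 8

print(stack_bandwidth_gbs(11.7))  # rated pin speed -> ~2995 GB/s, i.e. ~3 TB/s
print(stack_bandwidth_gbs(13.0))  # overclocked     -> ~3328 GB/s
```

Under that assumed bus width, the jump from 8 Gbps to 11.7 Gbps alone adds roughly 1 TB/s of peak bandwidth per stack, which is why the pin-speed figure is getting so much attention.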

Samsung’s current shipped HBM4 solutions reportedly use a 12-layer stack, and the company has confirmed a 16-layer version is in development. Moving to 16 layers is significant because it can push per-stack capacity up to 48GB, strengthening the case for HBM4 in training and inference environments where model sizes and context windows keep expanding.
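The capacity math behind those stack heights is straightforward. The sketch below assumes 24Gb (3GB) DRAM dies per layer; the die density is inferred from the quoted 48GB figure for a 16-layer stack, not something Samsung has disclosed here.

```python
# Back-of-envelope HBM4 stack capacity.
# Assumption: 24Gb (3GB) DRAM die per layer, inferred from the
# quoted 48GB / 16-layer figure, not a Samsung disclosure.
def stack_capacity_gb(layers: int, die_gb: int = 3) -> int:
    """Total capacity in GB for one HBM stack."""
    return layers * die_gb

print(stack_capacity_gb(12))  # current 12-layer stacks -> 36 GB
print(stack_capacity_gb(16))  # planned 16-layer stacks -> 48 GB
```

By the same assumption, today’s 12-layer stacks would land at 36GB, so the 16-layer version adds a third more capacity per stack without widening the package footprint.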

Samsung hasn’t named the customer behind the commercial deployment, but the timing and the performance targets naturally point toward major AI infrastructure players. Next-generation AI architectures are increasingly focused on lowering latency and improving responsiveness, and memory bandwidth and capacity are central to achieving those goals. HBM4’s combination of high pin speeds, stacked capacity, and advanced logic is positioned to fit that direction, especially for platforms that prioritize fast data movement at scale.

Looking ahead, Samsung expects its HBM revenue this year to grow threefold compared to 2025, showing how aggressively the company believes this rebound can translate into real sales. It also plans to introduce HBM4E in the second half of 2026, indicating that Samsung intends to keep iterating quickly as competition heats up across the HBM market.

While Samsung still faces a challenge in matching the level of broad customer adoption achieved by its top rivals, the message here is clear: Samsung’s HBM4 is no longer just a roadmap promise. With commercial deployment underway, higher speeds already demonstrated, and bigger stacks on the way, HBM4 could become a pivotal product line in Samsung’s effort to reclaim ground in AI-focused memory.