Samsung has lifted the curtain on its next-generation HBM4 memory at SEDEX 2025, a clear signal that the company is ready to go head-to-head in the rapidly intensifying high-bandwidth memory race powering today’s AI explosion.
HBM4 sits at the heart of modern AI and high-performance computing, where memory bandwidth and efficiency directly influence model training and inference speeds. After a period of slower momentum, Samsung appears to be back in form. According to DigiTimes, the company’s HBM4 logic die yield has reached around 90%, a milestone that strongly suggests mass production is on track and major delays are unlikely.
To accelerate early adoption, Samsung is reportedly focusing on a three-pronged strategy: competitive pricing, higher production capacity, and faster pin speeds. The pin speed target is said to hover around 11 Gbps, which would place it ahead of competing offerings from SK hynix and Micron. While NVIDIA has not yet greenlit Samsung as an HBM4 supplier, the company’s progress indicates growing confidence in securing key design wins.
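For rough context, assuming the JEDEC HBM4 baseline of a 2048-bit interface per stack (a figure not cited in the report), an 11 Gbps pin speed would work out to roughly 11 Gbps × 2048 bits ÷ 8 ≈ 2.8 TB/s of theoretical bandwidth per stack, comfortably above the ~2 TB/s class figures typically quoted for HBM3E.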
The competitive landscape is heating up beyond Samsung. SK hynix showcased its own HBM4 modules, developed in collaboration with TSMC, at the same event. With multiple heavyweights moving quickly, the DRAM market is poised for a new phase of intense rivalry, driven by unprecedented demand from AI data centers, accelerators, and next-gen GPUs.
The takeaway is straightforward: with yields climbing, performance targets rising, and production plans solidifying, HBM4 is on the verge of becoming the backbone of AI hardware. Expect the next wave of supply agreements, performance benchmarks, and capacity ramps to shape leadership in this critical market over the coming year.