Samsung Set to Begin February HBM4 Deliveries to Nvidia and AMD

Samsung Electronics is gearing up for a major step in the AI hardware race, with reports suggesting the company is close to starting shipments of its next-generation high-bandwidth memory, HBM4. According to sources cited in recent coverage, Samsung could begin delivering its sixth-generation HBM4 chips to Nvidia as soon as February. A separate report indicates shipments may begin even earlier, potentially next month, and could cover both Nvidia and AMD.

HBM4 belongs to a specialized class of stacked memory designed for extreme data throughput, making it a critical component in today’s AI accelerators, data center GPUs, and other high-performance computing systems. As demand for AI training and inference continues to surge, high-bandwidth memory has become one of the most important pieces of the supply chain. The faster and more efficiently memory can move data, the better it can keep powerful GPUs fed with the information they need—especially in large-scale AI workloads, where throughput is often limited by memory bandwidth rather than raw compute.
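To see why bandwidth matters so much, consider a rough back-of-envelope sketch of bandwidth-bound inference. All figures below (model size, precision, bandwidth) are illustrative assumptions for the sake of the arithmetic, not published HBM4 or GPU specifications:

```python
# Back-of-envelope sketch: why memory bandwidth bounds AI inference speed.
# If generating each token requires reading every model weight from memory
# once, then aggregate memory bandwidth caps tokens per second.

def bandwidth_bound_tokens_per_sec(params_billions: float,
                                   bytes_per_param: int,
                                   bandwidth_tb_s: float) -> float:
    """Upper bound on decode tokens/sec when all weights are read per token."""
    weight_bytes = params_billions * 1e9 * bytes_per_param
    bandwidth_bytes_per_sec = bandwidth_tb_s * 1e12
    return bandwidth_bytes_per_sec / weight_bytes

# Hypothetical example: a 70B-parameter model in FP16 (2 bytes per parameter)
# on an accelerator with 2 TB/s of aggregate memory bandwidth.
print(round(bandwidth_bound_tokens_per_sec(70, 2, 2.0), 1))  # ~14.3 tokens/sec
```

Under these assumptions, doubling memory bandwidth roughly doubles the token rate, which is why each HBM generation is so closely watched by GPU makers.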

If Samsung begins HBM4 shipments on this timeline, it could strengthen the company’s position in the competitive HBM market at a moment when Nvidia and AMD are both pushing aggressively into next-generation AI platforms. Early delivery windows also hint at preparations for upcoming GPU and accelerator launches, since memory qualification and supply planning typically happen well ahead of major product rollouts.

While exact volumes and contract details haven’t been confirmed in these reports, the takeaway is clear: Samsung appears to be nearing a key milestone in HBM4 readiness, and top AI chipmakers may soon have another major source for the high-bandwidth memory that underpins modern AI performance.