HBM3E and HBM4 chips on display in a lighted frame.

Samsung’s HBM Comeback: Co-CEO Signals a Bold Sprint Toward HBM4 Leadership

Samsung’s position in high-bandwidth memory (HBM) is starting to look far stronger than it did just a few quarters ago. After a rocky period during the industry’s rapid pivot toward AI-focused memory, new signals suggest Samsung’s next-generation HBM4 could become one of the most powerful and in-demand solutions on the market—potentially helping the company regain momentum and even overtake key rivals in market share over the next few years.

For decades, Samsung has been a dominant force in DRAM, but the explosion in demand for HBM in AI accelerators changed the competitive landscape. When the market shifted aggressively toward HBM3, Samsung reportedly ran into obstacles that slowed its progress with major customers. Early hurdles included DRAM yield challenges and thermal concerns, which contributed to delayed certifications from top-tier clients such as AMD and NVIDIA. While competitors benefited from the surge in AI-driven orders, Samsung’s HBM business faced questions about whether it could keep pace.

That narrative may be changing quickly.

In a recent New Year’s address, Samsung co-CEO and chip chief Jun Young-hyun indicated that external customers are showing renewed confidence in Samsung’s HBM4 direction. The takeaway is clear: Samsung sees HBM4 as a major growth lever tied directly to the AI boom, and customer sentiment is improving. Jun reportedly noted that customers have even remarked that “Samsung is back,” while also emphasizing there is still work ahead to further strengthen competitiveness.

One major reason Samsung appears to be gaining traction is its progress in advanced DRAM development, particularly its early start on 1c DRAM, the sixth-generation 10nm-class process node. That head start can translate into better performance, higher efficiency, and stronger scalability: advantages that matter enormously for AI training and inference hardware where bandwidth and power efficiency are critical. On top of that, Samsung has been working closely with key ecosystem partners, including NVIDIA, to secure a firmer spot in the AI supply chain.

Performance claims are another reason HBM4 is drawing attention. Samsung is said to be preparing HBM4 stacks with per-pin data rates of around 11 Gbps, well above the 8 Gbps baseline in the JEDEC HBM4 specification and among the fastest figures reported in the industry. In a market where every gain in bandwidth can improve accelerator performance and data throughput, speed improvements like this can be a strong differentiator, especially for customers building next-generation AI systems that are constrained by memory bandwidth.
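To put that pin speed in perspective, here is a rough back-of-envelope sketch of per-stack bandwidth. It assumes the 2,048-bit per-stack interface defined in the JEDEC HBM4 standard; the 11 Gbps figure is the reported rate, not a confirmed Samsung specification.

```python
# Rough per-stack bandwidth estimate for HBM4 (illustrative, not official specs).
pin_speed_gbps = 11          # reported per-pin data rate, in Gbps (assumption from reports)
interface_width_bits = 2048  # per-stack I/O width defined by the JEDEC HBM4 standard

bandwidth_gbps = pin_speed_gbps * interface_width_bits  # total gigabits per second
bandwidth_gb_per_s = bandwidth_gbps / 8                 # convert to gigabytes per second

print(f"~{bandwidth_gb_per_s:,.0f} GB/s per stack")     # ~2,816 GB/s, roughly 2.8 TB/s
```

For comparison, today's fastest HBM3E stacks, running at roughly 9.6 to 9.8 Gbps over a 1,024-bit interface, top out at around 1.2 TB/s per stack, so a jump of this magnitude would be significant for bandwidth-bound accelerators.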

Beyond raw specs, Samsung’s scale could also play a meaningful role. With extensive DRAM manufacturing capacity and supply infrastructure, the company can approach HBM4 with a combination of volume readiness and pricing strategy. Competitive pricing—paired with performance that meets or exceeds customer expectations—can be a potent mix when hyperscalers and chipmakers are placing massive AI-related memory orders.

Looking ahead, market forecasts are increasingly optimistic about Samsung's trajectory. Modeling cited by analysts suggests Samsung's HBM market share could surpass SK hynix's by 2027. If that projection holds, it would mark a major turnaround after Samsung recently ceded the lead in this segment, and it would underscore how quickly the competitive rankings can shift in the AI memory era.

Of course, this isn’t a winner-takes-all market. The scale of AI-driven demand is so large that multiple memory suppliers are positioned to benefit. With AI accelerators expanding across data centers, enterprise deployments, and cloud platforms, companies like Micron, SK hynix, and Samsung all have strong incentives to ramp HBM capacity and lock in long-term customer commitments.

Still, Samsung’s apparent rebound—paired with strong customer interest in HBM4 performance and improved execution—sets the stage for one of the most closely watched battles in the semiconductor memory industry through 2027 and beyond.