Samsung’s fortunes in the high-bandwidth memory (HBM) sector may be on the rise thanks to AMD’s adoption of its HBM3E memory in the company’s latest AI accelerators. The development could also set the stage for a long-awaited qualification from NVIDIA.
Samsung’s journey in the AI segment has been bumpy, especially with its foundry division struggling for several quarters. While there was anticipation that NVIDIA might partner with the company, Samsung previously failed to pass NVIDIA’s stringent qualification process, a setback that appeared to stall its progress in this competitive market.
However, recent developments suggest a turning point. AMD has confirmed the use of Samsung’s HBM3E 12-Hi stacks in its newest AI accelerators, a partnership that could mark a significant shift in industry dynamics. Samsung is also reportedly in talks with AMD to supply HBM4 memory for the upcoming Instinct MI400 accelerator lineup.
AMD has revealed that its Instinct MI350X and MI355X AI accelerators will incorporate HBM3E memory from both Samsung and Micron. With these GPUs offering 288 GB of memory, it is evident that AMD is leveraging Samsung’s advanced 12-Hi stacks. Moreover, AMD’s plans to expand into rack-scale solutions point to rising demand for HBM3E, which could boost Samsung’s prospects significantly.
As Samsung gears up to ramp HBM4 production in the second half of the year, and with potential integration into AMD’s Instinct MI400 accelerators, the company’s market position could strengthen. Samsung is keen to regain its edge in the industry, particularly as rivals SK hynix and Micron have made substantial strides by aligning with NVIDIA. This next phase with AMD, backed by a robust HBM4 lineup, could be the catalyst Samsung needs.