As the global race to build bigger and faster AI systems accelerates, the bottleneck is changing. It’s no longer just about who makes the most powerful GPU architecture. Increasingly, the real limitation comes down to memory physics—how quickly data can move, how much can sit close to the processor, and how efficiently that memory can be stacked and cooled at scale.
Right now, High Bandwidth Memory (HBM) sits at the center of modern AI computing because it delivers the kind of bandwidth and low latency that training and running large AI models demand. The challenge is supply. HBM capacity is heavily concentrated among just three major manufacturers: Samsung, SK Hynix, and Micron. That tight concentration matters because it can restrict production, influence pricing, and slow down how quickly new AI servers and accelerators can roll out across the industry.
With demand rising sharply, the conversation is starting to shift toward alternatives—new memory architectures that could reduce reliance on HBM or complement it in future AI hardware. One idea gaining attention is ZAM, a proposed alternative approach that Intel and SoftBank are pushing into the spotlight through major industry efforts. The big question is whether ZAM could realistically replace HBM, or whether it is more likely to become one more option in a growing toolbox of AI memory technologies.
What’s driving interest in alternatives is simple: AI workloads are becoming so data-hungry that even the best compute can sit underutilized if memory can’t keep up. As models grow, the need for larger memory pools and higher throughput becomes just as important as raw processing power. In that environment, any technology that can offer comparable bandwidth, better scalability, or easier manufacturing could become a serious contender—especially if it helps diversify supply beyond the current HBM leaders.
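The "compute sits underutilized" point can be made concrete with a back-of-envelope roofline calculation. The numbers below are illustrative assumptions, not any specific product's spec: a hypothetical accelerator with 1,000 TFLOP/s of peak compute and 3 TB/s of memory bandwidth. Attainable throughput is capped by whichever runs out first—raw compute, or bandwidth times the kernel's arithmetic intensity (FLOPs performed per byte moved):

```python
# Back-of-envelope roofline model. The hardware numbers are
# hypothetical, chosen only to illustrate the bandwidth bottleneck.
PEAK_TFLOPS = 1000.0   # assumed peak compute throughput, TFLOP/s
MEM_BW_TBPS = 3.0      # assumed memory bandwidth, TB/s

def attainable_tflops(flops_per_byte: float) -> float:
    """Throughput is the lower of peak compute and
    bandwidth x arithmetic intensity (the roofline)."""
    return min(PEAK_TFLOPS, MEM_BW_TBPS * flops_per_byte)

def utilization(flops_per_byte: float) -> float:
    """Fraction of peak compute actually usable at this intensity."""
    return attainable_tflops(flops_per_byte) / PEAK_TFLOPS

# A memory-bound workload (only a few FLOPs per byte moved, as in
# much of large-model inference) leaves most of the chip idle:
print(f"{utilization(2.0):.1%}")   # bandwidth-bound: under 1% of peak
# A compute-dense kernel (hundreds of FLOPs per byte) saturates compute:
print(f"{utilization(500.0):.1%}")  # compute-bound: 100% of peak
```

On these assumed numbers, a kernel doing 2 FLOPs per byte can use only 6 of the 1,000 available TFLOP/s, which is why faster memory—not faster compute—moves the needle for such workloads.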
For now, HBM remains the standard that AI hardware is built around, and its supply chain concentration continues to shape the pace of the AI boom. But as Intel and SoftBank push ZAM as an alternative memory architecture, the industry is clearly signaling that the next major breakthrough in AI performance may come from memory innovation, not just faster GPUs.