As the AI boom accelerates, one component is increasingly shaping performance, supply chains, and pricing across the entire hardware stack: memory. A new research note from Korea Investment & Securities (KIS) argues that memory prices may not fall anytime soon, even if some in the market expect demand to cool. The reason is simple but powerful: in modern AI infrastructure, memory capacity is tightly linked to how productive an AI GPU can be.
KIS frames memory as a core lever behind GPU utilization. When an AI system has more HBM and DRAM capacity available close to the GPU, the accelerator can keep working instead of waiting for data to arrive. That higher utilization translates into more tokens processed over time and, importantly for AI operators, a lower cost per token. In other words, adding memory doesn’t just improve a benchmark score—it can improve the economics of running large AI models.
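To make that relationship concrete, here is a minimal back-of-envelope sketch. All figures are hypothetical (the report does not give specific GPU costs or throughput numbers); the point is only that cost per token scales inversely with utilization when the hourly cost of the GPU is fixed.

```python
# Illustrative sketch (all figures hypothetical): how GPU utilization,
# which extra memory capacity helps raise, changes cost per token.

def cost_per_token(gpu_hour_cost, tokens_per_sec_peak, utilization):
    """Cost per token for a GPU billed at gpu_hour_cost ($/hr) with a
    given peak throughput and a utilization fraction between 0 and 1."""
    tokens_per_hour = tokens_per_sec_peak * 3600 * utilization
    return gpu_hour_cost / tokens_per_hour

# Same GPU, same hourly cost; more memory keeps it fed more of the time.
low  = cost_per_token(gpu_hour_cost=4.0, tokens_per_sec_peak=10_000, utilization=0.4)
high = cost_per_token(gpu_hour_cost=4.0, tokens_per_sec_peak=10_000, utilization=0.8)

print(f"40% utilization: ${low:.7f}/token")
print(f"80% utilization: ${high:.7f}/token")  # doubling utilization halves cost/token
```

Doubling utilization halves the cost per token without buying a single additional GPU, which is why operators treat memory capacity as an economic lever rather than a spec-sheet detail.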
That dynamic is influencing buying behavior in a major way. Hyperscalers have reportedly placed long-term orders for memory capacity because memory directly affects throughput and efficiency. And KIS believes this trend may continue, meaning that even if a typical supply crunch period passes, the pricing effects could linger. If customers are still prioritizing memory to maximize GPU productivity, demand pressure doesn’t disappear just because the calendar turns.
The report also pushes back on the idea that rising prices alone will force demand down. KIS points out that the market may be underestimating how much system-level performance matters in AI deployments. Even if DRAM prices remain elevated—KIS notes DRAM prices are running about three times higher year over year—the financial payoff from better GPU utilization can still justify the spend. For companies training and serving models at scale, memory can be a multiplier on GPU value rather than a simple cost line item.
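The same logic can be sketched as a payback calculation. Again, every number below is hypothetical and chosen only for illustration; the structure of the argument is what matters: if each point of utilization is worth roughly the GPU's hourly cost, extra memory spend can pay for itself even at elevated prices.

```python
# Hypothetical back-of-envelope: even at ~3x last year's DRAM prices,
# extra memory can pay for itself if it lifts GPU utilization enough.

def memory_payback_hours(memory_cost, gpu_hour_cost, util_before, util_after):
    """Hours of operation needed for an extra memory purchase to pay for
    itself, valuing each utilization point at the GPU's hourly cost."""
    extra_value_per_hour = gpu_hour_cost * (util_after - util_before)
    return memory_cost / extra_value_per_hour

# $6,000 of additional DRAM attached to a $4/hr GPU, lifting
# utilization from 40% to 70% (all figures invented for the example):
hours = memory_payback_hours(6_000, 4.0, 0.40, 0.70)
print(f"Payback in {hours:,.0f} GPU-hours (~{hours / 24:.0f} days of continuous use)")
```

Under these made-up assumptions the memory pays for itself well within a typical hardware depreciation window, which is the shape of the argument KIS makes for why high prices alone need not choke off demand.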
KIS also highlights knock-on effects across other memory types. Strong demand for HBM and conventional DRAM, combined with tight manufacturing capacity, is contributing to higher demand for NAND as well. This runs counter to the belief that shifting more AI data toward NAND might relieve pressure on DRAM. Instead, the report suggests NAND demand could stay strong as AI systems evolve and integrate more storage alongside compute and memory.
Another reason NAND can see sustained momentum is pricing flexibility. Compared with DRAM, NAND is far cheaper per bit, which can make it easier for buyers to scale purchases sharply when workloads surge. That elasticity could support continued high volumes even during periods of extreme demand.
Finally, the report notes that tight HBM supply and aggressive hyperscaler demand are pushing the industry to innovate. Memory makers are exploring ways to increase the number of memory dies inside a single package, including advanced packaging techniques such as hybrid bonding, which joins stacked dies with direct copper-to-copper connections instead of the conventional microbump interconnects. The takeaway is that demand is not only driving prices; it is also accelerating technical change in how next-generation memory is built.
Overall, KIS’s message is that memory is no longer a secondary component in the AI era. It is a critical determinant of AI GPU efficiency, total system performance, and cost per token—factors that can keep HBM, DRAM, and even NAND demand higher than many expect, and potentially keep memory prices elevated for longer.