Micron just delivered a record fiscal Q2, powered by surging demand for DRAM, NAND, and high-bandwidth memory (HBM). But the bigger takeaway isn’t only the strong quarter—it’s what Micron believes comes next. The company’s CEO says the rapid rise of AI is still in the “first innings,” and that memory demand is poised to grow far beyond today’s already-tight market as AI expands from training into large-scale inference.
In an interview, Micron CEO Sanjay Mehrotra described memory as a “strategic asset” in the AI era. As more AI services move from experimentation to real-world deployment, the industry’s focus shifts toward inference, where models generate outputs in real time for users and businesses. That shift drives a massive need for token throughput, and token throughput depends on speed and capacity across the memory stack. In simple terms: to make AI respond faster and handle more users at once, systems need more memory, and faster memory.
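To make that concrete, here is a rough, illustrative calculation (the bandwidth and model-size numbers are assumptions, not Micron figures): during autoregressive decoding, producing each token requires streaming roughly the full set of model weights from memory, so memory bandwidth sets a hard ceiling on single-stream token throughput.

```python
# Back-of-envelope sketch: why inference token throughput is memory-bound.
# In autoregressive decoding, generating one token reads (roughly) all
# model weights once, so tokens/sec per accelerator is capped at
# memory_bandwidth / model_size_in_bytes. Illustrative numbers only.

def decode_ceiling_tokens_per_sec(bandwidth_gb_per_s: float, model_gb: float) -> float:
    """Upper bound on single-stream decode rate for a bandwidth-bound model."""
    return bandwidth_gb_per_s / model_gb

MODEL_GB = 70.0  # assumed: a 70B-parameter model stored as 8-bit weights (~70 GB)

for label, bandwidth in [("HBM3E-class memory, ~5 TB/s", 5_000.0),
                         ("HBM4-class memory, ~10 TB/s", 10_000.0)]:
    ceiling = decode_ceiling_tokens_per_sec(bandwidth, MODEL_GB)
    print(f"{label}: ceiling ~{ceiling:.0f} tokens/s")
```

Under these assumptions, doubling memory bandwidth roughly doubles the throughput ceiling, which is why each HBM generation matters so much for inference economics.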
That’s especially important because AI compute doesn’t scale on processors alone; memory is a core part of performance. AI GPUs are hungry for HBM, while AI CPUs lean heavily on DRAM, and both categories are under supply pressure. Micron’s view is that the current environment isn’t just a story of higher demand or stronger pricing: supply itself is difficult to expand quickly, and that constraint is now visible across the market.
The broader hardware roadmap points to why the memory squeeze may persist. Next-generation accelerators are pushing into denser, higher-bandwidth HBM standards such as HBM4, aiming to raise both bandwidth and capacity. On the system side, DRAM requirements are climbing quickly as newer AI-heavy workloads push platforms to support dramatically larger memory pools—figures like 400GB are being discussed as expectations rise for AI-enabled servers and enterprise deployments. At the same time, LPDDR is gaining momentum in large-scale environments thanks to its power efficiency, which matters when AI infrastructure is deployed at massive scale and energy costs become a limiting factor.
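A rough sketch shows why energy per bit becomes decisive at scale. The pJ/bit figures below are assumptions chosen for illustration (DDR5 interface energy is often cited in the mid-teens of pJ/bit, LPDDR5X in the mid-single digits), not vendor specifications:

```python
# Illustrative DRAM interface power comparison: power ≈ energy/bit × bandwidth.
# The pJ/bit values below are assumptions for this sketch, not vendor specs.

def dram_power_watts(pj_per_bit: float, bandwidth_gb_per_s: float) -> float:
    """Interface power drawn while sustaining the given memory traffic."""
    bits_per_second = bandwidth_gb_per_s * 1e9 * 8
    return pj_per_bit * 1e-12 * bits_per_second

SUSTAINED_GB_S = 500.0  # assumed sustained memory traffic per server

for name, pj in [("DDR5    (assumed ~15 pJ/bit)", 15.0),
                 ("LPDDR5X (assumed ~6 pJ/bit)", 6.0)]:
    watts = dram_power_watts(pj, SUSTAINED_GB_S)
    # At 1,000 servers, `watts` W per server aggregates to `watts` kW.
    print(f"{name}: ~{watts:.0f} W per server, ~{watts:.0f} kW per 1,000 servers")
```

Under these assumptions the gap is tens of watts per server, which compounds into tens of kilowatts per thousand servers; that is the scale at which LPDDR’s efficiency stops being a mobile niche and starts shaping data-center design.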
Micron says its record Q2 results were driven by strong demand, tight industry supply, and execution that delivered new highs across revenue, gross margin, earnings per share, and free cash flow. The company also expects to set further records in fiscal Q3, suggesting the momentum isn’t fading.
One of the most notable forecasts is how much of the memory market AI could consume. On the current trajectory, AI demand for DRAM and NAND is expected to exceed 50% of the industry’s total addressable market this year. Micron says demand from both traditional servers and AI servers remains robust, but its ability to meet that demand is constrained by limited DRAM and NAND supply. It also expects DRAM demand to keep rising as refreshed platforms and newer systems roll out.
On the product front, Micron is moving quickly through multiple AI-focused memory generations. The company is supplying HBM4 36GB (12-Hi) DRAM for NVIDIA’s Vera Rubin platform while continuing to drive its current HBM3E process to mature yields. Looking ahead, Micron is developing next-generation HBM4E and expects it to ramp next year.
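The stack arithmetic behind that 36GB figure is straightforward: a 12-high stack reaching 36GB implies 24-gigabit (3GB) DRAM dies.

```python
# Capacity check for the HBM4 36GB (12-Hi) part: dies per stack x die density.
dies_per_stack = 12
gigabits_per_die = 24  # 24 Gb = 3 GB per DRAM die
stack_gb = dies_per_stack * gigabits_per_die / 8  # convert gigabits to gigabytes
print(stack_gb)  # 36.0 GB per stack
```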
Beyond HBM, Micron is also expanding LPDDR-based solutions aimed at high-capacity, high-efficiency deployments. It recently introduced a 256GB SOCAMM2 memory solution built on LPDDR5X modules and designed to scale up to 2TB of capacity. The company also notes it is supplying DDR5 memory for Groq’s LPU platform, which is positioned to offer up to 12TB of capacity, another sign of how quickly AI hardware is scaling memory footprints.
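If the 2TB ceiling is reached simply by populating more 256GB modules (an assumption on our part; the article does not spell out the topology), the module count is easy to check:

```python
# Assumed: the 2TB SOCAMM2 ceiling comes from populating multiple 256GB modules.
module_gb = 256
target_gb = 2 * 1024  # 2TB expressed in GB
print(target_gb // module_gb)  # 8 modules per 2TB memory pool
```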
While data center and AI demand are booming, Micron’s outlook for consumer markets is more muted. The company expects PC and mobile unit volumes to decline by low-double-digit percentages, citing constrained supply and higher prices. Even so, one trend is clear: 32GB is emerging as the preferred configuration for PCs expected to run agentic AI workloads locally, a sign that AI features are raising baseline memory expectations in everyday devices.
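A rough, illustrative memory budget suggests why 32GB is the comfortable floor for local agentic AI; every figure below is an assumption for this sketch, not an industry specification:

```python
# Illustrative memory budget for a PC running an agentic AI model locally.
# All figures are assumptions for this sketch.

def weights_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate size of quantized model weights in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

model = weights_gb(params_billions=14, bits_per_weight=4)  # ~7 GB, 4-bit 14B model
kv_cache = 4.0      # assumed long-context session state
os_and_apps = 12.0  # assumed OS, browser, and background working set

total = model + kv_cache + os_and_apps
print(f"~{total:.0f} GB in use -> 16GB is tight, 32GB leaves headroom")  # ~23 GB
```

Under these assumptions a mid-sized local model plus a normal desktop working set already crowds a 16GB machine, which is consistent with 32GB becoming the default ask.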
The message from Micron is straightforward: AI is moving into a phase where memory capacity and speed will determine real-world performance, not just raw compute. And with supply still tight and next-generation AI systems demanding more bandwidth and larger memory pools, the push for faster, denser memory looks like it’s only getting started.