Micron has officially begun mass production of the Micron 9650, positioning it as the world’s first PCIe Gen6 data center SSD to reach this milestone. Designed for the next wave of AI infrastructure, the 9650 targets one of the biggest bottlenecks in modern computing: moving massive amounts of data quickly and efficiently between storage and compute.
After first previewing its PCIe Gen6 SSD plans roughly two years ago, Micron is now moving from promise to product: the 9650 delivers up to 28 GB/s sequential read and 14 GB/s sequential write performance. That is a major leap in real-world throughput, roughly doubling sequential read speed compared with typical PCIe Gen5 solutions while also offering about a 40% improvement in sequential write. Beyond raw bandwidth, Micron also highlights up to 2x better efficiency, a key advantage as energy limits increasingly shape data center design.
Two series for different data center workloads
Micron is rolling out the 9650 in two lines aimed at different types of enterprise demand.
The Micron 9650 PRO series focuses on read-intensive environments, rated at 1 drive write per day (DWPD). It will be available in 7.68TB, 15.36TB, and 30.72TB capacities, making it a fit for workloads where fast reads dominate—such as model serving, retrieval workflows, and large-scale data access.
The Micron 9650 MAX series is tuned for mixed-use scenarios, rated at 3 DWPD. It comes in 6.4TB, 12.8TB, and 25.6TB capacities, targeting environments that require heavier sustained writes in addition to strong read performance.
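DWPD ratings translate into total endurance once a warranty period is fixed. A minimal sketch of that arithmetic, assuming a 5-year warranty period (typical for enterprise SSDs, but not stated in the article):

```python
# Convert a DWPD rating into total terabytes written (TBW).
# The 5-year warranty period is an assumption; the article does not state one.

def tbw(capacity_tb: float, dwpd: float, years: float = 5.0) -> float:
    """TBW = capacity x drive-writes-per-day x days in the warranty period."""
    return capacity_tb * dwpd * 365 * years

# Largest 9650 PRO (1 DWPD) vs largest 9650 MAX (3 DWPD)
print(round(tbw(30.72, 1)))  # ~56,064 TB over 5 years
print(round(tbw(25.6, 3)))   # ~140,160 TB over 5 years
```

On these assumptions, the smaller-capacity MAX drive still carries roughly 2.5x the lifetime write budget of the largest PRO drive, which is the point of the mixed-use rating.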
PCIe Gen6.2 and NVMe 2.0, backed by Micron G9 TLC NAND
Across the lineup, Micron says the 9650 uses a PCIe Gen6.2 x4 interface with NVMe 2.0 support. The drives are built on Micron's G9 TLC NAND, which runs at up to a 3.6 GB/s I/O rate, described as the fastest ever in a shipping SSD.
Why PCIe Gen6 matters for AI training and inference
PCIe Gen6 is a major step in I/O architecture because it generally doubles available bandwidth over PCIe Gen5. For AI systems, that additional headroom is increasingly important. Large models and modern inference patterns—especially extended context windows and retrieval-augmented generation pipelines—can be heavily constrained by how quickly data can be streamed in and accessed.
Micron is positioning the 9650 as a way to keep accelerators supplied with data more consistently, helping reduce stalls and improving overall utilization. As next-generation AI servers evolve, data movement is also shifting toward more direct paths between accelerators and storage, reducing reliance on the CPU. Higher PCIe bandwidth helps enable that peer-to-peer style architecture by removing limits that can choke data flow.
Performance gains versus PCIe Gen5
Micron highlights improvements across both sequential throughput and random performance when compared to PCIe Gen5-class SSDs:
Sequential read: 28,000 MB/s vs 14,000 MB/s (100% higher)
Sequential write: 14,000 MB/s vs 10,000 MB/s (40% higher)
Random read: 5.5M IOPS vs 3.3M IOPS (67% higher)
Random write: 900K IOPS vs 720K IOPS (25% higher)
These gains are aimed squarely at data center environments where storage performance can directly influence time-to-train, inference responsiveness, and overall system throughput.
Performance per watt: faster without breaking the power budget
One of the most important messages around the Micron 9650 is that performance improvements don’t have to come with an unacceptable power penalty. With AI infrastructure already consuming a growing share of electricity and power availability becoming a real constraint, storage that demands significantly more energy can create new bottlenecks.
Micron emphasizes “strong performance-per-watt,” stating that at a 25-watt power state, the 9650 can deliver twice the performance of PCIe Gen5 drives. It also cites efficiency advantages like:
Sequential read MB/s per watt: 1,120 vs 560 (2x better)
Sequential write MB/s per watt: 560 vs 401 (1.4x better)
Random read KIOPS per watt: 220 vs 132 (1.7x better)
Random write KIOPS per watt: 36 vs 28.8 (1.25x better)
In practical terms, faster transfers at similar power can mean lower total energy usage per task—helping operators hit sustainability targets while still scaling AI capacity.
Air-cooled and liquid-cooled options: storage enters the thermal spotlight
As data centers push higher performance across CPUs and GPUs, cooling has become a platform-level consideration rather than a component-level afterthought. Micron is acknowledging that reality by offering the 9650 in both air-cooled and liquid-cooled configurations.
The move is a signal that high-performance PCIe Gen6 SSDs may increasingly live in systems where airflow alone isn’t enough, and where storage must be integrated into broader thermal strategies—especially in dense AI servers.
A turning point for high-performance data center storage
With the 9650 now in mass production and undergoing qualification with major OEM and AI data center customers, Micron is framing this as a broader shift: storage is no longer just a supporting component. In AI-focused infrastructure, it can shape system performance, efficiency, and return on investment.
The Micron 9650 is essentially built for a new expectation—storage that doesn’t merely “keep up,” but actively helps keep GPUs and accelerators fed, reduces data movement bottlenecks, and supports the next generation of AI training and inference at scale.