Samsung is already moving quickly on next-generation mobile DRAM, with reports indicating the company has begun shipping LPDDR6X memory samples to key partners, including Qualcomm. While the broader industry is still waiting for the official LPDDR6 era to begin, this early sampling suggests Samsung is laying the groundwork for its next generation of high-speed, power-efficient memory for AI and advanced computing.
According to information shared by a Korean outlet, Samsung has completed major development work on LPDDR6 and is preparing for mass production in time for a planned rollout in the second half of 2026. The initial LPDDR6 performance target is said to be 10.7 Gbps per pin, alongside roughly 21% better power efficiency compared to LPDDR5. Improved versions are expected to push performance further, with speeds projected to reach 14.4 Gbps or higher.
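For a rough sense of what those per-pin speeds mean in practice, the sketch below converts them into per-channel bandwidth. The 24-bit channel width (two 12-bit sub-channels) follows the published JEDEC LPDDR6 architecture, but the four-channel package at the end is purely an illustrative assumption, not a confirmed product configuration.

```python
# Back-of-envelope LPDDR6 bandwidth math (illustrative, not official figures).
# The 24-bit channel (two 12-bit sub-channels) follows the JEDEC LPDDR6
# architecture; the 4-channel package below is an assumption for illustration.

def channel_bandwidth_gbs(per_pin_gbps: float, channel_width_bits: int = 24) -> float:
    """Peak bandwidth of a single channel in GB/s."""
    return per_pin_gbps * channel_width_bits / 8

for per_pin in (10.7, 14.4):  # initial target and projected follow-up speed
    per_channel = channel_bandwidth_gbs(per_pin)
    print(f"{per_pin} Gbps/pin -> {per_channel:.1f} GB/s per channel, "
          f"{4 * per_channel:.1f} GB/s for an assumed 4-channel package")
```

At the initial 10.7 Gbps target, that works out to roughly 32 GB/s per channel, rising to about 43 GB/s at the projected 14.4 Gbps.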
LPDDR6X is positioned as an enhanced form of LPDDR6, designed to push the standard's capabilities further. While JEDEC, the industry body that defines memory standards, has not yet formally locked in the final LPDDR6X specifications, more details are expected to emerge over the course of the year as development progresses and partners begin evaluating early hardware.
One of the most interesting parts of this report is Qualcomm’s apparent role in testing these samples. The company is expected to use LPDDR6X in a future AI accelerator known as the AI250, a successor to the AI200. These chips are designed for AI inferencing workloads and rely on LPDDR memory rather than the more expensive high-bandwidth memory (HBM) used by many data center accelerators.
That choice reflects a growing reality in the AI hardware market. HBM offers massive bandwidth, but it also comes with higher cost, higher power demands, and complicated manufacturing requirements tied to advanced packaging, validation, and testing. With supply constraints also affecting the broader memory ecosystem, LPDDR can look increasingly attractive for AI solutions that prioritize efficiency and cost-effective scaling instead of chasing the absolute highest bandwidth at any price.
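To put the bandwidth side of that trade-off in perspective, the rough comparison below pits a single HBM3 stack against a conventional LPDDR5X package. The HBM3 figure follows the widely cited 6.4 Gbps across a 1024-bit interface; the x64 LPDDR5X package at 8.5 Gbps is a ballpark assumption used only for illustration.

```python
# Ballpark bandwidth comparison: one HBM3 stack vs. LPDDR5X packages.
# Values are illustrative public-spec figures, not vendor measurements.

def peak_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Peak interface bandwidth in GB/s."""
    return per_pin_gbps * bus_width_bits / 8

hbm3_stack = peak_gbs(6.4, 1024)   # ~819 GB/s per HBM3 stack
lpddr5x_pkg = peak_gbs(8.5, 64)    # ~68 GB/s for an assumed x64 package

print(f"HBM3 stack:       {hbm3_stack:.0f} GB/s")
print(f"LPDDR5X package:  {lpddr5x_pkg:.0f} GB/s")
print(f"LPDDR packages needed to match one stack: {hbm3_stack / lpddr5x_pkg:.0f}")
```

It takes on the order of a dozen LPDDR packages to match one HBM stack's bandwidth, which is exactly the gap LPDDR-based designs accept in exchange for lower cost, lower power, and much larger capacity.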
In terms of capacity, the report suggests Qualcomm could scale its LPDDR-based accelerators aggressively. The AI200 is expected to support up to 768 GB of LPDDR memory, while the next-generation AI250 paired with LPDDR6X could exceed 1 TB. If that happens, it would highlight a different approach to AI accelerator design: leaning on large memory pools and power efficiency to deliver strong inferencing performance, particularly where cost and deployment flexibility matter.
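As a hedged sketch of how such capacities might be composed, the arithmetic below assumes 32 GB per package, in line with today's highest-density LPDDR5X parts; the actual package density and count in the AI200 and AI250 have not been confirmed.

```python
# Illustrative capacity math for LPDDR-based accelerator cards.
# 32 GB per package is an assumption based on current high-density
# LPDDR parts, not a confirmed Qualcomm AI200/AI250 configuration.

PACKAGE_GB = 32

for name, target_gb in (("AI200 (reported)", 768), ("AI250 (speculative)", 1024)):
    packages = target_gb // PACKAGE_GB
    print(f"{name}: {target_gb} GB -> {packages} packages at {PACKAGE_GB} GB each")
```

Even at that density, reaching 768 GB means two dozen memory packages on a single card, underscoring how much board area and channel fan-out these designs dedicate to memory capacity rather than raw bandwidth.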
Even with Samsung already sending samples, LPDDR6X itself is not expected to arrive soon. Realistically, the technology is still a couple of years out, with expectations pointing to late 2027 or early 2028 for broader availability. Until then, the industry will be watching LPDDR6 ramp up first, with LPDDR6X representing the next step in the evolution of fast, efficient DRAM for AI and next-generation computing devices.