SK hynix Reportedly Merging DRAM and NAND Into a Unified High-Bandwidth Storage Stack to Supercharge On-Device AI

SK hynix is reportedly building a new kind of memory package for mobile devices that could supercharge on-device AI. Called High-Bandwidth Storage (HBS), the design blends DRAM and NAND into a single, tightly integrated module to deliver higher data speeds, lower latency, and improved energy efficiency for next-generation smartphones and tablets. According to ETNews, the company expects this approach to noticeably boost AI performance on mobile chipsets.

At the heart of HBS is a packaging technique dubbed Vertical Wire Fan-Out (VFO). Instead of relying on conventional curved wiring paths, VFO stacks up to 16 layers of DRAM and NAND and connects them in straight, vertical lines. Shorter, straighter connections mean less signal loss and less distance for data to travel, which translates into faster processing and higher bandwidth. It’s a similar philosophy to what High-Bandwidth Memory (HBM) brought to GPUs and AI servers, but tuned for the space and power constraints of mobile devices.

The claimed efficiency gains are substantial. SK hynix says VFO reduces the amount of wiring required by a factor of 4.6, trims power consumption by about 5 percent, and improves heat dissipation by roughly 1.4 percent. The company also reports a 27 percent reduction in overall package height, opening the door to slimmer devices and better thermal behavior under sustained workloads.
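To put those percentages in concrete terms, here is a back-of-envelope sketch that applies the reported figures to hypothetical baseline values. The baselines (package height, power draw, wire length) are illustrative placeholders, not numbers from SK hynix; only the reduction factors come from the report.

```python
# SK hynix's reported VFO improvements (from the article).
WIRING_REDUCTION_FACTOR = 4.6   # wiring required drops by a factor of 4.6
POWER_SAVING = 0.05             # ~5% lower power consumption
HEIGHT_REDUCTION = 0.27         # 27% lower overall package height

# Hypothetical conventional-package baselines, for illustration only.
baseline_height_mm = 1.0
baseline_power_mw = 500.0
baseline_wire_mm = 100.0

vfo_height_mm = baseline_height_mm * (1 - HEIGHT_REDUCTION)
vfo_power_mw = baseline_power_mw * (1 - POWER_SAVING)
vfo_wire_mm = baseline_wire_mm / WIRING_REDUCTION_FACTOR

print(f"package height: {baseline_height_mm:.2f} mm -> {vfo_height_mm:.2f} mm")
print(f"power draw:     {baseline_power_mw:.0f} mW -> {vfo_power_mw:.0f} mW")
print(f"wire length:    {baseline_wire_mm:.0f} mm -> {vfo_wire_mm:.1f} mm")
```

Even against modest baselines, the height and wiring reductions are the standouts; the power saving is incremental, but it compounds with the shorter signal paths under sustained AI workloads.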

Another advantage is how HBS is made. Unlike HBM, it doesn’t require forming Through-Silicon Vias (TSVs) to connect stacked chips. Skipping the TSV process can lower manufacturing complexity and cost while improving yield—key benefits when scaling production for mainstream phones and tablets.

When paired with an application processor, this unified HBS module is designed to handle AI workloads directly on the device. SK hynix already has real-world experience with VFO-based DRAM packaging in products like Apple’s Vision Pro, suggesting the technology is mature enough to transition into broader mobile use.

If the rollout matches expectations, HBS could become a foundational upgrade for mobile AI—delivering faster data access, cooler operation, and longer battery life, all inside thinner devices. It’s a practical blueprint for the next era of AI-ready smartphones and tablets, where memory and storage work together as a single high-speed engine.