[Image: Tachyum logo with the tagline 'Doing More With Less' and the headline 'Tachyum Open Sources 281GB/s TDIMM™ for the Future of AI and Computing']

DDR6 TDIMMs Target 2028 Launch: Up to 1TB per DIMM and a 5.5x Bandwidth Boost to 281GB/s

Tachyum has revealed a new open-source memory standard called TDIMM (short for Tachyum DIMM), positioning it as a major leap in memory bandwidth and module capacity for next-generation AI data centers and high-performance computing.

The company says the first implementation, based on DDR5, can deliver a dramatic bandwidth increase over today’s common server modules. In Tachyum’s figures, standard DDR5 RDIMMs top out at about 51 GB/s of bandwidth, while DDR5 TDIMM would reach as high as 281 GB/s. That’s roughly a 5.5x uplift, aimed at feeding modern CPUs and accelerators that are increasingly bottlenecked by memory throughput.
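The quoted uplift is easy to verify from the article's own numbers; a quick sketch using Tachyum's rounded, vendor-supplied figures:

```python
# Bandwidth figures as quoted above (vendor-supplied, rounded).
ddr5_rdimm_gb_s = 51    # standard DDR5 RDIMM, per module
ddr5_tdimm_gb_s = 281   # claimed for DDR5 TDIMM, per module

uplift = ddr5_tdimm_gb_s / ddr5_rdimm_gb_s
print(f"uplift: {uplift:.1f}x")  # ~5.5x, matching the "roughly 5.5x" claim
```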

TDIMM is also designed to scale capacity per module well beyond typical server DIMMs. Tachyum’s reference designs start at 256 GB per module, step up to 512 GB for taller module designs, and extend to an “extra tall” form factor that can reach 1 TB on a single module. If those capacities materialize in real platforms, they could enable denser memory configurations with fewer slots, which is especially attractive for AI training and inference systems that benefit from large in-memory datasets and larger model footprints.

At a technical level, TDIMM changes the module interface in a few notable ways. Compared with DDR5 RDIMM, TDIMM doubles the data path from 64 bits to 128 bits while keeping 16 bits of ECC. It does this with a new 484-pin connector versus the 288-pin connector used by today’s DDR5 RDIMMs. Tachyum says the physical dimensions remain similar to existing DDR5 modules even though the connector and signaling are different, meaning it would require new motherboard and platform support rather than drop-in compatibility.
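Put numerically, the connector grows much less than the data path does; a short sketch using only the pin and bit counts stated above (pin counts are connector totals, so not every pin is a data signal):

```python
# Connector and bus widths from the figures above.
rdimm_pins, tdimm_pins = 288, 484
rdimm_data, tdimm_data = 64, 128  # data bits; both designs keep 16-bit ECC

pin_growth = tdimm_pins / rdimm_pins - 1
data_growth = tdimm_data / rdimm_data - 1
print(f"connector: +{pin_growth:.0%}, data path: +{data_growth:.0%}")
# connector: +68%, data path: +100%
```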

One of the more interesting claims is the efficiency of the redesign: Tachyum states that TDIMM increases signal count by about 38% yet delivers double the bandwidth. It also suggests the design would need around 10% fewer DRAM ICs, which the company argues could translate into approximately 10% lower cost.
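Taken at face value, those numbers imply a notable gain in bandwidth per signal; a minimal sketch of the arithmetic, assuming Tachyum's stated 38%, 2x, and 10% figures:

```python
# Efficiency arithmetic from Tachyum's stated figures (claims, not
# independently verified).
signal_growth = 1.38    # ~38% more signals than DDR5 RDIMM
bandwidth_growth = 2.0  # double the bandwidth
dram_ic_ratio = 0.90    # ~10% fewer DRAM ICs

per_signal = bandwidth_growth / signal_growth
print(f"bandwidth per signal: {per_signal:.2f}x")  # ~1.45x
print(f"implied DRAM IC count: ~{dram_ic_ratio:.0%} of an RDIMM design")
```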

Looking further out, Tachyum is already talking about evolutionary updates to TDIMM that could push bandwidth far beyond today’s roadmaps. The company predicts future TDIMM iterations could scale to 27 TB/s of bandwidth by 2028. For context, it contrasts that with a projected jump from 6.7 TB/s on DDR5 to 13.5 TB/s on DDR6 over the same general timeframe.
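Those roadmap figures are internally consistent with the "double the bandwidth" claim; a quick check (the article quotes the TB/s numbers without specifying per-module vs. per-platform):

```python
# Projected bandwidth figures quoted above, in TB/s.
ddr5_proj, ddr6_proj, tdimm_2028 = 6.7, 13.5, 27.0

print(f"DDR6 over DDR5:  {ddr6_proj / ddr5_proj:.1f}x")   # ~2.0x
print(f"TDIMM over DDR6: {tdimm_2028 / ddr6_proj:.1f}x")  # 2.0x
```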

Tachyum is tying TDIMM directly to the economics of large-scale AI, arguing that memory is a key lever for reducing the cost and power required for massive training workloads. Company statements suggest TDIMM could play a central role in making far larger AI models more affordable and more widely accessible, including models trained on extremely broad bodies of human knowledge.

As with any newly announced hardware standard, the biggest question is execution: industry adoption, real-world performance, platform availability, and an ecosystem of vendors willing to build to the specification. TDIMM is an ambitious proposal with attention-grabbing bandwidth and capacity targets, but its real impact will depend on whether it moves from specification and claims into shipping, validated products across server and data center platforms.