Rambus is taking aim at one of the biggest pain points in modern AI infrastructure: moving enough data, fast enough, between CPUs, GPUs, accelerators, and high-speed storage. As AI clusters grow larger and more complex, bandwidth and latency bottlenecks can quickly limit performance, especially when multiple workloads are competing for the same interconnect resources.
To help solve this, Rambus has announced its new PCIe 7.0 Switch IP featuring Time Division Multiplexing (TDM). The company says this new addition to its interconnect IP lineup is built to address the escalating bandwidth, latency, and scalability demands seen in AI, cloud data centers, and high-performance computing environments.
The core idea behind adding TDM is smarter, more efficient use of PCIe links. Instead of leaving capacity underused or letting certain traffic patterns dominate, TDM allows traffic to be scheduled and multiplexed across shared links in a more controlled way. For system architects building disaggregated or pooled compute designs, this kind of deterministic, low-latency behavior can be especially valuable as resources are shared across many nodes and devices.
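The scheduling idea can be sketched with a toy time-slot arbiter (all names here are hypothetical, not the actual Rambus IP): each traffic class owns fixed slots in a repeating frame, so its share of the link and its worst-case wait are bounded no matter what the other classes are doing.

```python
# Minimal illustrative sketch of time-division multiplexing (TDM) on a
# shared link. Each traffic class is assigned fixed slots in a repeating
# frame, making per-class bandwidth and worst-case latency deterministic.
# Hypothetical names throughout; this is not Rambus's implementation.
from collections import deque

class TdmLink:
    def __init__(self, frame):
        # frame: ordered list of class names, one per time slot,
        # e.g. ["train", "train", "infer", "storage"] gives the
        # "train" class 2 of every 4 slots.
        self.frame = frame
        self.queues = {c: deque() for c in set(frame)}
        self.slot = 0

    def enqueue(self, cls, packet):
        self.queues[cls].append(packet)

    def tick(self):
        # Advance one slot; only the slot's owner may transmit.
        owner = self.frame[self.slot % len(self.frame)]
        self.slot += 1
        q = self.queues[owner]
        return q.popleft() if q else None  # slot goes idle if owner has no data

link = TdmLink(["train", "train", "infer", "storage"])
link.enqueue("infer", "I0")
link.enqueue("train", "T0")
link.enqueue("train", "T1")
sent = [link.tick() for _ in range(4)]
# Slots fire in frame order (train, train, infer, storage),
# so sent == ["T0", "T1", "I0", None]
```

The trade-off this illustrates is the classic one for TDM: an idle slot is wasted rather than reclaimed by another class, which is the price paid for fully predictable per-class behavior.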
Based on the PCIe 7.0 specification, the new switch IP is positioned for next-generation AI and data center SoCs that need extreme bandwidth density, stronger traffic management, and the ability to scale without introducing unpredictable performance dips. Rambus highlights support for a wide range of workload profiles, including bandwidth-hungry AI training, latency-sensitive inference, and heavy data movement between compute and NVMe storage.
Rambus also emphasizes that this PCIe 7.0 Switch IP is designed to fit into leading-edge ASIC platforms and to work alongside the rest of its PCIe 7.0 portfolio, which includes controllers, retimers, and debug solutions. The goal is to help customers shorten development cycles and improve time-to-market while still meeting stringent power, performance, and reliability targets required in today’s AI and data center deployments.
With AI systems pushing interconnect fabrics harder than ever, Rambus is betting that PCIe 7.0 switching combined with Time Division Multiplexing can help data move more efficiently, keep latency predictable, and make large-scale architectures easier to design and operate—key advantages as the industry continues its shift toward bigger, more distributed compute infrastructure.