AMD has set out to transform the energy efficiency of AI clusters, targeting a 20-fold improvement by 2030. This ambitious goal is part of AMD’s commitment to making high-performance computing more energy-efficient and sustainable. By improving energy efficiency, AMD aims to make computation more scalable, so that growing AI workloads do not demand a proportional growth in power.
The company recently announced that it had surpassed its 30×25 goal from 2021, which targeted a 30x improvement in energy efficiency for AI training and high-performance computing nodes between 2020 and 2025. This achievement highlights AMD’s ongoing dedication to energy-efficient design.
Looking forward, AMD has its eyes set on a new target: a 20x improvement in rack-scale energy efficiency for AI training and inference by 2030, using 2024 as the baseline year. As AI workloads continue to expand, system-level efficiency gains become crucial. This shift from node-level to rack-scale efficiency, which AMD expects to significantly outpace past industry trends, involves holistic improvements across CPUs, GPUs, memory, and more.
Achieving a 20x efficiency improvement at this scale would have profound effects. Because a 20x gain means the same work can be done with one-twentieth the energy, training a typical AI model in 2025 could be completed on significantly fewer racks, cutting operational electricity use by roughly 95% and reducing carbon emissions for model training from 3,000 to just 100 metric tons of CO2.
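The arithmetic behind the 95% figure can be sketched in a few lines. This is a minimal illustration only: the baseline energy value below is hypothetical, not an AMD figure, and the model simply divides baseline energy by the efficiency-gain factor.

```python
def energy_after_gain(baseline_kwh: float, efficiency_gain: float) -> float:
    """Energy needed for the same workload after an efficiency improvement.

    A 20x efficiency gain means the same work requires 1/20 the energy.
    """
    return baseline_kwh / efficiency_gain

# Hypothetical baseline energy for one large training run (illustrative only).
baseline = 1_000_000.0  # kWh

after = energy_after_gain(baseline, 20.0)
reduction = 1 - after / baseline

print(f"Energy after 20x gain: {after:,.0f} kWh ({reduction:.0%} reduction)")
# A 20x gain leaves 5% of the original energy use, i.e. a 95% reduction.
```

The same division applies to any baseline: whatever the starting energy footprint, a 20x rack-scale efficiency gain leaves one-twentieth of it.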
AMD is committed to pushing the boundaries of what’s possible by prioritizing efficiency alongside performance. As it progresses toward this goal, AMD will provide updates showcasing how these advancements benefit the entire computing ecosystem.