AMD ROCm 7.2.7 Brings Ryzen AI 400 APU Support and Speeds Up Local AI Inference

AMD is pushing local AI to the next level with its latest ROCm updates, adding fresh support for new Ryzen APUs and making it easier than ever to run powerful AI workloads directly on consumer hardware.

At CES 2026, AMD introduced ROCm 7.2.7, and one of the biggest takeaways is official support for the newly announced Ryzen AI 400 “Gorgon Point” APUs. That matters because ROCm is the foundation AMD uses to accelerate AI and compute workloads across its hardware ecosystem. With new APU support arriving quickly, AMD is signaling that edge AI and on-device AI aren’t side projects anymore—they’re becoming core parts of the company’s software strategy.

A major highlight in this ROCm generation is AMD’s focus on real-world AI apps people actually use. The company says it has heavily optimized ROCm for ComfyUI, a popular node-based workflow tool for AI image generation, and claims up to 5x higher performance with ROCm 7 compared to earlier versions. For users experimenting with AI image generation locally, improvements like these can translate into faster iterations, smoother workflows, and less time waiting on renders.

AMD also appears to be leaning harder into mainstream Ryzen and Radeon users. The company notes that Ryzen and Radeon support has doubled over the last year, pointing to a deliberate effort to make ROCm feel more “consumer-ready” rather than something limited to servers and data centers. If that trend continues, more everyday PCs could become capable local AI machines without complicated setups or specialized hardware.

Another big move is AMD’s push to make Windows a first-class platform for ROCm. AMD announced smoother integration with the ONNX path for inference and training, aimed at Windows AI users and PC makers. ROCm support for PyTorch on Windows is also expanding, alongside TheRock Software Package, an open-source build platform for HIP and ROCm. Together, these updates suggest AMD wants developers and OEMs to view Windows not as an afterthought for AI development on AMD hardware, but as a primary target.
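For developers, the practical first step is checking whether a local PyTorch build actually sees a ROCm device. A minimal sketch, assuming a standard PyTorch install: on ROCm builds, `torch.version.hip` is set, and HIP GPUs surface through PyTorch's CUDA-compatible API.

```python
# Sketch: detect whether a ROCm-enabled PyTorch build and GPU are available.
# Assumes nothing beyond the standard library unless torch is installed.
import importlib.util


def rocm_pytorch_status() -> str:
    """Return a short description of local PyTorch/ROCm availability."""
    if importlib.util.find_spec("torch") is None:
        return "pytorch-not-installed"
    import torch
    hip = getattr(torch.version, "hip", None)  # set only on ROCm builds
    if hip is None:
        return "pytorch-without-rocm"
    # On ROCm, HIP devices are exposed through the CUDA-compatible API.
    if torch.cuda.is_available():
        return f"rocm-{hip}-with-{torch.cuda.device_count()}-gpu(s)"
    return f"rocm-{hip}-no-visible-gpu"


if __name__ == "__main__":
    print(rocm_pytorch_status())
```

Running this on a Windows machine with a ROCm-enabled PyTorch wheel should report the HIP version and visible GPU count; on a plain CPU-only install it falls through to the informative fallback strings instead of crashing.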

Perhaps the most interesting claim is how close local AI is getting to cloud-level results. AMD showcased comparisons between open-source models running locally—such as GPT-OSS on Ryzen AI MAX+ APUs—and similar models hosted in the cloud. According to AMD, when measuring with widely used benchmarks like GPQA Diamond and MMLU, local and cloud environments can deliver similar quality. The message is clear: thanks to improvements in ROCm and more capable consumer hardware, running serious AI locally is no longer a compromise—it’s becoming a practical alternative.
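The quality comparison AMD describes boils down to scoring answer accuracy on the same benchmark questions for both deployments. A toy sketch of that scoring step, using hypothetical answer sheets (the data below is illustrative, not AMD's results):

```python
def accuracy(predictions: list[str], gold: list[str]) -> float:
    """Fraction of multiple-choice answers matching the gold answer key."""
    assert len(predictions) == len(gold), "answer sheets must align"
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)


# Hypothetical answer sheets for a 5-question MMLU-style slice.
gold = ["A", "C", "B", "D", "A"]
local_model = ["A", "C", "B", "A", "A"]   # e.g. a model running locally
cloud_model = ["A", "C", "D", "D", "A"]   # e.g. a hosted counterpart

print(accuracy(local_model, gold))  # 0.8
print(accuracy(cloud_model, gold))  # 0.8
```

In this toy case both environments score the same, which is the shape of the claim AMD is making with GPQA Diamond and MMLU: the models may miss different questions, but the aggregate quality is comparable.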

All of this points to one direction: local AI deployment is rapidly becoming more powerful, more accessible, and more realistic for everyday users. With ROCm 7.2.7 expanding APU support, improving popular AI software performance, and strengthening Windows tooling, AMD is positioning itself for a future where on-device AI is mainstream rather than cloud-dependent.