NVIDIA is pushing its RTX AI PC lineup further with a fresh batch of upgrades designed to deliver faster generative AI performance, smoother creator workflows, and better efficiency, all without requiring new hardware. The company is positioning the rollout as a free RTX AI performance upgrade, and it targets two big areas: large language model speedups and faster image generation through new precision support.
One of the headline improvements is a boost to local LLM performance. NVIDIA says users can see up to 40% higher performance in supported language models, including GPT-OSS, Nemotron Nano V2, and other popular lightweight models. For anyone running AI tools on a Windows 11 RTX PC, this translates into quicker responses, faster inference, and a more responsive “local AI assistant” experience, especially when multitasking or working with larger prompts.
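For a rough sense of scale, the arithmetic below shows what a 40% throughput gain means for token generation. The baseline tokens-per-second figure is an illustrative assumption, not an NVIDIA benchmark.

```python
# Illustrative arithmetic only: the baseline throughput is a made-up example
# figure, not a measured NVIDIA result. It shows what "up to 40% higher
# performance" means for local LLM token generation.

baseline_tps = 25.0                  # hypothetical tokens/second before the update
speedup = 1.40                       # "up to 40% higher performance"
updated_tps = baseline_tps * speedup

response_tokens = 1_000              # a longer ~1,000-token response
before_s = response_tokens / baseline_tps
after_s = response_tokens / updated_tps

print(f"Throughput: {baseline_tps:.0f} -> {updated_tps:.0f} tokens/s")
print(f"1,000-token response: {before_s:.1f}s -> {after_s:.1f}s")
```

At the top end of NVIDIA's claim, a response that used to take 40 seconds arrives in under 29, which is the kind of difference that makes a local assistant feel responsive rather than sluggish.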
The second major upgrade is native NVFP4 support arriving in popular creator-focused AI workflows. Specifically, NVFP4 acceleration is being enabled for ComfyUI pipelines tied to Flux.1, Flux.2, and related image-generation models. NVIDIA claims performance gains of up to 4.6x in these workloads, which can significantly reduce render times for generating images, iterating on prompts, or processing batches.
What makes the new low-precision formats, NVFP4 and the related NVFP8, particularly notable is the efficiency angle. NVIDIA notes that these modes can enable much smaller model footprints, up to 60% smaller LLMs in certain cases, while also offloading more of the workload into system memory. That can help free up GPU resources, reduce VRAM pressure, and improve overall stability when running multiple AI apps at once. Compared to older BF16-based workflows, VRAM usage can drop substantially, which is especially helpful for laptops and mainstream RTX AI PCs that don’t have ultra-high memory headroom.
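The size claim tracks with simple bit-width math. The sketch below uses a hypothetical 8-billion-parameter model and the nominal width of each format (BF16 at 16 bits per weight, NVFP4 at 4 bits); real-world savings land closer to NVIDIA's "up to 60%" once quantization scale factors and other overhead are counted.

```python
# Back-of-the-envelope footprint math for a hypothetical 8B-parameter model.
# Per-weight sizes are the nominal widths of each format and ignore the
# scale-factor metadata and activation memory that real deployments carry.

params = 8e9
bf16_gb = params * 16 / 8 / 1e9      # 16 bits per weight -> bytes -> GB
nvfp4_gb = params * 4 / 8 / 1e9      # 4 bits per weight -> bytes -> GB
reduction = 1 - nvfp4_gb / bf16_gb

print(f"BF16 weights:  {bf16_gb:.0f} GB")
print(f"NVFP4 weights: {nvfp4_gb:.0f} GB")
print(f"Nominal reduction: {reduction:.0%}")
```

The nominal 4x reduction (75%) is an upper bound; quantization overhead is why the practical figure NVIDIA quotes is "up to 60%" rather than 75%.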
NVIDIA is also expanding what RTX PCs can do for video generation. A new audio-video model called LTX-2, which generates video with synchronized audio, is being introduced with RTX acceleration. It’s described as a leading open-weights video model and is capable of generating up to 4K video in as little as 20 seconds. With NVFP8 support enabled, NVIDIA says performance can improve by up to 2x, helping creators move from concept to preview much faster.
On top of raw generation speed, NVIDIA is improving video quality with Super Resolution for GenAI video through RTX Video. This feature is set to land in ComfyUI in February and is designed to upscale 720p AI-generated video to 4K with better clarity and detail. NVIDIA highlights major time savings here as well: generating a 10-second 4K video using NVFP8 acceleration and then applying Super Resolution reportedly takes about 3 minutes, compared to around 15 minutes using older methods.
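Taking NVIDIA's reported figures at face value, the comparison works out as follows; the clips-per-hour framing is just an illustrative way to read the same two numbers.

```python
# Comparison of the reported end-to-end times for a 10-second 4K clip:
# ~3 minutes with NVFP8 generation plus Super Resolution, versus ~15 minutes
# with the older workflow. Both figures come from NVIDIA's claims above.

old_min = 15.0
new_min = 3.0
speedup = old_min / new_min
clips_per_hour_old = 60 / old_min
clips_per_hour_new = 60 / new_min

print(f"End-to-end speedup: {speedup:.0f}x")
print(f"10-second 4K clips per hour: {clips_per_hour_old:.0f} -> {clips_per_hour_new:.0f}")
```

That 5x difference matters most for iteration: at 20 clips per hour instead of 4, trying a handful of prompt variations fits in a single working session.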
Rounding out the update list is AI Video Search support coming to Nexa Hyperlink. The goal is to offer RTX-optimized private search across videos, images, and documents—making it easier to find specific moments, assets, or information inside personal libraries while keeping processing local and optimized for RTX hardware.
Taken together, these updates reinforce NVIDIA’s strategy for RTX AI PCs: keep improving AI and creator performance through software optimizations like NVFP4 and NVFP8, expand support in popular tools such as ComfyUI, and deliver meaningful speed and efficiency gains for local LLMs, image generation, and AI video workflows—all as ongoing upgrades for existing RTX owners.