Intel used GDC 2026 to reveal a major step forward in how game textures could be stored and streamed. Intel graphics engineer Marissa Dubois presented Texture Set Neural Compression (TSNC), a newly productized, standalone SDK that turns last year's research prototype into something developers can realistically integrate into production pipelines.
At its core, TSNC is Intel’s take on neural texture compression, designed to shrink texture data far beyond what traditional GPU block compression can manage. Classic formats like BC1 through BC7 are fast and widely supported, but they rely on fixed rules that can’t adapt to the unique patterns inside a game’s art assets. TSNC flips that approach by training a small neural network to understand and reconstruct a specific group of textures, producing much higher compression while keeping quality loss controlled.
Instead of treating every texture independently, TSNC targets an entire material’s texture set at once. That includes the usual physically based rendering maps: diffuse (albedo), normal, roughness, metallic, ambient occlusion, and emissive. The key idea is that these maps share a lot of structure. Edges, patterns, and surface details often line up across channels, creating redundancy that conventional block compression largely ignores. TSNC exploits that shared information by learning a compact latent representation that a small multi-layer perceptron can decode back into the texture data at runtime.
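The decode step described above can be sketched as a tiny two-layer MLP that maps a per-texel latent feature vector to the stacked material channels. The layer sizes, channel counts, and random weights below are illustrative assumptions, not Intel's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: four latent planes of 4 channels each give a 16-dim
# feature per texel, decoded into 10 output channels
# (albedo x3, normal x3, roughness, metallic, AO, emissive).
FEATURES, HIDDEN, OUT = 16, 32, 10

# Placeholder weights; in TSNC these would be trained per texture set.
W1 = rng.standard_normal((HIDDEN, FEATURES)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((OUT, HIDDEN)) * 0.1
b2 = np.zeros(OUT)

def decode_texel(latent_feature):
    """Two-layer MLP decode: latent feature vector -> stacked texture channels."""
    h = np.maximum(W1 @ latent_feature + b1, 0.0)  # ReLU hidden layer
    return W2 @ h + b2

texel = decode_texel(rng.standard_normal(FEATURES))  # one texel's 10 channels
```

Because all material maps come out of one shared latent, the network only has to store their common structure once, which is where the cross-channel redundancy savings come from.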
How TSNC compresses textures: the feature pyramid approach
TSNC’s compression is built around what Intel calls a feature pyramid. In practical terms, it’s a set of four latent-space textures encoded using BC1, arranged at different resolutions. Intel currently offers two main configurations, giving developers a choice between higher quality and maximum compression.
Variant A focuses on preserving quality while still delivering big savings. It uses two full-resolution latent images and two half-resolution ones. With 4K source textures, that means two 4K and two 2K latent images, totaling about 26.8 MB compared to around 256 MB for uncompressed bitmaps. That’s more than 9x compression, nearly doubling what you’d typically expect from standard block compression alone (roughly 4.8x). Intel reports perceptual quality loss of about 5% using the FLIP metric, which tends to show up mostly as small precision loss in normal maps, while other channels remain visually close to the original.
Variant B is the more aggressive option built for situations where memory and storage matter more than perfect fidelity. It steps the latent images down through 1/2, 1/4, and 1/8 resolution levels, pushing compression beyond 17x. The trade-off is that quality loss becomes easier to spot. Intel notes that BC1 block artifacts can appear, particularly in normal maps and in channels like ambient occlusion and roughness. FLIP error rises to around 6–7%, which Intel says is enough to be noticeable. This makes Variant B a better fit for distant assets, secondary materials, or cases where textures won’t be viewed up close.
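The quoted sizes can be roughly reconstructed from the pyramid layout. The constants below are assumptions, not figures from the talk: BC1 stores 0.5 bytes per texel, a full mip chain adds about one third, and Variant B is guessed to hold one latent each at full, 1/2, 1/4, and 1/8 resolution:

```python
MIB = 1024 * 1024
BC1_BYTES_PER_TEXEL = 0.5  # 64 bits per 4x4 block
MIP_FACTOR = 4 / 3         # full mip chain adds ~1/3

def bc1_mib(side):
    """Size in MiB of one square BC1 latent texture, including mips."""
    return side * side * BC1_BYTES_PER_TEXEL * MIP_FACTOR / MIB

# Variant A: two full-resolution (4K) and two half-resolution (2K) latents.
variant_a = 2 * bc1_mib(4096) + 2 * bc1_mib(2048)

# Variant B (assumed layout): one latent each at 4K, 2K, 1K, and 512.
variant_b = sum(bc1_mib(s) for s in (4096, 2048, 1024, 512))

uncompressed = 256.0  # MB, as quoted for the raw texture set
ratio_a = uncompressed / variant_a  # lands near the quoted ~9x
ratio_b = uncompressed / variant_b  # lands beyond the quoted 17x
```

Under these assumptions Variant A comes out to roughly 26.7 MiB, matching Intel's ~26.8 MB figure closely, which suggests the quoted numbers include mip chains.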
From research prototype to SDK: what changed since GDC 2025
Intel’s original prototype was built using PyTorch, but TSNC has now been rewritten from the ground up using Slang compute shaders. One of the practical benefits is backend flexibility: the same decompressor logic can target different environments depending on where a studio wants the decoding to happen. Whether a developer is working in Unreal Engine, a custom engine, or even choosing CPU-based decompression for certain workflows, TSNC is designed to adapt without requiring entirely separate decoder implementations.
On the GPU side, Intel supports Microsoft’s DirectX 12 Cooperative Vectors API. This allows TSNC to tap into Intel Arc’s XMX matrix cores for accelerated neural inference (available across A-series and B-series Arc GPUs). For hardware that doesn’t have XMX support, TSNC includes a fallback path using standard fused multiply-and-add (FMA) operations, which can run on CPUs and on non-Intel GPUs as well.
Four deployment options for game developers
Intel outlined four ways developers can deploy TSNC, each balancing storage savings, VRAM usage, and runtime cost differently:
1) Install-time decompression: Games ship with compressed textures, then decompress them during installation. This helps reduce download size and distribution bandwidth, but textures end up uncompressed on the user’s drive afterward.
2) Load-time decompression: Textures remain compressed on disk, then decompress into VRAM when the game loads. This can reduce install size and also ease VRAM pressure during loading sequences.
3) Stream-time decompression: Designed to pair with texture streaming systems. Assets decompress on demand as needed. This approach can offer strong savings for both disk and memory, but introduces runtime inference overhead while streaming.
4) Sample-time decompression: The most aggressive and most technically interesting option. Textures stay compressed in VRAM permanently and are decoded per pixel inside the shader during rendering. This can dramatically reduce VRAM usage, but it adds constant inference work during gameplay.
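The four options above differ mainly in where the compressed representation lives and when the neural decode cost is paid. A minimal sketch of that trade-off space (the field names and summaries are my own, not SDK terminology):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeploymentMode:
    name: str
    disk_compressed: bool   # textures stay neurally compressed on disk
    vram_compressed: bool   # textures stay neurally compressed in VRAM
    inference_cost: str     # when the neural decode work happens

MODES = [
    DeploymentMode("install-time", False, False, "once, during installation"),
    DeploymentMode("load-time",    True,  False, "once, at game load"),
    DeploymentMode("stream-time",  True,  False, "on demand, while streaming"),
    DeploymentMode("sample-time",  True,  True,  "per pixel, every frame"),
]
```

Reading down the list, each mode keeps the data compressed for longer in exchange for paying the decode cost later and more often; only sample-time keeps textures compressed all the way into VRAM.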
Which approach makes sense depends on the game, target hardware, and engine architecture. For example, open-world titles with heavy streaming demands might favor stream-time, while VRAM-limited platforms could benefit most from sample-time if the performance budget allows it.
Performance numbers: sample-time decoding looks more realistic than expected
Intel also shared inference benchmarks from a Panther Lake laptop with B390 integrated graphics, measured with a full-screen 1080p compute-shader workload. The per-pixel decode costs were:
FMA path: 0.661 nanoseconds per pixel
XMX accelerated path: 0.194 nanoseconds per pixel
That’s roughly a 3.4x speedup when using hardware-accelerated matrix operations. What stands out is that these results come from integrated graphics, suggesting that even the demanding sample-time approach may be more practical than many developers would assume. Intel notes that discrete GPUs should see even lower overhead.
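Scaling the per-pixel figures up to a full 1080p frame gives a sense of the absolute budget involved (this extrapolation is mine, assuming every pixel pays the decode cost once per frame):

```python
# Full-frame decode cost at 1080p from Intel's quoted per-pixel figures.
PIXELS_1080P = 1920 * 1080  # 2,073,600 pixels

fma_ns_per_px = 0.661  # standard fused multiply-add path
xmx_ns_per_px = 0.194  # XMX matrix-core accelerated path

fma_ms = fma_ns_per_px * PIXELS_1080P / 1e6  # ns -> ms, ~1.37 ms/frame
xmx_ms = xmx_ns_per_px * PIXELS_1080P / 1e6  # ~0.40 ms/frame

speedup = fma_ns_per_px / xmx_ns_per_px      # ~3.4x
```

Roughly 0.4 ms per frame on the XMX path is a modest slice of a 16.7 ms (60 fps) frame budget, which is why sample-time decoding looks more plausible than its description might suggest.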
Release plans for the TSNC SDK
Intel says an alpha version of the Texture Set Neural Compression SDK is planned for later this year, with beta and public releases to follow. Exact dates haven’t been finalized, but the direction is clear: Intel wants TSNC to move from a promising graphics research concept into a tool studios can actually ship with, potentially changing how high-resolution PBR texture sets are stored, streamed, and rendered in future PC games.