
At SIGGRAPH & HPG 2025, Intel presented significant advancements in visual fidelity and performance for both integrated and discrete GPUs.
Enhancing Visual Quality: Intel’s Strategic Focus on Integrated GPUs
The landscape of integrated graphics processing units (iGPUs) has transformed dramatically. A decade ago, iGPUs served primarily for media playback, and gaming experiences were largely unsatisfactory. However, recent advancements have allowed many integrated solutions to approach the performance levels of entry-level discrete GPUs. Intel is now dedicated to further enhancing visual quality and performance in these units.
To reach these objectives for future generations of iGPUs and their discrete counterparts, Intel’s strategic initiatives include:
- Enhancing efficiency in Path Tracing
- Exploring Neural Graphics technologies
- Introducing innovative physically-based effects, such as fluorescence
The primary aim is to provide high-fidelity visual effects, particularly Path Tracing, optimized for power-efficient devices leveraging iGPUs. Path Tracing is computationally demanding because of the many light paths it must simulate per pixel, and it requires noise reduction to produce a clean image. Intel’s approach involves Resampled Importance Sampling, which improves visual quality by up to a factor of ten.

The following innovations contribute to this enhanced quality:
This recent work, which was accepted at SIGGRAPH 2025, advances real-time path tracing by refining Resampled Importance Sampling. The technique organizes samples into local histograms and applies Quasi-Monte Carlo sampling with antithetic patterns to minimize noise. Combined with blue noise, this method yields a significant visual improvement, up to ten times better results.
This advancement builds upon the leading techniques employed in renowned AAA titles like Cyberpunk 2077, bridging the gap between high-end gaming experiences and low-power hardware.
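To make the idea concrete, here is a minimal sketch of classic Resampled Importance Sampling, the building block the work above refines. This is an illustrative toy, not Intel’s implementation: it draws several cheap candidate samples, weights each by a target function divided by the source density, and resamples one candidate proportionally to its weight.

```python
import random

def ris_estimate(f, p_hat, sample_source, pdf_source, M):
    """One-sample Resampled Importance Sampling (RIS) estimate of an integral.

    Draws M candidates from an easy source distribution, weights each by
    w_i = p_hat(x_i) / pdf_source(x_i), picks one proportionally to its
    weight, and returns f(y) / p_hat(y) * (1/M) * sum(w_i).
    """
    xs = [sample_source() for _ in range(M)]
    ws = [p_hat(x) / pdf_source(x) for x in xs]
    w_sum = sum(ws)
    if w_sum == 0.0:
        return 0.0
    # Resample: pick one candidate with probability proportional to its weight.
    r = random.uniform(0.0, w_sum)
    acc = 0.0
    y = xs[-1]
    for x, w in zip(xs, ws):
        acc += w
        if r <= acc:
            y = x
            break
    return f(y) / p_hat(y) * (w_sum / M)

# Example: estimate the integral of x^2 over [0, 1] (exact value 1/3),
# using uniform candidates and a target p_hat proportional to the integrand.
random.seed(0)
N = 20000
est = sum(ris_estimate(lambda x: x * x,      # integrand f
                       lambda x: x * x,      # unnormalized target p_hat
                       random.random,        # uniform source sampler
                       lambda x: 1.0,        # uniform source pdf
                       M=8) for _ in range(N)) / N
```

When the target `p_hat` closely matches the integrand, the selected samples land where they matter most, which is why RIS-based methods reduce noise so effectively at low sample counts.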
Despite numerous obstacles, the evolution of technology is evident—from the initial experiments with simple scenes to the complex, animated Jungle Ruins scene with intricate elements like dynamic vegetation and lighting transitions, fully path traced at 1 sample per pixel (1 SPP), achieving a steady 30 frames per second at 1440p on the Intel B580 GPU.
In addition to these developments, Intel revealed the second iteration of Open Image Denoise, an AI-accelerated denoising library for ray-traced images that is available to a wider audience. The first version of this open-source library is well-regarded in the industry, and the update promises enhanced cross-vendor support, improving compatibility across all major GPUs, including those from Intel, NVIDIA, and AMD.
Intel is actively working on the next version of the denoiser, which will integrate a neural network architecture to further enhance visuals and boost performance. A recent demonstration highlighted “Path Tracing a Trillion Triangles” running on the Intel Arc B580 GPU at 1440p, delivering a stable 30 FPS.
Performance and image quality are directly related to the number of rays processed at each phase of path tracing.
To minimize compute demands and memory usage, we employ 1 SPP and a single ray for each bounce. However, due to the inherent variability of path tracing, the resulting images may exhibit noticeable noise. Each pixel’s rendering is based on one random light path, leading to significant fluctuations in brightness and color, particularly in complex lighting conditions such as indirect illumination and reflections. Our solution involves using a spatiotemporal joint neural denoising and supersampling model to clean up the noise and enhance details.
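The variance problem described above is easy to demonstrate. The toy estimator below (an illustrative stand-in, not a real renderer) models a pixel where one random path occasionally hits a bright light: the expected value is correct on average, but a single sample per pixel swings wildly, and averaging more samples only shrinks the noise by the square root of the sample count.

```python
import random
import statistics

def radiance_sample():
    """Toy per-pixel path tracing estimate: one random path either finds a
    small bright light (probability 0.1, value 10) or misses it (value 0).
    The expected pixel value is 1.0, but any single sample is far from it."""
    return 10.0 if random.random() < 0.1 else 0.0

def pixel_value(spp):
    """Average spp independent path samples for one pixel."""
    return sum(radiance_sample() for _ in range(spp)) / spp

random.seed(1)
for spp in (1, 16, 256):
    vals = [pixel_value(spp) for _ in range(2000)]
    print(spp, "spp -> pixel std dev:", round(statistics.stdev(vals), 2))
```

At 1 SPP the per-pixel standard deviation is about 3, triple the true pixel value, which is exactly why a neural denoiser is needed instead of brute-force sampling: reaching the same noise level by sampling alone would require hundreds of rays per pixel.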

Key highlights from this impressive demonstration include:
- Reducing the computational cost of path tracing for real-time performance is a prominent challenge, which is a current focus of research within both industry and academia. Throughout our series of blog posts, we share insights on real-time path tracing for the animated one-trillion-triangle Jungle Ruins scene, successfully reaching 30 FPS at 1440p with an Intel Arc B580 GPU.
- This blog series emphasizes practical applications of 1 SPP denoising and supersampling, including metrics for visual quality assessment, managing animations in high-complexity scenes featuring one trillion instanced triangles, and the trade-offs involved in content creation and performance optimization.

Notably, Intel aims to enhance detail reconstruction and noise reduction using a spatiotemporal joint neural denoising and supersampling model. This approach bears similarities to NVIDIA’s Ray Reconstruction technology from DLSS 3.5, as well as AMD’s upcoming Ray Regeneration feature within its FSR Redstone technology.
- Fine Texture Details: Denoisers often produce smoother results because they are optimized for noise reduction. However, this can cause a loss of fine details, especially when distinguishing between high-frequency noise and the actual signal becomes challenging.
- Flickering: Although individual denoised frames appear clean, slight variations from frame to frame can lead to perceptible shimmering over time, particularly due to changing lighting or scene dynamics. A balanced temporal loss can stabilize outputs, but overuse can result in ghosting artifacts.
- Moiré Patterns: These arise from undersampling high-frequency details, creating visual interference between scene detail and pixel grids. Training the model with diverse samples addressing such patterns can improve performance in denoising.
- Shadow Reconstruction: Accurately reconstructing shadows remains complex without motion vectors. Training with samples exhibiting varying lighting conditions allows improved shadow reproduction by the model.
- Disocclusion: Challenges arise in areas previously occluded but made visible through movement. The model struggles to reconstruct these due to inconsistent patterns, sometimes resulting in ghosting artifacts. Enriching training data with representative samples aids in managing this issue.
- Reflections: Like shadows, reflection reconstruction relies on noisy color input. Incorporating the first non-specular hit in auxiliary buffers greatly enhances reflection quality, especially for reflective surfaces.
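The flickering trade-off above, where a temporal term stabilizes frames but overuse causes ghosting, can be sketched as a training loss. The following is a hypothetical illustration (the function names and the weight value are assumptions, not Intel’s training recipe): a spatial reconstruction term plus a temporal term that penalizes frame-to-frame changes in the prediction that the reference sequence does not exhibit.

```python
def l1(a, b):
    """Mean absolute difference between two flattened images."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def denoiser_loss(pred, target, prev_pred, prev_target, temporal_weight=0.25):
    """Spatial reconstruction loss plus a temporal consistency term.

    The temporal term compares the predicted frame-to-frame change against
    the reference frame-to-frame change, so legitimate motion is not
    penalized but spurious flicker is. temporal_weight trades flicker
    suppression against the risk of ghosting artifacts."""
    spatial = l1(pred, target)
    temporal = l1([p - q for p, q in zip(pred, prev_pred)],
                  [t - u for t, u in zip(target, prev_target)])
    return spatial + temporal_weight * temporal
```

With this structure, a prediction that oscillates while the reference stays still incurs a strictly higher loss than one with the same spatial error but stable frames, which is the behavior a “balanced temporal loss” aims for.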
To further empower high-performance visual fidelity in low-power GPUs, Intel has introduced hardware-accelerated Texture Set Neural Compression (TSNC), compatible with DirectX Cooperative Vectors. This technology maximizes the potential of AI-driven hardware features in contemporary chips, achieving up to a 47x performance enhancement compared to traditional compute-focused implementations using FMA (Fused Multiply Add). Here are some noteworthy performance metrics:
- Intel Arc 140V (Lunar Lake): 2.6 ms (BC6 baseline) / 2.1 ms (TSNC with Cooperative Vectors)
- Intel Arc B580 (Battlemage): 0.55 ms (BC6 baseline) / 0.55 ms (TSNC with Cooperative Vectors)

Intel’s TSNC demonstrates performance levels equal to or superior to conventional BC6 compression while utilizing a smaller texture memory footprint, thus optimizing resource usage and enhancing overall performance.
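The core idea behind neural texture compression can be sketched as follows. This is a simplified illustration under assumed details, not Intel’s actual TSNC architecture: a texture is stored as a small latent feature vector per texel, and a tiny shared MLP decodes those features back to RGB on demand. On hardware, Cooperative Vectors let the MLP’s matrix multiplies run on the GPU’s matrix/AI units rather than as scalar FMA loops, which is where the reported speedups come from.

```python
def relu(v):
    """Elementwise ReLU activation."""
    return [max(0.0, x) for x in v]

def linear(v, weights, bias):
    """Dense layer: weights is a list of rows, one row per output channel."""
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

def decode_texel(latent, w1, b1, w2, b2):
    """Decode one texel's RGB from its compressed latent feature vector
    using a two-layer MLP whose weights are shared across the texture set.
    Only the small latent grid and the MLP weights need to be stored."""
    return linear(relu(linear(latent, w1, b1)), w2, b2)

# Tiny hand-picked weights purely to show the data flow
# (4 latent features -> 4 hidden units -> 3 RGB channels).
w1 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
b1 = [0.0, 0.0, 0.0, 0.0]
w2 = [[1, 1, 1, 1], [0.5, 0, 0, 0], [0, 0, 1, 0]]
b2 = [0.0, 0.0, 0.0]

rgb = decode_texel([0.5, -0.25, 1.0, 0.0], w1, b1, w2, b2)
```

Because the decoder is evaluated per texel at sample time, the memory saving comes from the latent grid being far smaller than the uncompressed texture set, while quality depends on how well the shared MLP reconstructs all textures in the set.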
The insights shared through Intel’s recent demonstrations and publications highlight the company’s forward trajectory. Intel is moving past its reputation as a graphics also-ran and showing itself to be a genuine innovator in the GPU arena. With architectures like Xe2, Intel is well-positioned as a formidable player in both the entry-level discrete GPU market and integrated solutions. These advancements could reshape the iGPU segment, and their upcoming implementations, alongside Intel’s continued commitment to open-source development, are well worth anticipating.