
Accelerating Real-Time Path Tracing

January 28, 2014

We’ve been told that real-time ray tracing is coming soon. It can produce superb renderings with unmatched levels of realism. It works by simulating light, and leads to much more elegant rendering architectures. It’s the future, but unfortunately, it’s been coming soon for several years now. Like thorium reactors, it’s one of those technologies that just seems to make so much sense, but that isn’t getting adopted, probably because we already have something that works “well enough”, and we aren’t yet willing to invest the effort required to switch to a radically different technology. There’s too much inertia; we haven’t yet reached the breaking point. Still, the potential is there, and with the advent of GPGPU, real-time global illumination is very close to becoming a reality in the world of gaming, as demonstrated by the Brigade renderer.

I should point out that ray tracing, path tracing, and global illumination are not the same things. Global illumination is a simulation of the entire light transport within a scene, which accounts for the ways in which light can illuminate objects indirectly, through several reflections between multiple surfaces. It produces superior renderings because it accounts for phenomena such as soft shadows, caustics, and color bleeding. Path tracing is an extension of ray tracing that achieves global illumination, usually by tracing light rays in reverse, from the eye back to the light sources in a scene. Because there is an infinite number of paths light could take through a scene, this is done through stochastic (Monte Carlo) sampling, which yields an approximation of the true light transport.

The way images are currently generated with path tracing is to send many rays through each pixel of the image. These rays then scatter randomly through the scene in ways that obey the properties of the materials they encounter, and eventually reach light sources. The problem is that the resulting images contain significant visual noise (grain), and because Monte Carlo noise only falls off with the square root of the sample count, it takes a large number of rays per pixel to eliminate it (often 256 or more rays per pixel). As you can imagine, this takes a tremendous amount of computational power. Generating a 4-megapixel image could easily require tracing a billion rays, with each ray bouncing several times through the scene. This is very difficult to achieve at real-time framerates, even with today’s GPUs.
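
To make the cost concrete, here’s a minimal sketch of that brute-force sampling loop. This is illustrative only: traceRay is a hypothetical stand-in for a real path tracer, and the resolution and sample count are just assumptions for the example.

```cpp
// Minimal sketch of brute-force path tracing: average many independent
// ray samples for every pixel. traceRay() is a hypothetical stand-in for
// a real path tracer that would scatter a ray through the scene.
#include <cstdio>
#include <random>
#include <vector>

const int WIDTH  = 256;
const int HEIGHT = 256;
const int SAMPLES_PER_PIXEL = 256; // the cost driver discussed above

std::mt19937 rng(42);
std::uniform_real_distribution<double> jitter(0.0, 1.0);

// Stand-in: returns the radiance carried by one random ray through (px, py).
// A real implementation would intersect the scene and bounce recursively.
double traceRay(double px, double py) {
    return 0.5 + 0.1 * jitter(rng); // fake noisy radiance
}

int main() {
    std::vector<double> image(WIDTH * HEIGHT);
    long raysTraced = 0;
    for (int y = 0; y < HEIGHT; ++y) {
        for (int x = 0; x < WIDTH; ++x) {
            double sum = 0.0;
            for (int s = 0; s < SAMPLES_PER_PIXEL; ++s) {
                // Jitter the sample position within the pixel (anti-aliasing).
                sum += traceRay(x + jitter(rng), y + jitter(rng));
                ++raysTraced;
            }
            image[y * WIDTH + x] = sum / SAMPLES_PER_PIXEL;
        }
    }
    std::printf("traced %ld rays for %d pixels\n", raysTraced, WIDTH * HEIGHT);
    return 0;
}
```

At 256 samples per pixel, even this small 256×256 image costs almost 17 million primary rays; scale that up to 4 megapixels and you arrive at the billion-ray figure above.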

Lately, I’ve started thinking that maybe there’s a more efficient way to do this. Tracing hundreds of rays per pixel seems enormously wasteful. If there’s one thing I learned from my computer science education, it’s that algorithmic optimizations can pay off orders of magnitude more than clever little code tweaks. We’re rendering each pixel independently, when in fact we know that the colors of pixels are rarely independent variables, because most images contain structured patterns. Maybe path tracing ought to take a hint from the world of image compression, and try to generate images using less information, fewer samples. It’s not just about the amount of data or raw computational power you have; using it intelligently matters as well. More concretely, I’ve been wondering if it might be possible to draw direct inspiration from JPEG compression, and trace fewer rays to estimate DCT coefficients for multi-pixel patches, instead of shading individual pixels. I’m speculating here, but I believe that such an approach might be able to produce less noisy images from far fewer samples (e.g., 20 to 50 times fewer).
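
To make the idea less abstract, here’s a hypothetical sketch of what this could look like. Everything in it is an assumption on my part, not a worked-out algorithm: it Monte Carlo estimates the K×K lowest-frequency DCT coefficients of an 8×8 patch from only 16 samples, then reconstructs the patch with the inverse transform. tracePixel is again a stand-in for the actual path tracer.

```cpp
// Hypothetical sketch: estimate the low-frequency DCT coefficients of one
// 8x8 image patch from a handful of Monte Carlo samples, then reconstruct
// the patch with the inverse transform. tracePixel() is a stand-in for a
// real path tracer; the sample counts are arbitrary assumptions.
#include <cmath>
#include <cstdio>
#include <random>

const double PI = 3.14159265358979323846;
const int N = 8;   // patch size: N x N pixels
const int K = 4;   // keep only the K x K lowest-frequency coefficients
const int S = 16;  // samples per patch, vs. N * N * 256 for brute force

// Orthonormal 2D DCT-II basis function for frequency (u, v) at pixel (x, y).
double dctBasis(int u, int v, int x, int y) {
    double cu = (u == 0) ? std::sqrt(1.0 / N) : std::sqrt(2.0 / N);
    double cv = (v == 0) ? std::sqrt(1.0 / N) : std::sqrt(2.0 / N);
    return cu * cv * std::cos((2 * x + 1) * u * PI / (2 * N))
                   * std::cos((2 * y + 1) * v * PI / (2 * N));
}

// Stand-in for tracing one ray through pixel (x, y): a fake smooth scene.
double tracePixel(int x, int y) {
    return 0.5 + 0.5 * std::sin(0.3 * x) * std::cos(0.2 * y);
}

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> pick(0, N - 1);

    // Trace S rays at random pixel positions inside the patch.
    int sx[S], sy[S];
    double sval[S];
    for (int i = 0; i < S; ++i) {
        sx[i] = pick(rng);
        sy[i] = pick(rng);
        sval[i] = tracePixel(sx[i], sy[i]);
    }

    // Monte Carlo estimate of each low-frequency coefficient:
    // c(u,v) = sum over all N*N pixels of f(x,y) * B(u,v,x,y),
    // estimated as (N*N / S) * sum over the S random samples.
    double coef[K][K] = {};
    for (int u = 0; u < K; ++u) {
        for (int v = 0; v < K; ++v) {
            double sum = 0.0;
            for (int i = 0; i < S; ++i)
                sum += sval[i] * dctBasis(u, v, sx[i], sy[i]);
            coef[u][v] = sum * double(N * N) / S;
        }
    }

    // Inverse DCT with the high-frequency coefficients left at zero.
    for (int y = 0; y < N; ++y) {
        for (int x = 0; x < N; ++x) {
            double p = 0.0;
            for (int u = 0; u < K; ++u)
                for (int v = 0; v < K; ++v)
                    p += coef[u][v] * dctBasis(u, v, x, y);
            std::printf("%5.2f ", p);
        }
        std::printf("\n");
    }
    return 0;
}
```

With 16 samples instead of 8 × 8 × 256 = 16,384, that’s three orders of magnitude fewer rays per patch; whether the variance of such an estimator stays low enough, and whether patch boundaries produce JPEG-like blocking artifacts, is exactly the kind of thing that would need to be tested.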

3 Comments
  1. Miguel Lemos

    Very smart! I will be following your advances!

  2. It’s a great idea, and it’s called adaptive sampling. It’s commonly used in ray tracing and fractal generation.
