Baking Ambient Occlusion Maps on the GPU

Despite the trend toward screen-space ambient occlusion, static ambient occlusion maps are still handy: they improve the realism of non-deformable objects like cars or spaceships at essentially no runtime cost. However, baking them in Blender is a chore because raytracing a proper solution on the CPU takes ages. I've been toying with the idea of using the GPU to speed this up for a while now, but only recently figured out how to put it all together and make it work. Here's how it works at a high level.

I iterate over the following steps many times, each time from a different camera position on a hemisphere above the object (to emulate the sky):

1. First pass: render the object from the camera position, writing depth values to a depth texture. This will be used to cast shadows in the next step.

2. Second pass: render the object again, but this time the vertex shader transforms the vertices into the UV map's layout. In the fragment shader, however, I test each fragment's original (non-UV-layout) position against the depth texture to determine occlusion. I write the z component of the screen-space normal (or zero, if the depth test says the fragment is occluded) to a color texture.

3. Third pass: accumulate the color texture (scaled by 1 / number of iterations) into an encoded RGB color texture. (A float texture would have worked here instead.)
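To make the camera-placement and accumulation-encoding steps concrete, here's a minimal Python sketch. The golden-ratio spiral for spreading sample directions over the hemisphere and the base-256 packing scheme are my own illustrative choices; the post doesn't specify which distribution or encoding it actually uses.

```python
import math

def hemisphere_dirs(n):
    """Spread n sample directions evenly over the upper hemisphere
    (the "sky") using a golden-ratio spiral; each direction is a
    candidate camera position for one bake iteration."""
    golden = math.pi * (3.0 - math.sqrt(5.0))
    dirs = []
    for i in range(n):
        z = (i + 0.5) / n              # height in (0, 1): upper hemisphere only
        r = math.sqrt(1.0 - z * z)     # radius of that z-slice of the sphere
        theta = golden * i             # spiral angle around the up axis
        dirs.append((r * math.cos(theta), r * math.sin(theta), z))
    return dirs

def encode_rgb8(x):
    """Pack x in [0, 1) into three 8-bit channels as a base-256
    fixed-point fraction -- one way to keep precision when
    accumulating into an RGB texture instead of a float texture."""
    i = int(x * (1 << 24))
    return (i >> 16 & 255, i >> 8 & 255, i & 255)

def decode_rgb8(r, g, b):
    """Inverse of encode_rgb8."""
    return ((r << 16) | (g << 8) | b) / float(1 << 24)
```

With 784 iterations, each pass would add its occlusion sample scaled by 1/784 into the accumulator before the final value is packed.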

The result is a nice-looking 1024x1024 ambient occlusion map that takes 3.9 seconds to generate on my aging NVIDIA 7900 GT. I used 784 iterations, which seems to be the "sweet spot" for this method. Larger outputs are easy, up to whatever texture size your graphics card supports. I've attached an example. Notice how the left and right doors have darker areas around where the side mirrors attach. Neat!

[Image: GPU-generated ambient occlusion map]

