July 3, 2012

Dream Build Play Tech Postmortem

by Niklas Hansson

At the end of a deadline for any project a certain amount of tiredness always sets in, but more than that a feeling of disappointment: “Was that all we did?”, “Couldn’t we have done more?”. The bigger the project, the more pronounced these feelings become.

With Project Temporality we want to prove that XNA can be used to create lush, XBLA-style 3D environments. We hope that the steps we have taken along that path will inspire others and help show the potential of XNA and managed development: you really can do whatever you put your mind to. And we hope you agree that we have come at least part of the way towards our goal; there is still a long way to go, but it now seems possible.

So we want to talk a bit about the techniques we used in PT. To make our visuals distinct and not just standard “sci-fi pretty” we had to make a number of fundamental technical decisions. One goal of our rendering technology was to avoid, as much as possible, the hardness and harshness that much real-time graphics have in their lighting and edges. This was really important to us, as we wanted a vibrant but soft look for the game.

One of our base techniques is exponential shadow mapping; that's the reason why our shadows look soft and smooth rather than razor sharp or, worse, pixelated, as we are used to in most games. We will upload an example of the XNA shadow mapping sample ported to exponential shadow maps later on; we just have to write some extra explanatory text. Exponential shadow mapping (ESM), just like variance shadow mapping, builds on the idea that if we blur the shadow map texture itself, instead of blurring in the per-pixel lookup as Percentage Closer Filtering (PCF) does, we can apply a much larger blur and thus create smoother transitions than we could otherwise afford. It also has a really nice property: as long as you don't update the shadow buffer you don't need to redo the blurring, which makes the soft shadows almost free on non-updated shadow maps. The blurring can also be done as a separable filter, which plays to the strengths of the Xbox 360 platform with its low ALU performance.
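To give a concrete idea of how little is involved, here is a minimal HLSL sketch of the core ESM idea, not our production code; the sampler name and the constant k are placeholders for the example:

```hlsl
// Exponential shadow mapping sketch (HLSL, shader model 3.0 era).
// Assumed names: ShadowSampler, k (exponential warp constant) -- illustrative only.

static const float k = 80.0f; // higher k = sharper falloff at the shadow edge

// Pass 1: when rendering the shadow map, store exp(k * depth) instead of raw depth.
float4 StoreShadowPS(float depth : TEXCOORD0) : COLOR0
{
    return exp(k * depth); // depth in [0,1], light space
}

// (The stored map is then blurred with a separable Gaussian in two passes.)

// Pass 2: shadow test against the pre-blurred exponential map.
float ShadowTerm(sampler2D ShadowSampler, float2 shadowUV, float receiverDepth)
{
    float occluder = tex2D(ShadowSampler, shadowUV).r;    // blurred exp(k * d_occluder)
    return saturate(occluder * exp(-k * receiverDepth));  // approx exp(k * (d_occ - d_recv))
}
```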

These images are both from the XNA shadow mapping sample using a 2048×2048 shadow texture. The sample itself could benefit from numerous improvements, but it still works well for this comparison, as both versions suffer from the same problems. The one difference I have added is that I allow the ESM shadow to go to black instead of just multiplying by 0.5. With the old solution the shadow looks low-res and jagged, but with ESM it looks smooth, and it can be made to look even better with some tweaking; this was enough for a sample, though. For our goal of a soft, smooth world without jaggies, using ESM was a no-brainer, especially since it doesn't double the shadow texture usage like variance shadow mapping does.

Another technique that is a big part of our look is our temporally reprojected, multi-resolution screen-space ambient occlusion (SSAO) and bounced light solution. SSAO is really cool and helps a lot with the visuals, but it is not without its share of problems, one being that it only picks out creases and small-scale occlusion. If there is occlusion at a larger scale, the algorithm has trouble handling it, because you would need a huge number of samples to get good coverage.

We solved this by performing SSAO at three different resolutions, down-sampling parts of our G-buffer to each of them. This allows us to use much more far-reaching SSAO without running into a sample limit, because we keep much better cache coherency on the GPU: the samples are still close to each other in the texture. Down-sampling also scales away small ridges and other details that don't matter for the large-scale occlusion. We then combine the data from the three resolutions to create our final ambient occlusion term.
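As an illustration of that last step, here is a minimal HLSL sketch of combining the three resolutions; the sampler names and the multiplicative combination are assumptions for the example, not the exact formula we use:

```hlsl
// Multi-resolution AO combine sketch (HLSL). Sampler names and the
// combination below are illustrative assumptions, not the exact PT code.
float CombineAO(sampler2D AOFull, sampler2D AOHalf, sampler2D AOQuarter, float2 uv)
{
    float aoFine   = tex2D(AOFull,    uv).r; // small-scale creases, full resolution
    float aoMedium = tex2D(AOHalf,    uv).r; // mid-range occlusion, half resolution
    float aoCoarse = tex2D(AOQuarter, uv).r; // large-scale occlusion, quarter resolution

    // Multiplying the terms lets each scale darken independently; other blends
    // (min, weighted average) are equally plausible choices.
    return aoFine * aoMedium * aoCoarse;
}
```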

However, even with this technique it would cost too much to run on the Xbox 360 at good sample quality, so we also use a temporal reprojection technique that tries to match a pixel on the screen with a pixel from the previous frame and reuse that frame's SSAO data where possible. This lets us use a huge number of samples, because we can amortize the cost over multiple frames. The way this works is that we save data from the last frame; then, for each pixel on the screen, we calculate a world-space position and reproject it using last frame's view and projection matrices to find the corresponding pixel on last frame's screen. We verify that it really is the same surface point by comparing the two positions (done using depth, for cost reasons) and then decide whether to use the old value or not. In practice we blend the old value with a new one and rotate the sample pattern every frame, which means that even with only 4 samples per frame we can reach 32- or even 64-sample quality, and if we raise the per-frame sample count we can push the quality even further.
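A rough HLSL sketch of that reprojection and validation step could look like the following; the matrix and sampler names, the depth tolerance and the blend weight are placeholders, not our actual values:

```hlsl
// Temporal reprojection sketch for SSAO accumulation (HLSL).
// All names and constants here are illustrative assumptions.

float4x4  PrevViewProjection;  // last frame's view * projection
sampler2D PrevAOSampler;       // AO result from the previous frame
sampler2D PrevDepthSampler;    // linear (view-space) depth from the previous frame

float TemporalAO(float3 worldPos, float currentAO)
{
    // Reproject the world position into last frame's clip space.
    float4 prevClip = mul(float4(worldPos, 1.0f), PrevViewProjection);
    float2 prevUV   = prevClip.xy / prevClip.w * float2(0.5f, -0.5f) + 0.5f;

    float storedDepth = tex2Dlod(PrevDepthSampler, float4(prevUV, 0, 0)).r;
    float prevAO      = tex2Dlod(PrevAOSampler,    float4(prevUV, 0, 0)).r;

    // History is valid only if the pixel was on screen last frame and its stored
    // depth matches the reprojected view-space depth (prevClip.w).
    bool onScreen   = all(saturate(prevUV) == prevUV);
    bool depthMatch = abs(storedDepth - prevClip.w) < 0.02f * prevClip.w;

    // Blend towards the history when it is valid, otherwise keep this frame's AO.
    return (onScreen && depthMatch) ? lerp(currentAO, prevAO, 0.875f) : currentAO;
}
```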

We hope you agree that the effect of the technique is very visible in the right image, as the left one looks very flat, and we also believe you will find its softness and range quite different from normal SSAO techniques. We definitely feel it was worth the time we spent tweaking it and working hard to get it running on a 360. We should mention that our technique uses both normals and depth for every pixel and can be seen as a simplified model of Bunnell's disc-based algorithm; it is based on Practical SSAO but with a lot of tweaks and fixes. And of course the multi-resolution approach makes a huge difference.

Another important part of the technique is our per-pixel re-lighting scheme. To show this one off I have opted to use the classic Crytek Sponza scene. As this is an extra per-pixel technique, the results obviously won't be as impressive as, for example, Crytek's LPV technique, but it is just one part of what gives PT that special look and feel. It could be done with just a small change to our SSAO code, but actually making it fit into the pipeline and work properly, so that it didn't just make corners brighter, was a lot of work. A naive implementation is actually worse than no implementation at all. After we have applied direct and emissive light we do our SSAO pass, and while we calculate SSAO for each pixel we also calculate how much of the other pixels' light would be reflected onto it (under the assumption that the pixels are fixed-area discs in 3D space).
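In the spirit of that disc assumption, here is a hedged HLSL sketch of what a single bounce sample could contribute inside the SSAO loop; the function, its parameters and the discArea constant are illustrative, not our production code:

```hlsl
// Per-pixel bounced light sketch using a disc-to-disc form factor approximation
// (HLSL). All names and constants are assumptions for illustration.

float3 BounceFromSample(float3 receiverPos, float3 receiverNormal,
                        float3 emitterPos,  float3 emitterNormal,
                        float3 emitterRadiance, float discArea)
{
    float3 d     = emitterPos - receiverPos;
    float  dist2 = dot(d, d) + 1e-4f;          // avoid division by zero
    float3 dir   = d * rsqrt(dist2);

    // Both discs must face each other for light to be transferred.
    float cosReceiver = saturate(dot(receiverNormal,  dir));
    float cosEmitter  = saturate(dot(emitterNormal,  -dir));
    float formFactor  = (discArea * cosReceiver * cosEmitter) / (3.14159f * dist2);

    // Light already accumulated on the emitter pixel (direct + emissive) is
    // reflected onto the receiver, weighted by the form factor.
    return emitterRadiance * formFactor;
}
```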

The part of this that we are most happy about isn't actually the bounced red light from the draperies, even though it breaks up that large shadow nicely; we have other systems in PT to handle that kind of bouncing. It is the small, subtle details: how bounced light makes the bottom of the pillars brighter than the top, and how the wall above the arches is brighter because light has bounced onto it. You can also see the effect of our large-scale SSAO on the pillars on the right side. This image is actually from quite early code, but because all the different systems in PT work together, we thought using it demonstrated the effect better.

The final part of what gives PT its look is our global bounced light solution, including localized and blurry reflections. Our base engine is a fully deferred renderer capable of handling hundreds of shadow-casting lights in a level (though we learned that, artistically, a lower number works better), projected textures and so on, all feeding into a physically based shading model (which will be the topic of a separate post). But direct lights aren't our primary source of illumination: most of what you see is bounced light, either bounced specular or bounced diffuse.

We do this using a box-projected environment mapping system, where the cube map is projected onto a box shape representing the environment. For each pixel we calculate a reflection vector from the pixel's position and the eye vector. But instead of using that vector directly for the cube-map lookup, we find the point where the reflection ray intersects the box and use the direction towards that point on the box as our lookup vector. This gives us accurate, natural-looking reflections, like in the picture below.
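Box-projected cube mapping is a fairly standard technique; a minimal HLSL sketch could look like this, with the box extents, cube map position and sampler name as placeholders:

```hlsl
// Box-projected (parallax-corrected) cube map lookup sketch (HLSL).
// BoxMin/BoxMax/CubeMapPos and the sampler name are illustrative assumptions.

float3 BoxMin;        // world-space box representing the room
float3 BoxMax;
float3 CubeMapPos;    // position the cube map was rendered from
samplerCUBE EnvironmentSampler;

float3 BoxProjectedReflection(float3 worldPos, float3 worldNormal, float3 eyeVector)
{
    // eyeVector: normalized direction from the camera to the pixel.
    float3 r = reflect(eyeVector, worldNormal);

    // Intersect the reflection ray with the box (slab method; assumes worldPos
    // is inside the box, so the nearest positive hit is the one we want).
    float3 firstPlane  = (BoxMax - worldPos) / r;
    float3 secondPlane = (BoxMin - worldPos) / r;
    float3 furthest    = max(firstPlane, secondPlane);
    float  dist        = min(min(furthest.x, furthest.y), furthest.z);

    // Look up the cube map in the direction from its capture point to the hit point.
    float3 hitPos = worldPos + r * dist;
    return texCUBE(EnvironmentSampler, hitPos - CubeMapPos).rgb;
}
```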

For blurry reflections we use a form of mip-map convolution. We use a method similar to ATI's CubeMapGen with angular extents, but with a Phong lobe to weight the samples when we down-sample, so that each mip level matches a specular highlight size in the game. Then, when doing the texture lookup in the shader, we use the micro-facet distribution of the pixel (the roughness of the surface, similar to the old gloss term in non physically based shading models) to select which mip level to sample. This is imperfect, because our lookup is isotropic while the true reflection footprint is not, so we do some fudging and interpolation at the lower mip levels to hide the error.
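The lookup itself is simple; a sketch of the roughness-to-mip selection in HLSL might look like this, assuming a linear mapping from roughness to mip level (the real mapping has to match however the mip chain was convolved):

```hlsl
// Roughness-based mip selection for a pre-convolved cube map (HLSL).
// NumMips and the linear roughness-to-mip mapping are illustrative assumptions.

samplerCUBE EnvironmentSampler;
float NumMips; // number of mip levels in the pre-filtered cube map

float3 GlossyReflection(float3 lookupDir, float roughness)
{
    // Rougher surfaces sample blurrier (higher) mips of the pre-convolved chain.
    float mip = roughness * (NumMips - 1.0f);
    return texCUBElod(EnvironmentSampler, float4(lookupDir, mip)).rgb;
}
```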


If you look at the image above you should be able to see a radical difference in the clarity of the reflections; this is due to different glossiness values on the surfaces.

Beyond this, we also use a similar method for bouncing diffuse light in the scene. It is a limited method, because occlusion can only happen at the box boundaries, but it still worked out well for us. We use the position and the normal of the object to find where it intersects the box, then weight that against a vector going from the center of the box to the object, to get a final lookup position, and we use that value as the local ambient value for the pixel. To get a nice, smooth light distribution we perform five light bounces, all in linear HDR space: we basically rebuild the cube maps, using the previous bounce's cube maps as input to the world. We could do more bounces, but after five it is no longer possible to see a difference. If there is interest in this we could probably write an article about it too, but it is a bit more complex than the shadow mapping.
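A hedged HLSL sketch of that lookup, reusing the box intersection from before, could look like the following; the equal weighting between the two directions is a placeholder for whatever weighting actually fits the scene:

```hlsl
// Bounced diffuse lookup sketch using the same box projection (HLSL).
// Names and the 0.5 blend weight are assumptions, not PT's actual values.

float3 BoxMin;
float3 BoxMax;
float3 BoxCenter;
samplerCUBE IrradianceSampler; // cube map convolved for diffuse lighting

float3 BouncedDiffuse(float3 worldPos, float3 worldNormal)
{
    // Intersect the normal direction with the box, as for reflections.
    float3 firstPlane  = (BoxMax - worldPos) / worldNormal;
    float3 secondPlane = (BoxMin - worldPos) / worldNormal;
    float3 furthest    = max(firstPlane, secondPlane);
    float  dist        = min(min(furthest.x, furthest.y), furthest.z);
    float3 hitDir      = normalize(worldPos + worldNormal * dist - BoxCenter);

    // Weight against the center-to-object direction to soften the projection.
    float3 centerDir = normalize(worldPos - BoxCenter);
    float3 lookupDir = normalize(lerp(hitDir, centerDir, 0.5f));

    return texCUBE(IrradianceSampler, lookupDir).rgb;
}
```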

In the images above the left image shows the result of direct lighting only, and the rest of the light comes from our bounced light representation. We are quite happy with the results, which work well even in large rooms.

We hope you have enjoyed this look at what makes Project Temporality tick visually. There is of course a ton of other stuff, but these are the most important systems, and also the ones we think the fewest people are using at the moment. We wish we could have gone into more detail on some parts, but then this entry would never have been finished at all.
