December 8, 2012

Sparta 3D : Physically Based Shading

by Niklas Hansson

For this entry in our This is Sparta series we are going to talk about our shading system. It's going to be quite in-depth, so be prepared; after this we will go back to talking a bit more about the softer parts of the game. As mentioned in the last post, we are careful to do our calculations in a physically correct color space. But when it comes to our shading equations we take it one step further by using a physically based BRDF (basically a shading equation) instead of one based on observation (like the Phong lighting equation). For people interested in a deeper understanding than we can cover here, you can either look at my Nordic Game Conference talk on the subject or read Naty Hoffman's excellent papers from SIGGRAPH 2010, here and here.

So what do we mean by physically based? We mean that we respect the basic properties of light. For example, when light hits a surface, part of it is reflected; this is what we call specular light, light that has never entered the material and therefore isn't colored by it. Another part of the light enters the surface, where some of it is absorbed but some re-exits; this is the part we call diffuse light, the part that takes on the color of the object, so it is the light we mostly see. One thing a physically based lighting model (PBL from now on) should ensure is that the amount of light that bounces off an object can't be greater than the amount of light hitting it. Pretty obvious, but not a given in computer graphics; making sure this holds is called energy conservation and is an important part of being physically based. Neither Phong nor Blinn-Phong is energy conserving; in fact both of them reflect more light than hits them under a lot of material presets.

For Phong or Blinn-Phong this can be fixed by adding a normalization factor that guarantees the amount of light reflected cannot exceed the incoming light. The normalization factor does this by lowering the strength of the specular light when the area reflecting light is larger, and raising it when the area is smaller, so that the same total amount of light is reflected no matter the size of the highlight, assuming the surface's substance is the same.
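
As a hedged illustration (a minimal sketch in Python, not our actual shader code), here is normalized Blinn-Phong using the common (n + 8) / (8π) normalization factor for the specular lobe:

```python
import math

def normalized_blinn_phong_specular(n_dot_h, spec_power, spec_color):
    """Blinn-Phong specular term with the common (n + 8) / (8*pi)
    normalization factor: sharper (higher-power) lobes get brighter
    peaks and broader lobes dimmer ones, conserving reflected energy."""
    norm = (spec_power + 8.0) / (8.0 * math.pi)
    return [c * norm * (max(n_dot_h, 0.0) ** spec_power) for c in spec_color]

# The same substance at two roughness levels: a wide, weak highlight
# versus a narrow, strong one, reflecting the same total energy.
print(normalized_blinn_phong_specular(0.98, 16.0, [0.04, 0.04, 0.04]))
print(normalized_blinn_phong_specular(0.98, 256.0, [0.04, 0.04, 0.04]))
```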

This all works using something called a micro-facet BRDF (Bidirectional Reflectance Distribution Function). While this might sound advanced, it basically models the surface as a multitude of facets, each a perfectly reflecting mirror. The rougher the surface of the material, the more varied the directions the mirrors point in, and the BRDF calculates what percentage of the mirrors are pointing directly at your eye. This is what is referred to as the roughness of the material, and it is what decides whether we get a large, weak specular highlight or a small, strong one.

The other value necessary for a physically based solution is a substance value: the reflective property of the surface, meaning how much light it reflects, which is based on its real-world material. For example, all steel has the same substance because it's steel, but some steel has a mirror-like reflection and some does not, and that difference is due to roughness. Brushed steel has a very high roughness, for example.

Of course, energy conservation inside the specular term alone isn't enough. Say we have a nice shiny white material: it would reflect a lot of light, but a lot of light would also enter and re-exit the surface, meaning it still emits more light than hits it. In this case we need energy conservation between the specular and the diffuse parts, because light that has already bounced off the surface can't also enter it. This sounds obvious, but a lot of game engines don't handle it properly.
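
A minimal sketch of one common way to do this (not necessarily our exact formulation): scale the diffuse term by whatever energy the specular term did not reflect.

```python
def conserve_energy(diffuse_albedo, specular_reflectance):
    """Scale diffuse by the energy the specular lobe did not take,
    per channel, so diffuse + specular can never exceed 1."""
    return [d * (1.0 - s) for d, s in zip(diffuse_albedo, specular_reflectance)]

# A shiny white material: high specular leaves little energy for diffuse.
print(conserve_energy([1.0, 1.0, 1.0], [0.9, 0.9, 0.9]))  # -> [0.1, 0.1, 0.1]
```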

However, the big advantage for me of going physically based (besides the fact that the artists save a ton of time thanks to much easier maps to work with) is that because your values actually mean something, you can use them to handle ambient light and reflections with proper blurriness too, so that reflections and specular work together. This requires that you blur your cubemaps correctly, but it works like a charm and is used in engines like CryEngine 3.

For our cubemaps, however, we went one step further. Normal cubemaps have the problem that they don't match up with the objects reflecting in them; they are just an image being reflected. We instead use a technique called box projected cubemaps, where we project the cube images onto a mesh representing the room, correctly find the reflection's intersection with that mesh, and from it the proper pixel. This allows us to have working reflections, both glossy and non-glossy, essentially for free. The same technique is also used to reproject bounced light inside the room onto objects moving through it.
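
A hedged sketch of the core idea, assuming the room is approximated by an axis-aligned box (a simplification of the projection mesh described above): intersect the reflection ray with the box and use the vector from the cubemap's capture point to that intersection as the lookup direction.

```python
def box_projected_lookup(pos, refl_dir, box_min, box_max, cube_center):
    """Parallax-correct a reflection vector against an axis-aligned box.
    pos: shaded point (assumed inside the box), refl_dir: normalized
    reflection direction, cube_center: where the cubemap was captured.
    All are 3-element lists. Returns the corrected lookup direction."""
    t_exit = float("inf")
    for i in range(3):
        if refl_dir[i] > 1e-6:
            t_exit = min(t_exit, (box_max[i] - pos[i]) / refl_dir[i])
        elif refl_dir[i] < -1e-6:
            t_exit = min(t_exit, (box_min[i] - pos[i]) / refl_dir[i])
    hit = [pos[i] + t_exit * refl_dir[i] for i in range(3)]
    return [hit[i] - cube_center[i] for i in range(3)]
```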

In these examples you can see the effect of the box projection. In the first one I have modified the wall segments closer to the camera to have a blurry reflection, so you can see the button in the middle of the room reflected in them. You can also observe the visual difference compared to the last one, which doesn't feel as metallic even where no reflection is visible; this is an effect of our energy conservation between specular and diffuse light.

This next image shows the effects of box projection using different roughness values on the floor pieces.

Backing all this is a fully deferred renderer, but thanks to our physically based shading model we can produce a wealth of different materials without letting our G-Buffer get out of control size-wise. We hope you have enjoyed this look into the more technical side of Project Temporality, and as always, don't hesitate to tell us what kind of information you are interested in hearing about.

December 5, 2012

This is Sparta

by Niklas Hansson

This is the first in what will be a series of articles about the tech we have built to create this game: Sparta 3D. We have gotten a couple of requests to delve a bit deeper into the technical side of what makes our engine tick and how it creates its special look, so here it is. Sparta 3D is built on a handful of techniques:

  • Properly gamma-corrected linear-space lighting
  • A physically based lighting model
  • Deferred rendering
  • True quadratic light falloff
  • Box projected cube maps
  • Temporal multi-resolution SSAO
  • A bounced light solution

All of this boils down to a couple of philosophies, the main ones being: "In computer games most lighting is hard and harsh, but in real life most of it is soft"; "While it's easy to make a movie look good with yellow and teal, a fuller color spectrum is more interesting"; and "The basis of a good-looking image is good lighting". Everything we have done with the engine is focused on these things, to allow a soft, nice environment with lush colors and high-quality lighting that doesn't require too much artist work to look good. This does not necessarily make it a better or worse engine; it makes it a better engine for our specific game. This is a bit of a test run to see how much technical depth we should go into here, so we'll gauge the interest, but let's get started on the specifics.

One of the most important things when doing 3D rendering is to get your lighting calculations right. If the lighting is off, great artists can work hard to salvage something, but it's always going to be an uphill battle. Since we want our artists to spend their time creating the highest quality possible, we don't want to force them to fight the tech; we want the tech to help them fight their battles.

One of the issues when doing lighting is gamma. This actually comes from how the monitor displays your image rather than from your code, but you still have to compensate for it. Due to how old CRT monitors worked, a linear change in the RGB values resulted in a non-linear change in brightness on the screen. This might be a bit confusing, so I'm going to explain it using some images created by John Hable.

The blue line in the middle is a linear line straight upwards, which would mean that brightness follows the RGB values. However, the red line is what actually happens when you output an image through a computer monitor, due to the old CRT behavior (which all modern LCD, TFT, LED, etc. displays mimic). So to keep the data looking correct, we need to first apply the yellow curve before we send it to the monitor; the monitor then applies the red curve, the two cancel each other out, and everything works out. As long as you don't make any changes to the data, like blending textures or performing lighting, you don't have to think about this: because the artist created the texture on a screen, it is already over-brightened along the yellow curve to look correct. The same happens with a digital photo; the camera automatically applies the yellow curve to make it look correct.

As you can see in this image from John Hable, some transformations are performed. The mathematical transformation for most screens can be approximated as gamma 2.2: the monitor takes your input value between 0 and 1 and raises it to the power of 2.2 (the red curve), while raising it to the power of 1/2.2 ≈ 0.45 gives the yellow curve. So, as you can clearly see, the camera over-brightens the image before saving it to disk to compensate for the screen's darkening.

So why does this matter to us? Well, what happens if you would like to add together two images that each have 50% brightness? 50% brightness corresponds to an RGB value of about 187 in monitor space, so adding them gives about 374, way above the 255 of pure white we expected. However, if we first apply the red curve to bring them back from their over-brightened state to the blue line, we would be adding 128 to 128, creating the expected 256 (capped at 255, the maximum value). Afterwards we apply the yellow curve again to get back to what the monitor expects, but since 255 maps to 1, and 1 raised to any power is still 1, everything works out. For more information look at GPU Gems or John Hable's webpage, or, for more depth, the Uncharted 2 lighting presentation.
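
A minimal sketch of this round trip in Python, assuming a pure gamma 2.2 transfer curve (real sRGB has a small linear toe that we ignore here):

```python
def to_linear(v):
    """Undo the monitor's encoding (yellow) curve: 0-255 -> linear 0-1."""
    return (v / 255.0) ** 2.2

def to_gamma(v):
    """Re-apply the encoding curve for display: linear 0-1 -> 0-255."""
    return round(min(v, 1.0) ** (1.0 / 2.2) * 255.0)

half = to_gamma(0.5)        # 50% brightness stores as ~186-187 in monitor space
wrong = half + half         # naive add in gamma space blows far past white
right = to_gamma(to_linear(half) + to_linear(half))  # add in linear: 255, pure white
print(half, wrong, right)
```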

Working in XNA, sadly, Microsoft removed access to the hardware's free gamma-to-linear and linear-to-gamma converters in going from XNA 3.1 to 4.0, so we actually have to do all these conversions in shader code. This costs some performance, but not doing it was never an option visually. I'll link an example from the GPU Gems article here.

Both these images were lit with a white light source of the same strength; the left one uses linear-space lighting and the right one does not. Notice the burned-out areas in the right picture and also the yellowish tone (this is an artifact of doing lighting in gamma space: the specular burns out toward the surrounding color instead of keeping the color of the light). The left looks a lot more natural and error-free, and this technique allows us to experiment with a lot more, and stronger, light sources while still maintaining good balance in the image.

We'll be back soon with a follow-up article about our physically based shading solution, but this article already got longer than planned, so we have to cut it off for now. On the same subject, we are currently hard at work putting the finishing touches on, and lighting, two entirely new environments that will be quite different from the ones we have shown in trailers and screenshots so far; we just want to polish them a bit more before we start showing them off.

July 3, 2012

Dream Build Play Tech Postmortem

by Niklas Hansson

At the end of a deadline for any project, a certain amount of tiredness always arises, but more than that, a feeling of disappointment: "Was that all we did?", "Couldn't we have done more?". The bigger the project, the more pronounced these feelings become.

With Project Temporality we want to prove that XNA can be used to create XBLA-style lush 3D environments. We hope that the steps we have taken along that path will inspire others and help show the potential of XNA and managed development: you really can do whatever you put your mind to. And we hope you agree that we have reached at least part of the way to our goal; we have a long way to go still, but now it seems possible.

So we want to talk a bit about the techniques we used in PT. To make our visuals distinct, and not just standard "sci-fi pretty", we had to make a lot of basic technical decisions to reach our goals. One goal of our rendering technology was to avoid as much as possible the hardness/harshness that much real-time graphics have in their lighting and edges. This was really important to us, as we wanted the game to have a vibrant but soft look.

One of our base techniques is exponential shadow mapping; that's the reason why our shadows look soft and smooth rather than really sharp or, worse, pixelated, as we are used to in most games. We will upload an example of the XNA shadow mapping sample ported to exponential shadow maps later on; we just have to write up some extra explanatory text. Exponential shadow mapping (ESM), just like variance shadow mapping, builds on the idea that if we blur the shadow map texture itself, instead of blurring in the per-pixel lookup as in percentage closer filtering (PCF), we can apply a much greater blur and thus create smoother transitions than we could otherwise afford. It also has a really nice property: as long as you don't update the shadow buffer, you don't need to redo the blurring, which makes the soft shadows almost free on non-updated shadow maps. The blurring can also be done as a separable blur filter, which means it's well adapted to the strengths of the Xbox 360 platform with its low ALU performance.
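
A minimal sketch of the ESM visibility test, assuming the shadow map stores a pre-blurred exp(c * occluder_depth) and that c is a tunable sharpness constant (the storage scheme and the value c=80 here are illustrative, not our shipping code):

```python
import math

def esm_visibility(blurred_exp_occluder, receiver_depth, c=80.0):
    """ESM shadow test: exp(c*z_occluder) is blurred once in the shadow
    map, then combined per pixel with exp(-c*z_receiver). Values near 1
    mean lit; values falling toward 0 mean shadowed, with soft falloff."""
    return min(blurred_exp_occluder * math.exp(-c * receiver_depth), 1.0)

# A receiver right at the stored occluder depth is fully lit...
print(esm_visibility(math.exp(80.0 * 0.5), 0.5))   # -> 1.0
# ...while one slightly behind it falls off smoothly instead of stepping.
print(esm_visibility(math.exp(80.0 * 0.5), 0.55))  # -> ~0.018
```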

These images are both from the XNA shadow mapping sample using a 2048×2048 texture. The sample itself could benefit from numerous improvements, but it still works for this comparison, as both versions suffer from the same problems. The one difference I have added is that I allow the ESM shadow to go to black instead of just multiplying by 0.5. With the old solution the shadow looks low-res and jagged, but with ESM it looks smooth; it can be made to look even better with some tweaking, but this was enough for a sample. For our goal of a soft, smooth world without jaggies, using ESM was a no-brainer, especially since it doesn't double shadow texture usage like variance shadow mapping does.

Another technique that is a big part of our look is our temporally reprojected multi-resolution screen-space ambient occlusion (SSAO) and bounced light solution. SSAO is really cool and helps a lot with visuals, but it is not without its share of problems, one being that it only picks out crevices and small-scale occlusion. If there is occlusion on a larger scale, the algorithm has trouble handling it, because you need a huge number of samples to get good coverage.

We solved this by performing SSAO at three different resolutions, down-sampling parts of our G-Buffer to each of them. This allows us to use much farther-reaching SSAO without running into a sample-count limitation, because we get much better cache coherency on the GPU, with the samples still close to each other on the texture map. We also get to scale away small ridges and such that don't matter to large-scale occlusion. We then combine the data from the three resolutions to create our final ambient occlusion term, as in the sketch below.
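
One simple way to combine the three resolutions (a hedged sketch; the exact combination function is a matter of tuning, and the multiplicative blend here is just one common choice):

```python
def combine_multires_ao(ao_full, ao_half, ao_quarter):
    """Combine per-pixel AO terms computed at three resolutions
    (each in [0,1], 1 = fully open). The lower-resolution terms,
    upsampled to full size, contribute the large-scale occlusion;
    multiplying lets occlusion at any scale darken the result."""
    return ao_full * ao_half * ao_quarter

print(combine_multires_ao(0.9, 0.8, 0.95))  # -> 0.684
```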

However, even with this technique it would cost too much to run on the Xbox 360 with good sample quality, so we also use a temporal reprojection technique that tries to match a pixel on the screen with a pixel from the previous frame and reuse that frame's SSAO data if possible. This lets us use a huge number of samples, because we can amortize the cost over multiple frames. The way this works is that we save data from the last frame; then, for each pixel on the screen, we calculate a world-space position and reproject it using last frame's view and projection matrices to find a pixel on last frame's screen. We verify that it's the same pixel by comparing their positions (done using depth, for cost reasons) and then decide whether to use the old value. In practice we blend the old value with a new one and rotate the sample pattern every frame; this means that even with only 4 samples per frame we can reach 32- or even 64-sample quality, and if we raise our per-frame sample count we can increase the quality even more.
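
A minimal sketch of the reprojection test, under simplifying assumptions (column-vector matrices, and a depth threshold and history blend factor chosen arbitrarily here for illustration):

```python
import numpy as np

def reproject_ao(world_pos, new_ao, prev_view_proj, prev_ao, prev_depth,
                 width, height, depth_eps=0.01, history_weight=0.9):
    """Reproject a pixel's world position into last frame, validate it
    by depth, and blend old and new AO to amortize samples over frames."""
    clip = prev_view_proj @ np.append(world_pos, 1.0)
    ndc = clip[:3] / clip[3]
    x = int((ndc[0] * 0.5 + 0.5) * width)
    y = int((1.0 - (ndc[1] * 0.5 + 0.5)) * height)
    if 0 <= x < width and 0 <= y < height:
        if abs(prev_depth[y, x] - ndc[2]) < depth_eps:  # same surface?
            return history_weight * prev_ao[y, x] + (1 - history_weight) * new_ao
    return new_ao  # disoccluded or off-screen: fall back to this frame only
```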

We hope you agree that the effect of the technique is very visible in the right part, as the left one looks very flat, and we also believe you will find its softness and range quite different from normal SSAO techniques. We definitely feel this was worth the time we spent tweaking it and working hard to get it running on a 360. We should mention that our technique uses both normals and depth for every pixel and can be seen as a simplified model of the Bunnell disc algorithm; it is based upon Practical SSAO but with a lot of tweaks and fixes. And of course the multi-resolution approach makes a huge difference.

Another important part of the technique is our per-pixel re-lighting scheme. To show this one off I have opted to use the classic Crytek Sponza. As this is an extra per-pixel technique, the results obviously won't be as impressive as, for example, Crytek's LPV technique, but this is just one part of what gives PT its special look and feel. It could be done with just a small change to our SSAO code, but actually making it fit into the pipeline and work properly, so it didn't just make corners brighter, was a lot of work. A naive implementation of this is actually worse than no implementation. After we have applied direct light and emissive light, we do our SSAO pass, and while we calculate SSAO for each pixel we also calculate how much of the other pixels' light would be reflected onto it (under the assumption that the pixels are fixed-area discs in 3D space).
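
As a hedged sketch of the disc-to-disc transfer behind this idea (a common approximation in disc-based GI; the exact weighting in our pipeline differs): treat each source pixel as a small emitting disc and accumulate its contribution onto the receiver.

```python
import math

def disc_bounce(emitter_radiance, emitter_area, emitter_normal,
                receiver_normal, offset):
    """Approximate light bounced from one pixel-disc to another.
    offset = receiver_pos - emitter_pos. The transfer falls off with
    squared distance and the cosines at both discs; emitter_area in the
    denominator keeps it bounded as the discs get very close."""
    d2 = sum(c * c for c in offset)
    if d2 == 0.0:
        return [0.0, 0.0, 0.0]
    dist = math.sqrt(d2)
    direction = [c / dist for c in offset]
    cos_e = max(sum(n * d for n, d in zip(emitter_normal, direction)), 0.0)
    cos_r = max(-sum(n * d for n, d in zip(receiver_normal, direction)), 0.0)
    ff = (emitter_area * cos_e * cos_r) / (math.pi * d2 + emitter_area)
    return [r * ff for r in emitter_radiance]
```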

The part of this that we are most happy about isn't actually the bounced red light from the draperies, even though it breaks up that large shadow nicely (we have other systems to handle that kind of bouncing in PT). It's the small, subtle details: how bounced light makes the bottoms of the pillars brighter than the tops, or how the wall above the arches is brighter because light has bounced onto it. You can also see the effect of our large-scale SSAO on the pillars to the right. This image is actually from quite early code, but because of all the different systems working together in PT, we thought it demonstrated the effect better.

The final part of what gives PT its look is our global bounced light solution, including localized and blurry reflections. Our basic engine is a fully deferred renderer capable of handling hundreds of shadow-casting lights in a level (though we learned that, artistically, a lower number is better), projected textures, and so on, all feeding into a physically based shading model (which will be the topic of a separate post). But we aren't using direct lights as our primary source of light; most of the light you see is bounced, either bounced specular light or bounced diffuse light.

We do this using a box projected system of environment mapping, where the map is projected onto a box shape representing the environment. For each pixel we calculate a reflection vector using that position and the eye vector. But instead of using this directly for our cube-map lookup, we find the intersection point of that reflection ray with the box and use the direction to that point on the box as our reflection vector. This allows us to get accurate reflections of objects that look natural, like the picture below.

For blurry reflections we use a form of mip-map convolution: we basically use a method similar to ATI CubeMapGen with angular extents, but we use a Phong lobe for weighting the samples when we down-sample, to get a reflection that matches the specular size in the game. Then, when doing the texture lookup in the code, we use the micro-facet distribution (the roughness of the surface, similar to the old gloss term in non-physically-based shading models) of the pixel to select which mip-map to use. This is imperfect, because our lookup is isotropic, so we do some fudging and interpolation at the lower mip-maps to hide the error.
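
A minimal sketch of the lookup side, assuming a cubemap whose mips were pre-convolved with progressively wider Phong lobes (the linear mapping from roughness to mip level here is hypothetical; in practice the curve usually gets tweaked):

```python
def roughness_to_mip(roughness, num_mips):
    """Pick a mip level for a pre-convolved cubemap: roughness 0 reads
    the sharp base level, roughness 1 the blurriest mip. A fractional
    result lets the hardware trilinearly blend between two mips."""
    return max(0.0, min(roughness, 1.0)) * (num_mips - 1)

print(roughness_to_mip(0.25, 9))  # -> 2.0 on a 9-mip (256x256 base) cubemap
```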


If you look at the image above you should be able to see a radical difference in the clarity of the reflections; this is due to different glossiness values on the surfaces.

Beyond this, we also use a similar method for bouncing light in the scene. It is a limited method, because occlusion can only happen at box boundaries, but it still worked out well for us. We use the position and the normal of the object to find where the normal intersects the box, then weight that against a vector that goes from the center of the box to the object, to get a final lookup position, and use that value as the local ambient value for that pixel. To get a nice smooth light distribution we perform five light bounces, all in linear HDR space: we basically rebuild the cube-maps using the previous cube-maps as input to the world. We could do more bounces, but after five it's not possible to see a difference anymore. If there is interest we could probably do an article on this too, but it's a bit more complex than the shadow mapping.
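
A hedged sketch of that lookup, assuming an axis-aligned box and an illustrative blend weight (how the two vectors are weighted against each other is our own tuning, so the 0.5 here is purely a placeholder):

```python
def intersect_box(pos, direction, box_min, box_max):
    """Distance along 'direction' (normalized) from 'pos' (assumed
    inside the box) to the axis-aligned box boundary."""
    t = float("inf")
    for i in range(3):
        if direction[i] > 1e-6:
            t = min(t, (box_max[i] - pos[i]) / direction[i])
        elif direction[i] < -1e-6:
            t = min(t, (box_min[i] - pos[i]) / direction[i])
    return t

def bounced_ambient_lookup(pos, normal, box_min, box_max, box_center,
                           weight=0.5):
    """Follow the surface normal to the box wall, then blend that hit
    point against the center-to-object direction to get the final
    bounced-light cubemap lookup, as described above."""
    t = intersect_box(pos, normal, box_min, box_max)
    hit = [pos[i] + t * normal[i] for i in range(3)]
    center_dir = [pos[i] - box_center[i] for i in range(3)]
    return [weight * (hit[i] - box_center[i]) + (1 - weight) * center_dir[i]
            for i in range(3)]
```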

In the images above, the left image shows the result of direct lighting only; the rest of the light comes from our bounced light representation. We are quite happy with the results, which work well even in large rooms.

We hope you have enjoyed this view of what makes Project Temporality tick visually. There is of course a ton more stuff, but these are the most important systems, and also the ones we think the fewest people are using at the moment. We wish we could have gone into more detail on some parts, but then this entry would never have been finished at all.
