December 8, 2012

Sparta 3D: Physically Based Shading

by Niklas Hansson

For this entry in our This is Sparta series we are going to talk about our shading system. It’s going to be quite in-depth, so be prepared; after this we will go back to talking a bit more about the softer parts of the game. As mentioned in the last post, we are careful to do our calculations in a physically correct color space. When it comes to our shading equations we take it one step further by using a physically based BRDF (basically a shading equation) instead of one based on observation (like the Phong lighting equation). For people interested in a deeper understanding than we can cover here, you can either look at my Nordic Game Conference talk on the subject or read Naty Hoffman’s excellent papers from SIGGRAPH 2010, Here and Here.

So what do we mean by physically based? What we mean is that we respect the basic properties of light. For example, when light hits a surface, part of it is reflected; this is what we call specular light, light that has never entered the material and therefore isn’t colored by it. Another part of the light enters the surface, where some of it is absorbed and some re-exits. This re-exiting part is what we call diffuse light; it takes on the color of the object, so it is the light we mostly see. One thing that physically based lighting (PBL from now on) should ensure is that the amount of light bouncing off an object can’t be greater than the amount of light hitting it. Pretty obvious, but not a rule in computer graphics; in fact, making sure this holds is called energy conservation and is an important part of being physically based. Neither Phong nor Blinn-Phong is energy conserving; in fact, both of them reflect more light than hits them under a lot of material presets.

For Blinn-Phong or Phong this can be fixed by adding a normalization factor that guarantees that the amount of light reflected cannot be greater than the incoming light. The normalization factor does this by lowering the strength of the specular light if the area reflecting light is larger, and making it stronger if the area is smaller, so that the same amount of light is reflected no matter the size of the highlight, assuming the surface’s substance is the same.
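To make this concrete, here is a minimal C# sketch of a normalized Blinn-Phong specular term, using the commonly cited (n + 8) / 8π approximation for the normalization factor. The vector dot products and the specular power are placeholder parameters for illustration, not our actual shader inputs.

    using System;

    static class NormalizedBlinnPhong
    {
        // Approximate energy-conserving normalization: (n + 8) / (8 * pi).
        // A larger specular power n gives a tighter but brighter highlight,
        // so the total reflected energy stays roughly constant.
        public static double Specular(double nDotH, double nDotL, double specularPower)
        {
            double normalization = (specularPower + 8.0) / (8.0 * Math.PI);
            return normalization * Math.Pow(Math.Max(nDotH, 0.0), specularPower)
                                 * Math.Max(nDotL, 0.0);
        }
    }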

This all works using something called a micro-facet BRDF (Bidirectional Reflectance Distribution Function). While this might sound advanced, it basically models the surface as a multitude of tiny, perfectly reflecting mirror facets. The rougher the surface of the material, the more the directions of these mirrors vary, and the BRDF calculates what fraction of the mirrors is pointing directly at your eye. This is what is referred to as the roughness of the material, and it decides whether we get a large, weak specular highlight or a small, strong one.

The other value that is necessary when doing a physically based solution is a substance value. This is the reflective property of the surface, meaning how much light it reflects, and it is based on the real-world material. For example, all steel has the same substance because it’s steel, but some steel has a mirror-like reflection and some does not, and that is due to the roughness. Brushed steel, for example, has a very high roughness.

Of course, just having energy conservation inside the specular isn’t enough. What if we have a nice shiny white material? It would reflect a lot of light, but a lot of light would also enter and re-exit the surface, meaning it would still emit more light than hit it. In this case we need energy conservation between the specular and the diffuse part. This is due to the fact that light that has already bounced off the surface can’t also enter it; this sounds obvious, but a lot of game engines don’t handle it properly.
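A minimal sketch of what that trade-off can look like, assuming the simple approach of scaling the diffuse term by whatever energy the specular did not take. The single scalar "substance" value and the function names are illustrative, not our exact material model.

    using System;

    static class EnergyConservingShading
    {
        // Light that is reflected specularly never enters the surface, so the
        // diffuse term only gets whatever energy the specular did not take.
        public static double Diffuse(double albedo, double substance, double nDotL)
        {
            return albedo * (1.0 - substance) * Math.Max(nDotL, 0.0);
        }

        public static double Specular(double substance, double normalizedSpecularTerm)
        {
            return substance * normalizedSpecularTerm;
        }
    }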

However, the big advantage for me with going physically based (besides the fact that the artists save a ton of time thanks to much easier maps to work with) is that because your values actually mean something, you can use them to handle ambient light and reflections with the proper blurriness too, so that reflections and specular work together. This requires that you blur your cubemaps correctly, but it works like a charm and is used in engines like CryEngine 3.

For our cubemaps, however, we went one step further. Normal cubemaps have the problem that they don’t match up with the objects reflecting in them; they are just an image being reflected. We instead use a technique called box-projected cubemaps, where we project the cube images onto a mesh representing the room, correctly find the reflection’s intersection with that mesh, and look up the proper pixel. This allows us to have working reflections, both glossy and non-glossy, essentially for free. The same technique is also used to re-project bounced light inside the room onto objects moving through it.
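For the common case where the room proxy is an axis-aligned box, the parallax correction boils down to intersecting the reflection ray with that box and using the hit point, relative to the cubemap’s capture position, as the lookup direction. Here is a minimal sketch of that math using XNA-style Vector3; the box extents and capture position are placeholder parameters, and this is the textbook form of the technique rather than our exact shader code.

    using Microsoft.Xna.Framework;

    static class BoxProjection
    {
        // Returns the direction to use for the cubemap lookup, given a world-space
        // position, a reflection direction, the AABB of the room proxy and the
        // position the cubemap was captured from.
        public static Vector3 Correct(Vector3 worldPos, Vector3 reflectDir,
                                      Vector3 boxMin, Vector3 boxMax,
                                      Vector3 cubemapPos)
        {
            // Intersect the ray worldPos + t * reflectDir with the box (slab test).
            Vector3 tMax = new Vector3(
                IntersectSlab(worldPos.X, reflectDir.X, boxMin.X, boxMax.X),
                IntersectSlab(worldPos.Y, reflectDir.Y, boxMin.Y, boxMax.Y),
                IntersectSlab(worldPos.Z, reflectDir.Z, boxMin.Z, boxMax.Z));
            float t = MathHelper.Min(MathHelper.Min(tMax.X, tMax.Y), tMax.Z);

            // The intersection point, seen from the capture position, is the
            // direction that actually matches the room geometry.
            Vector3 hit = worldPos + reflectDir * t;
            return hit - cubemapPos;
        }

        static float IntersectSlab(float origin, float dir, float min, float max)
        {
            // Distance along the ray to the far side of this axis' slab.
            float plane = dir >= 0.0f ? max : min;
            return dir != 0.0f ? (plane - origin) / dir : float.MaxValue;
        }
    }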

In these examples you can see the effect of the box projection. In the first one I have modified the wall segments closest to the camera to have a blurry reflection, so you can see the button in the middle of the room reflected in them. You can also observe the visual difference compared to the last one, which doesn’t feel as metallic even where we can see no reflection; this is an effect of our energy conservation between specular and diffuse light.

This next image shows the effects of box projection using different roughness values on the floor pieces.

Backing all this is a fully deferred renderer, but thanks to our use of a physically based shading model we can produce a wealth of different materials without letting our G-buffer get out of control size-wise. We hope you have enjoyed this look into the more technical side of Project Temporality, and as always, don’t hesitate to tell us what kind of information you are interested in hearing about.

December 5, 2012

This is Sparta

by Niklas Hansson

This is the first in what will be a series of articles talking about the tech we have built to create this game, Sparta 3D. We have gotten a couple of requests to delve a bit deeper into the technical side of what makes our engine tick and how it creates this special look, so here it is. Sparta 3D is built on a handful of techniques:

  • Properly gamma-corrected linear-space lighting
  • A physically based lighting model
  • Deferred rendering
  • True quadratic light fall-off
  • Box-projected cube maps
  • Temporal multi-resolution SSAO
  • A bounced-light solution

All of this also boils down to a couple of philosophies, the main ones being “In computer games most lighting is hard and harsh, but in real life most of it is soft”, “While it’s easy to make a movie look good with yellow and teal, a fuller color spectrum is more interesting” and “The basis of a good-looking image is good lighting”. So everything we have done with the engine is focused on these things, to allow a soft, nice environment with lush colors and high-quality lighting that doesn’t require too much artist work to look good. This does not necessarily make it a better or a worse engine; it makes it a better engine for our specific game. This is a bit of a test run to see how much technical depth we should go into here, so we’ll gauge the interest, but let’s get started on the specifics.

One of the most important things when doing 3D rendering is to get your lighting calculations to work properly. If the lighting is off, great artists can work hard to salvage something, but it’s always going to be an uphill battle. Since we want our artists to spend their time creating as high a quality as possible, we don’t want to force them to fight the tech; we want the tech to help them fight their battles.

One of the issues when doing lighting is gamma. This actually comes from how the monitor displays your image rather than from your code, but you still have to compensate for it. Due to how old CRT monitors worked, a linear change in value for the RGB channels resulted in a non-linear change in brightness on the screen. This might be a bit confusing, so I’m going to explain it using some images created by John Hable.

The blue line in the middle is a straight linear line, which would mean that brightness follows the RGB values. The red line, however, is what actually happens when you output an image through a computer monitor, and this is due to the old CRT monitors (which all modern LCD, TFT, LED etc. displays mimic). So to keep data looking correct, we need to first apply the yellow curve before sending it to the monitor; the monitor will then apply the red curve, the two cancel each other out, and everything works out. As long as you don’t make any changes to the data, like blending textures or performing lighting, you don’t have to think about this. Because the artist created the texture on a screen, it is already over-brightened with the yellow curve to look correct; the same happens with a digital photo, where the camera automatically applies the yellow curve to make it look correct.

As you can see in this image from John Hable, some transformations are performed. The mathematical transformation for most screens can be approximated as gamma 2.2. This means that the monitor takes your input value between 0 and 1 and raises it to the power of 2.2; this is the red curve. Raising it to the power of 1/2.2 (about 0.45) gives the yellow curve. So, as you can clearly see here, the camera over-brightens the image before saving it to disk to compensate for the screen’s darkening.

So why does this matter to us? Well, what happens if you would like to add together two images that each have 50% brightness? 50% brightness corresponds to an RGB value of about 187 in monitor space. If we add them together we get an RGB value of 374, way above the 255 of pure white we expected. However, if we first apply the red curve to get them back from their over-brightened state to the blue line, we would be adding 128 to 128 and get the expected 256 (capped at 255 as a max value). Afterwards we apply the yellow curve again to get back to what the monitor expects, but since 255 maps to 1, and 1 raised to any power is still 1, everything works out. For extra information look at GPU Gems or John Hable’s webpage, or for more depth the Uncharted 2 lighting presentation.
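A minimal C# sketch of that round trip, assuming a plain gamma 2.2 approximation (the real conversions live in shader code, and sRGB is not exactly a pure power curve, so treat this as illustrative):

    using System;

    static class GammaDemo
    {
        static double GammaToLinear(double v) { return Math.Pow(v, 2.2); }     // red curve
        static double LinearToGamma(double v) { return Math.Pow(v, 1.0 / 2.2); } // yellow curve

        static void Main()
        {
            double stored = 187.0 / 255.0; // "50% brightness" as stored on disk

            double wrong = stored + stored;                                // gamma-space sum: ~1.47, blown out
            double right = GammaToLinear(stored) + GammaToLinear(stored);  // linear-space sum: ~1.0
            double display = LinearToGamma(Math.Min(right, 1.0));          // back to monitor space

            Console.WriteLine("gamma-space sum: {0:F2}, linear-space sum: {1:F2}, displayed: {2:F2}",
                              wrong, right, display);
        }
    }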

Working in XNA, Microsoft sadly removed access to the hardware’s free gamma-to-linear and linear-to-gamma converters when going from XNA 3.1 to 4.0, so we actually have to do all these conversions in shader code. That does cost some performance, but not doing it was never an alternative visually. I will link an example from the GPU Gems article here.

Both these images were lit with a white light source of the same strength; the left one uses linear-space lighting and the right one does not. Notice the burnt-out areas in the right picture, and also the yellowish tone (this is an artifact of doing lighting in gamma space: the specular burns out to a surrounding color instead of having the color of the light). The left looks a lot more natural and error-free, and this technique allows us to experiment with a lot more and stronger light sources while still maintaining good balance in the image.

We’ll be back soon with a follow-up article about our physically based shading solution, but this article already got longer than planned, so we have to cut it off for now. On the same subject, we are currently hard at work putting the finishing touches on, and lighting, two entirely new environments that will be quite different from the ones we have shown in trailers and screenshots so far; we just want to polish them a bit more before we start showing them off.

November 30, 2012

A look at our Cinematics Editor

by groogy

Well, if you want to make a living out of creating a game you have to sell it, make people look at it and get them interested enough to consider buying it. The sad truth about making a game is that you have to present it in a light that makes it look exceptional and pretty. At this first stage of attracting the customer’s eye, the gameplay doesn’t matter yet. In order to make people look at your game and remember it, you have to make a good, strong first impression. And you can only do that visually, with screenshots and trailers.

So for the last week I’ve been working on a cinematic editor for us to use, in order to let our artists create something really awesome to get people excited about the game. The previous trailers have all been done by recording while walking around in the game. Now the artists have the possibility to create a cut-scene, place keyframes for the camera and tweak post-processing effects. Of course, this cut-scene functionality will be used in-game as well.

The Cinematic Editor

The artists can manipulate the camera by simple number crunching in the panel to the right, but they can also enable free-flight mode and physically fly around to place their keyframes exactly where they want them. At first I used an N-order Bézier curve to make the flight smooth. Bézier curves have the property of giving really nice curves, but unfortunately they are not guaranteed to pass through any keyframes other than the first and last ones. You can work with that, having the points act as “forces” pulling on the curve instead of being physical positions, but it’s harder to work with. So we changed to an algorithm that is simpler to work with because it guarantees that the curve passes through every point: the Catmull-Rom spline.
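A minimal sketch of evaluating such a camera path with XNA’s built-in Vector3.CatmullRom, which interpolates between the two middle control points so the curve passes through every keyframe. The keyframe list and the simple 0-to-1 parameterisation here are illustrative, not the editor’s actual data model.

    using System;
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    static class CameraPath
    {
        // Evaluate the camera position at t in [0, 1] along a list of keyframes.
        // Assumes at least two keyframes. Vector3.CatmullRom interpolates between
        // p2 and p3, using p1 and p4 as tangent helpers.
        public static Vector3 Evaluate(IList<Vector3> keys, float t)
        {
            int segments = keys.Count - 1;
            float scaled = MathHelper.Clamp(t, 0f, 1f) * segments;
            int i = Math.Min((int)scaled, segments - 1); // current segment index
            float local = scaled - i;                    // position within that segment

            Vector3 p1 = keys[Math.Max(i - 1, 0)];
            Vector3 p2 = keys[i];
            Vector3 p3 = keys[i + 1];
            Vector3 p4 = keys[Math.Min(i + 2, keys.Count - 1)];

            return Vector3.CatmullRom(p1, p2, p3, p4, local);
        }
    }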

In the editor, in addition to camera keyframes, we also support interpolation between different fields of view, depth-of-field values, HDR exposures and color gradings. Interpolating the color grading is probably the effect that gives the most visual impact. Here is a small example of the system in action, all coder-made of course.

A lot of work was put into making this tool easy to use, like the fact that the artists can set everything up using a free-flight camera so they can see exactly how everything will look at the current frame. That is probably what got the most work in the entire editor. The actual implementation of the cinematics was straightforward: simple interpolation between the keyframes the artist provides with the tools already described.

Other than that there isn’t much special about it. You create a look-at matrix and shader variables based on the values given by the artists, and you provide the tools they need so it becomes easier for them to make awesome stuff. I can’t wait to see them start using my tool :)

Also if you like what you see please go to http://www.indiedb.com/games/project-temporality and vote for us as the Indie Game of the year for 2012.

November 16, 2012

God Rays

by Niklas Hansson

For this week we have something exciting to share with you. If you have looked at our windows before, they have been kind of boring: a yellowish light only, that you can’t see through. This was never intended to be a permanent solution but rather a “well, it’s better than it being black” solution, but it stuck around for surprisingly long. We recently started doing a bit of work where we cast rays from the camera through the corners of the window and use them to texture map a quad with the directions from the skybox. There were some issues with the normal lighting kicking in too, because it’s a deferred rendering engine, but they were solvable.

After this we had our artists go in and add some really nice volumetric lighting effects using additive polygonal planes. This started looking really nice, but to enhance the effect we decided to try creating an implementation of crepuscular rays, aka god rays, which we hoped would do two things. One is filling out the artist-made volumetric rays to create an even stronger feeling when you are looking into a window; the other is getting some actual coloring into the effect depending on the light outside.

The basic implementation was a simple matter; it’s basically a biased radial blur with some occlusion added in, very similar to the Kenny Mitchell article. However, we apply the area to blur quite differently: we use all our windows as sun colors, and for each window we even add in an approximation of how much sunlight would reach it even if the sun isn’t visible. We also had to do some careful clamping of values to create a consistent look.
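For reference, here is a minimal C# sketch of the core radial-blur step that screen-space god-ray techniques share: per pixel, you march toward the light’s screen position and accumulate occlusion samples with a decaying weight. The sample count, decay value and the sampleOcclusion delegate are illustrative placeholders; this is the textbook version, not our exact shader.

    using System;
    using Microsoft.Xna.Framework;

    static class GodRays
    {
        // One pixel of a screen-space radial blur toward the light.
        // 'sampleOcclusion' returns how much light survives at a given UV
        // (e.g. 1 inside a bright window, 0 where geometry blocks it).
        public static float Accumulate(Vector2 pixelUv, Vector2 lightUv,
                                       Func<Vector2, float> sampleOcclusion,
                                       int numSamples = 32, float decay = 0.95f)
        {
            Vector2 step = (lightUv - pixelUv) / numSamples; // march toward the light
            Vector2 uv = pixelUv;
            float weight = 1.0f;
            float sum = 0.0f;

            for (int i = 0; i < numSamples; i++)
            {
                uv += step;
                sum += sampleOcclusion(uv) * weight; // light leaking through along the ray
                weight *= decay;                     // samples nearer the light count less
            }
            return sum / numSamples;
        }
    }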

 

We are also working very hard on getting two of our new environments up and ready for use in real levels, and they are looking really promising, along with a lot of different gameplay tweaks and system changes. Things really are moving along well, and we hope to have even more exciting news to share with you shortly.

November 9, 2012

Convex Hull mesh generation

by groogy

I’ve been working for the last week on a convex hull mesh generation algorithm called quickhull. These hulls will mostly be used for the camera, so that it moves more smoothly along walls, since a lot of our walls in Project Temporality have bumps in several spots. Without them the camera would jump around a lot in order to not be placed “outside” the walls. The basic idea of generating a convex hull is pretty simple really, and once you get it right there is very little that can actually go wrong with it, so it’s a pretty rewarding and fast technique to implement.

Convex hull mesh generation
Green represents the original collision hull, blue the convex hull. The white lines are the face normals and the pink cross is the centroid of the convex hull.

As the name of the algorithm implies, it’s based on the well-known quicksort. The obvious difference is that instead of sorting data you are trying to wrap the given data in a hull. While searching for a working algorithm I found that all of the implementations either had breaking bugs or broke down on an artist-provided mesh, which made them totally unusable. This caused some pain for me, since it meant I had to implement it myself from scratch.

It might be based on quicksort, but since this is in three dimensions it isn’t as easy as just dividing the vertices in half like you do in quicksort, or in quickhull for two dimensions. Instead you have to generate a simplex from four points in the point cloud. Here’s what some of the implementations I found did wrong: they took the first four points in the input list. This might work for the random point clouds that are often used to demonstrate the technique, but not in an actual use case with a mesh. The problem that occurs is that these four vertices most often form a plane together, as two triangles of the mesh. You are not going to get a simplex from that, and the end result is a very wonky-looking convex hull.


The mesh to the left was generated by the original algorithm I found, while the right one is my implementation.

As you can see, the left one does not work at all; that’s not a simplex, that gives you a plane, and an incorrect plane at that. My version is the one you should always implement, as it gives you a guaranteed simplex if one is possible given the input data. It felt like a ‘duh’ moment when I read the other implementations, since this can also fail for them. All you have to do to get a proper simplex is find a triangle: if you have a mesh as input, just use the first triangle, otherwise use the first three points. Then take the plane these three points create and find the first point that is not on that plane. This point will be the fourth point you use to create the simplex. If there is no fourth point off the plane, then I’m sorry, but your mesh is already as convex as it will ever be, because the entire mesh is a plane. :)
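A minimal C# sketch of that initial simplex selection, assuming XNA’s Vector3, a point list whose first three points are not collinear, and an illustrative epsilon tolerance:

    using System;
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    static class InitialSimplex
    {
        // Returns the indices of four points that form a valid (non-degenerate)
        // tetrahedron, or null if every point lies on one plane.
        public static int[] Find(IList<Vector3> points, float epsilon = 1e-4f)
        {
            // Use the first three points (or the first triangle of the mesh) as the base.
            Vector3 a = points[0], b = points[1], c = points[2];
            Vector3 normal = Vector3.Normalize(Vector3.Cross(b - a, c - a));

            // Find the first point with a non-zero signed distance to the base plane.
            for (int i = 3; i < points.Count; i++)
            {
                float distance = Vector3.Dot(points[i] - a, normal);
                if (Math.Abs(distance) > epsilon)
                    return new[] { 0, 1, 2, i };
            }
            return null; // every point is on the plane: the input is already as convex as it gets
        }
    }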


Left is generated with the invalid simplex and right is generated with the correct simplex

When you finally have your initial simplex, that’s when you really have to start playing around with the mathematics. The algorithm is based on each face of the current mesh (which starts as the initial simplex) having a list of the points that are in front of it rather than behind it. What you want to achieve is to empty these lists, so you iterate through the faces until you eventually run out of faces. During each iteration you take the point in the current face’s outside list that is furthest away from it, find the edges that let you connect to that point, and create new triangles connected with the neighbours of the current face. You assign new outside points to these newly made faces and, at last, invalidate the current face so that it is no longer part of the mesh. And voilà! You have extended your mesh to cover that one furthest point. Now it just has to be done with the remaining faces and the new ones that are created, and eventually you will encapsulate all the points completely in a convex hull.

It’s pretty simple and straightforward when you take a step back and just look at it with an “alright, what is it that we are actually trying to do here?” attitude. The biggest problem I found was that I had to write a lot of code for edge detection and new-triangle creation before I could actually test whether anything worked, so I was always in an “I hope I didn’t screw up big time now” state, and of course I did. In retrospect I should have isolated each separate part and really tested that the mathematics worked as I intended, on data whose result I knew. For instance, I got something as simple as the side detection of faces wrong without knowing it, and it caused a major delay because it was the last thing I looked at. I had to start writing down the formulas step by step, draw the result on paper and try to see what was actually happening before I saw that I had made a simple but fatal error in the plane dot test that determines which side a point is on. Kind of embarrassing really.
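For reference, that side test is just the sign of the signed distance from the point to the face’s plane. A minimal sketch, assuming the face normal points out of the hull (the epsilon is an illustrative tolerance):

    using Microsoft.Xna.Framework;

    static class PlaneSide
    {
        // Positive: the point is in front of (outside) the face.
        // Negative: behind it. Near zero: on the plane.
        public static float SignedDistance(Vector3 point, Vector3 pointOnFace, Vector3 faceNormal)
        {
            return Vector3.Dot(point - pointOnFace, faceNormal);
        }

        public static bool IsOutside(Vector3 point, Vector3 pointOnFace, Vector3 faceNormal,
                                     float epsilon = 1e-4f)
        {
            return SignedDistance(point, pointOnFace, faceNormal) > epsilon;
        }
    }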

In conclusion, mathematics is a cruel mistress but so rewarding when you finally treat her right.

November 2, 2012

Deferred Decals in Project Temporality

by Niklas Hansson

One of the big issues with being a smaller independent studio making a 3D game is the staggering amount of content that is necessary for a good-looking game. Not only does every single piece of content need to be of a very high quality, but you also need a ton of it, and often a lot of small variations to break up the look of the world and not let the player get bored of seeing exactly the same models again and again.

Here at Defrost we built our tech targeting this issue from the very first day; it’s one of the reasons we decided to build our own engine. We wanted an engine that allows artists to create good-looking models in as little time as possible, so that we can make the most of their efforts. A lot of different decisions stemmed from this, like implementing a bounced-light solution to allow easy lighting of rooms and to break up the look visually with all the extra detail, a multi-resolution SSAO implementation, adopting a physically based lighting model, and more. We will hopefully have time to cover all of these later on, but today we are going to talk about a recent addition that we feel can make a world of difference to the overall look of the game.

Decals are a well-known technique; you can use them to create bullet holes, put up posters and generally add a bit of clutter. For Project Temporality we were looking for a way to create unique visual details in different rooms without putting the artists through the hell of actually making all those unique details. So we turned to decals. We were inspired by the videos of how artists easily stamped out unique-looking environments for Rage and wondered if we could do something like that. Combining that idea with the memory of the normal-mapped craters we created in World in Conflict at Massive, we decided to try making a detail system that overrides any data in the base model, so that we can completely change the material, look and feel, and that is easy to apply to any surface.

Normal decals, where you take a polygon and cut it up to match the underlying geometry, didn’t work out for us because we have a lot of detail in the models instead of in the normal map, so we looked to deferred decals instead. Basically it’s a deferred rendering technique where you render a box and, for any position inside that box, find the position in a decal space defined by the box and use it to look up into a texture map. Instead of going for the volume decal style like Humus, we decided that 2D decals would be good enough; after all, if it works for CryEngine it works for us. After that it was easy to get the basic code up and running. XNA’s limitations with the depth buffer, and the fact that the Xbox loses the data in render targets when they aren’t bound, were kind of hurtful. In the end we got it to work with a decent solution, but if XNA would just allow us to bind and unbind render targets individually for all the different slots this wouldn’t even have been a problem. While XNA theoretically could do the right thing on its side with its declarative API, it was never optimized to the level where it did :/ But then again, implementing RGBM for HDR in 3.1 had its problems too.
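A minimal sketch of the core lookup, assuming you have already reconstructed the pixel’s world-space position from the depth buffer. Using the inverse of the decal box’s world matrix and remapping to 0..1 texture coordinates is the standard approach, but the names here are illustrative, and this is C# standing in for what would be shader code.

    using System;
    using Microsoft.Xna.Framework;

    static class DeferredDecal
    {
        // Maps a reconstructed world-space position into the decal's texture space.
        // 'worldToDecal' is the inverse of the decal box's world matrix, so inside
        // the box the local position lands in [-0.5, 0.5] on every axis.
        public static Vector2? DecalUv(Vector3 worldPos, Matrix worldToDecal)
        {
            Vector3 local = Vector3.Transform(worldPos, worldToDecal);

            if (Math.Abs(local.X) > 0.5f || Math.Abs(local.Y) > 0.5f || Math.Abs(local.Z) > 0.5f)
                return null; // the pixel is outside the decal box, leave the G-buffer untouched

            // Project onto the box's XY plane and remap to [0, 1] texture coordinates.
            return new Vector2(local.X + 0.5f, local.Y + 0.5f);
        }
    }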

Obviously everything here is coder art and not meant to be real content; we are just playing around with the technique. One of its nice properties that does not show up in these screens is its ability to wrap around corners and bent shapes quite well. In the image below you can clearly see the white box, which is the primitive we render to project the decal. Thanks to XNA removing the border sample mode in 4.0, our artists need to make certain that the border of the texture has a black alpha value :/

When working on decals we thought it would be a nice idea to be able to add water puddles, oil spills etc. to the game with decals too. However, fluids work quite differently physically compared to solid objects, and since this is a deferred renderer we can’t just add transparent objects at the end, due to lighting issues. In the end we opted to render the fluids into the G-buffer too, so we could resolve lighting on them properly. We do this by blending the material of the fluid with the material below it, where the blending factor is based on how much light would reflect from the fluid at the current viewing angle; this allows us to model physically very different fluids quite effectively. It is still a hack though, but thanks to us resolving environmental lighting in the G-buffer pass we can get quite close to a realistic result. This shows well on this coder-made water puddle: when viewed from above the water is mostly transparent, but when viewed from a low angle you can see a clear reflection. For the player this means the puddles show clear reflections at a distance, but he can see through them when he comes closer, just like with real water.
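That view-dependent blend factor behaves like a Fresnel term. A minimal sketch using Schlick’s approximation; the base reflectance of roughly 0.02 for water and the function names are illustrative, and we are not claiming this is the exact formula in the shader.

    using System;
    using Microsoft.Xna.Framework;

    static class FluidDecalBlend
    {
        // Schlick's Fresnel approximation: how much light the fluid surface reflects
        // at a given viewing angle. Grazing angles -> close to 1 (mirror-like),
        // looking straight down -> close to the base reflectance (mostly see-through).
        // 'viewDir' is the normalized direction from the surface toward the camera.
        public static float BlendFactor(Vector3 viewDir, Vector3 surfaceNormal,
                                        float baseReflectance = 0.02f) // ~water at normal incidence
        {
            float cosTheta = MathHelper.Clamp(Vector3.Dot(viewDir, surfaceNormal), 0f, 1f);
            return baseReflectance + (1f - baseReflectance) * (float)Math.Pow(1f - cosTheta, 5.0);
        }
    }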

This was the first of our weekly updates (we might try to get even more in, though), so every Friday from now on will be update day and we will try to bring you more information about Project Temporality and what we are working on. If you want more frequent news, follow us on Facebook and Twitter.