First Person Shooter
Procyon
This is the latest full game project that I contributed to at The Game Assembly, built in the same engine as Spite. In this project I focused mostly on our graphics engine, making many different improvements to its features and performance. I go more in depth about the base of the rendering engine in the project entry on Spite: Serpentide.
The deferred rendering method is not well suited to rendering transparent objects, as lighting is only applied to the contents of the gbuffer, which discards geometry hidden behind other objects. To render windows in our engine, I chose to implement a forward renderer specifically for drawing transparent objects on top of the rendered scene.
Each object culls the lights that affect it and then uses them to render the object through the same PBR functions as solid objects. The shader then samples the previously generated gbuffer texture, which acts as the refracted light of the object. It is possible to do screen space refraction at this stage, which I implemented as shown below. However, it was inconsistent and did not produce desirable results for gameplay, so we opted to simply use the linearly sampled value instead. A reflective value is also computed, which similarly used to employ screen space reflection against the generated gbuffer. This was also scrapped; instead, the skybox is sampled for the reflective value.
The three generated colors are then blended using a Fresnel function as described in chapter 19 of GPU Gems 2.
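Below is a minimal HLSL sketch of how such a transparent pixel shader could fit together. The resource names, the PixelInput layout and the stand-in light evaluation are assumptions made for illustration, not the project's actual code.

```hlsl
struct PixelInput
{
    float4 position      : SV_POSITION;
    float3 worldPosition : WORLDPOS;
    float3 normal        : NORMAL;
};

struct PointLight
{
    float3 position;
    float  radius;
    float3 color;
    float  padding;
};

cbuffer TransparencyBuffer : register(b0)
{
    float3 CameraPosition;
    uint   NumCulledLights;   // lights culled for this object on the CPU
    float2 ScreenResolution;
    float2 Padding;
};

StructuredBuffer<PointLight> CulledLights : register(t2);
Texture2D    SceneTexture  : register(t0); // scene rendered so far, behind the object
TextureCube  Skybox        : register(t1);
SamplerState LinearSampler : register(s0);

// Stand-in for the engine's PBR evaluation: a simple attenuated Lambert term.
float3 EvaluateLight(PixelInput input, PointLight light)
{
    float3 toLight = light.position - input.worldPosition;
    float  atten   = saturate(1.0f - length(toLight) / light.radius);
    return light.color * atten * saturate(dot(input.normal, normalize(toLight)));
}

float4 TransparentPS(PixelInput input) : SV_TARGET
{
    // Direct lighting from the per-object culled lights, using the same kind of
    // lighting functions as solid geometry (simplified to EvaluateLight here).
    float3 lit = 0.0f;
    for (uint i = 0; i < NumCulledLights; ++i)
        lit += EvaluateLight(input, CulledLights[i]);

    // "Refracted" light: a plain linear sample of the scene behind the object.
    // (An earlier version offset screenUV here for screen space refraction,
    // but that was cut in favour of the straight sample.)
    float2 screenUV  = input.position.xy / ScreenResolution;
    float3 refracted = SceneTexture.Sample(LinearSampler, screenUV).rgb;

    // Reflected light: sampled from the skybox instead of screen space reflection.
    float3 toPixel   = normalize(input.worldPosition - CameraPosition);
    float3 reflected = Skybox.Sample(LinearSampler, reflect(toPixel, input.normal)).rgb;

    // Blend the three colors with a Fresnel term (GPU Gems 2, chapter 19 style).
    float fresnel = pow(1.0f - saturate(dot(-toPixel, input.normal)), 5.0f);
    return float4(lerp(refracted, reflected, fresnel) + lit, 1.0f);
}
```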
Transparent Object Rendering
Each decal is stored as three components: its transform as a 4 by 4 matrix, its size in each direction, and its textures. When drawn, an inside-out cube is set as the geometry for the vertex shader and is transformed and projected like a model, allowing the rasterizer to cull away a majority of the screen before the pixel stage. The cube is inside out so that a view from inside the cube can still see the decal. In the pixel shader stage, each pixel the cube covers on the screen is tested with a point vs OBB check, sampling the position texture from the previously generated gbuffer. If the point is outside the cube, the pixel is discarded.
If the pixel passes the point vs OBB test, the decal's textures are applied to the appropriate textures in the gbuffer by taking the local position along the OBB's x and z directions and transforming it into UV coordinates. Each texture is also controlled by bit flags stored in the decal. Albedo and material are simply sampled from the texture and then applied. Normals are applied by generating a TBN matrix, using the transform of the decal to provide the normal, tangent and bitangent.
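The HLSL sketch below shows the shape of that decal pixel stage. Buffer layouts, texture slots and flag values are illustrative assumptions rather than the project's actual interface.

```hlsl
cbuffer DecalBuffer : register(b1)
{
    float4x4 DecalTransform;   // decal orientation and position in the world
    float4x4 WorldToDecal;     // inverse transform: world -> decal local space
    float3   DecalHalfSize;    // half extents along each local axis
    uint     TextureFlags;     // bit flags selecting which textures to apply
    float2   ScreenResolution;
    float2   Padding;
};

Texture2D    GBufferPositions : register(t0); // world space positions from the gbuffer
Texture2D    DecalAlbedo      : register(t1);
Texture2D    DecalNormalMap   : register(t2);
SamplerState LinearSampler    : register(s0);

struct DecalOutput
{
    float4 albedo : SV_TARGET0;
    float4 normal : SV_TARGET1;
};

DecalOutput DecalPS(float4 position : SV_POSITION)
{
    DecalOutput output = (DecalOutput)0;

    // World position of the surface under this pixel, read from the gbuffer.
    float2 screenUV = position.xy / ScreenResolution;
    float3 worldPos = GBufferPositions.Sample(LinearSampler, screenUV).xyz;

    // Point vs OBB: move the point into the decal's local space and test the extents.
    float3 localPos = mul(WorldToDecal, float4(worldPos, 1.0f)).xyz;
    if (any(abs(localPos) > DecalHalfSize))
        discard;

    // The local x and z coordinates become UV coordinates across the decal.
    float2 uv = localPos.xz / (DecalHalfSize.xz * 2.0f) + 0.5f;

    if (TextureFlags & 1)
        output.albedo = DecalAlbedo.Sample(LinearSampler, uv);

    if (TextureFlags & 2)
    {
        // Build a TBN matrix from the decal transform's axes and rotate the
        // sampled tangent space normal into world space.
        float3 tangent   = normalize(DecalTransform[0].xyz);
        float3 normal    = normalize(DecalTransform[1].xyz);
        float3 bitangent = normalize(DecalTransform[2].xyz);
        float3x3 tbn     = float3x3(tangent, bitangent, normal);

        float3 sampled = DecalNormalMap.Sample(LinearSampler, uv).xyz * 2.0f - 1.0f;
        output.normal  = float4(normalize(mul(sampled, tbn)), 1.0f);
    }
    return output;
}
```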
Deferred Decals
From the beginning of the project we knew we wanted to be able to apply decals to the environment geometry to allow for more detail work, as well as to create bullet impacts. Because the engine uses deferred rendering, the decal application process is slightly easier, since the decals do not have to be included when rendering each object as one would in a forward renderer.
When planning our first gold sprint, we noted that the lighting could feel flat and that some objects would blend together color-wise. To combat this I implemented a version of the SSAO effect popularized by Crysis. First I researched other implementations of the algorithm, then set out to make my own.
The configuration for the shader takes the gbuffer's position and normal textures, and outputs to the lit texture generated after the light pass. Additionally, it takes a 'to camera' and a 'to projection' matrix as parameters in a constant buffer, alongside a sampling kernel of 64 float4s. At the start of runtime, the sampling kernel is generated by creating a random point with a positive y, normalizing it, and then multiplying it by a random scalar. This gives a random spread of points fanning out from the origin.
When the shader executes, it samples the normal and position textures for each pixel. It then constructs a transform for the pixel, using the normal as the y direction and building the other directions of the matrix by taking cross products with a reference vector. This introduces some chaos into the algorithm that is noticeable on fast-moving objects, such as the player's hand in Procyon; however, it is masked well by the blurring stage and other movement. The shader then loops through each point in the sampling kernel, using the pixel's matrix to transform the sample point into world space. The sample point is then transformed into camera space and projection space, providing a new spot to sample from the position texture. Once sampled, the depth of the point and the sampled depth are compared; if the sampled depth is closer than the point, an obscurance factor is increased. A smooth range check is also in place to make sure the sampled position is within a distance threshold of the pixel's position, as otherwise the shader would outline every object that has something behind it. Once each point in the kernel is processed, the resulting obscurance is divided by the size of the kernel and used to blend darkness onto the lit texture.
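The HLSL sketch below outlines that SSAO pass, assuming the kernel is generated on the CPU at startup and uploaded in the constant buffer as described. The names, the left-handed view-space convention, the depth bias and the reference-vector guard are illustrative assumptions, not the project's exact code.

```hlsl
cbuffer SSAOBuffer : register(b0)
{
    float4x4 ToCamera;       // world -> camera space
    float4x4 ToProjection;   // camera -> projection space
    float4   Kernel[64];     // random points fanning out from the origin, +y up
    float    Radius;         // sampling radius / range check threshold
    float3   Padding;
};

Texture2D    GBufferPositions : register(t0); // world space positions
Texture2D    GBufferNormals   : register(t1);
SamplerState PointSampler     : register(s0);

float4 SSAOPS(float4 position : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
{
    float3 worldPos = GBufferPositions.Sample(PointSampler, uv).xyz;
    float3 normal   = normalize(GBufferNormals.Sample(PointSampler, uv).xyz);

    // Basis for this pixel with the normal as the y axis; the fixed reference
    // vector is what introduces the slight per-pixel chaos mentioned above.
    // (Swapping the reference when it is nearly parallel to the normal avoids NaNs.)
    float3 reference = abs(normal.z) < 0.99f ? float3(0.0f, 0.0f, 1.0f)
                                             : float3(1.0f, 0.0f, 0.0f);
    float3 tangent   = normalize(cross(reference, normal));
    float3 bitangent = cross(normal, tangent);
    float3x3 basis   = float3x3(tangent, normal, bitangent);

    float obscurance = 0.0f;
    for (uint i = 0; i < 64; ++i)
    {
        // Kernel point -> world space around the pixel.
        float3 samplePos = worldPos + mul(Kernel[i].xyz, basis) * Radius;

        // World -> camera -> projection space, then to texture coordinates.
        float4 viewPos  = mul(ToCamera, float4(samplePos, 1.0f));
        float4 clipPos  = mul(ToProjection, viewPos);
        float2 sampleUV = clipPos.xy / clipPos.w * float2(0.5f, -0.5f) + 0.5f;

        // Depth of the geometry actually present at the sample's screen position.
        float3 sceneWorldPos = GBufferPositions.Sample(PointSampler, sampleUV).xyz;
        float  sceneDepth    = mul(ToCamera, float4(sceneWorldPos, 1.0f)).z;

        // Smooth range check so distant geometry does not outline every object.
        float rangeCheck = smoothstep(0.0f, 1.0f, Radius / abs(viewPos.z - sceneDepth));

        // Left-handed view space assumed: smaller z means closer to the camera.
        if (sceneDepth < viewPos.z - 0.01f)
            obscurance += rangeCheck;
    }

    // Darkness weighted by the averaged obscurance, alpha blended onto the lit texture.
    return float4(0.0f, 0.0f, 0.0f, obscurance / 64.0f);
}
```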
Screen Space Ambient Occlusion
For this project I made a text rendering tool. The renderer reads font files generated by 'msdfgen' by Chlumsky (GitHub), and fully utilizes the multichannel signed distance field. When a text object is created, the text factory does the following:
- Find all the characters in the input string within the generated font atlas. If one is missing, the text is not changed and the user is given a warning. Each character's data is added to a vertex and index buffer.
- The vertex and index buffers are mapped into DX11 and a text rendering shader configuration is set as the active state. An output texture is set as the render target, which can then be used by a sprite or decal.
- The renderer makes a draw call to draw to the texture.
These steps are only repeated whenever the text is updated, so the text does not have to be redrawn every frame.
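For reference, a typical pixel shader for drawing glyphs from an msdfgen atlas samples the texture and takes the median of its three channels. The HLSL sketch below shows that general technique; the names and the pixel-range constant are assumptions and it is not necessarily the exact shader used in the project.

```hlsl
Texture2D    FontAtlas     : register(t0);
SamplerState LinearSampler : register(s0);

cbuffer TextBuffer : register(b0)
{
    float4 TextColor;
    float  PixelRange;  // distance field range expressed in screen pixels
    float3 Padding;
};

float Median(float a, float b, float c)
{
    return max(min(a, b), min(max(a, b), c));
}

float4 TextPS(float4 position : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
{
    // The multichannel distance field stores a distance in each of R, G and B;
    // taking the median reconstructs sharp corners that a single channel loses.
    float3 msd = FontAtlas.Sample(LinearSampler, uv).rgb;
    float  sd  = Median(msd.r, msd.g, msd.b);

    // Convert the [0,1] distance to screen-pixel units and soften over one pixel.
    float screenDist = PixelRange * (sd - 0.5f);
    float alpha      = saturate(screenDist + 0.5f);

    return float4(TextColor.rgb, TextColor.a * alpha);
}
```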