Learn DirectX 11
The sole purpose of this project is learning the DirectX 11 API. The book Practical Rendering and Computation with Direct3D 11 is used as the main source of knowledge. The second part of the book consists of various rendering techniques and examples. Some of these examples are implemented in this project (from very basic up to some advanced techniques). The implementation process follows the structure of the book.
- Chapter 8. Mesh Rendering
- Chapter 9. Dynamic Tessellation
- Chapter 10. Image Processing
- It seems I am not going to implement the rest of the examples from the book. The main reason is that I have become confident enough with DirectX 11 and want to implement more advanced techniques on my own.
The example is pretty simple, but it is a good starting point for implementing more sophisticated techniques. The scene contains a cube and a directional light source.
The main goals are:
- Initialize DirectX device, context and swap chain.
- Load geometry and texture data and upload the data into the GPU.
- Load and compile vertex and pixel shaders.
- Set up the pipeline and draw the scene.
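To make the last two goals more concrete, here is a minimal sketch of the shader side of such an example: a vertex/pixel shader pair with simple diffuse lighting from a directional light. All names (cb_transforms, g_light_dir, and so on) are illustrative assumptions, not the project's actual code.

```hlsl
// Minimal sketch: vertex/pixel shaders for a lit, textured cube.
cbuffer cb_transforms : register(b0)
{
    float4x4 g_world;
    float4x4 g_view_proj;
    float3   g_light_dir;   // normalized direction, world space
};

Texture2D    g_texture : register(t0);
SamplerState g_sampler : register(s0);

struct VS_Input
{
    float3 position : POSITION;
    float3 normal   : NORMAL;
    float2 uv       : TEXCOORD0;
};

struct PS_Input
{
    float4 position : SV_Position;
    float3 normal   : NORMAL;
    float2 uv       : TEXCOORD0;
};

PS_Input vs_main(VS_Input input)
{
    PS_Input output;
    float4 world_pos = mul(float4(input.position, 1.0f), g_world);
    output.position = mul(world_pos, g_view_proj);
    // Assumes g_world contains no non-uniform scaling.
    output.normal = mul(input.normal, (float3x3)g_world);
    output.uv = input.uv;
    return output;
}

float4 ps_main(PS_Input input) : SV_Target
{
    float3 n = normalize(input.normal);
    float  diffuse = saturate(dot(n, -g_light_dir));
    float4 albedo = g_texture.Sample(g_sampler, input.uv);
    return albedo * diffuse;
}
```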
The example uses the famous Bob Lamp model, which has one animation. There is also a directional light source in the scene.
Vertices of such an animated model have the following format: position, normal, texture coordinates, four bone indices, and four bone weights. The vertex shader fetches the appropriate bone matrices from a StructuredBuffer<float4x4> by bone indices and uses the bone weights to blend the resulting vertex position.
The vertex shader itself is pretty simple (a sketch is shown below). The main challenge is to load a model with its animations, compose the vertex data (including bone indices and weights), and implement the animation algorithm that computes the bone matrices for each frame while preserving the bone hierarchy.
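A minimal sketch of such a skinning vertex shader, assuming the final bone matrices are already bound as a StructuredBuffer; the names, semantics, and register assignments are illustrative:

```hlsl
// Sketch of vertex skinning with a StructuredBuffer of bone matrices.
StructuredBuffer<float4x4> g_bone_matrices : register(t0);

cbuffer cb_transforms : register(b0)
{
    float4x4 g_world_view_proj;
};

struct VS_Input
{
    float3 position     : POSITION;
    float3 normal       : NORMAL;
    float2 uv           : TEXCOORD0;
    uint4  bone_indices : BONEINDICES;
    float4 bone_weights : BONEWEIGHTS;
};

float4 vs_main(VS_Input input) : SV_Position
{
    // Blend the vertex position by the four bone transforms,
    // weighted by the per-vertex bone weights (weights sum to 1).
    float4 pos     = float4(input.position, 1.0f);
    float4 skinned = float4(0.0f, 0.0f, 0.0f, 0.0f);
    [unroll]
    for (uint i = 0; i < 4; ++i)
    {
        skinned += input.bone_weights[i] *
                   mul(pos, g_bone_matrices[input.bone_indices[i]]);
    }
    return mul(skinned, g_world_view_proj);
}
```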
The original example from the book processes an animated mesh: the vertex shader performs skinning, and then the tessellation stages tessellate the skinned vertices (control points). This implementation skips the skinning part (see the previous example). There are no vertex or index buffers; the vertex shader generates two triangles that form a quad and passes two patches (three control points each) to the hull shader. The hull shader passes the control points' attributes through and selects the maximum inner/outer tessellation factors; no smart LOD-based decisions are made (a naive implementation). The domain shader receives the patches, samples the displacement map, and emits new vertices. In addition, a normal map is used to light the displaced surface properly.
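A hedged sketch of the naive hull/domain stages described above; the names, the factor value of 64, and the quad's up axis are assumptions:

```hlsl
// Sketch: pass-through hull shader with fixed maximum tessellation
// factors, plus a domain shader that displaces the surface.
Texture2D    g_displacement_map : register(t0);
SamplerState g_sampler          : register(s0);

cbuffer cb_tess : register(b0)
{
    float4x4 g_view_proj;
    float    g_displacement_scale;
};

struct ControlPoint
{
    float3 position : WORLDPOS;
    float2 uv       : TEXCOORD0;
};

struct PatchConstants
{
    float edges[3] : SV_TessFactor;
    float inside   : SV_InsideTessFactor;
};

PatchConstants hs_constants(InputPatch<ControlPoint, 3> patch)
{
    // Naive: always the maximum factor, no LOD-based decisions.
    PatchConstants output;
    output.edges[0] = 64.0f;
    output.edges[1] = 64.0f;
    output.edges[2] = 64.0f;
    output.inside   = 64.0f;
    return output;
}

[domain("tri")]
[partitioning("integer")]
[outputtopology("triangle_cw")]
[outputcontrolpoints(3)]
[patchconstantfunc("hs_constants")]
ControlPoint hs_main(InputPatch<ControlPoint, 3> patch,
                     uint i : SV_OutputControlPointID)
{
    return patch[i];   // plain pass-through of control points
}

[domain("tri")]
float4 ds_main(PatchConstants constants,
               float3 bary : SV_DomainLocation,
               const OutputPatch<ControlPoint, 3> patch) : SV_Position
{
    // Interpolate the patch attributes with barycentric coordinates.
    float3 pos = bary.x * patch[0].position +
                 bary.y * patch[1].position +
                 bary.z * patch[2].position;
    float2 uv  = bary.x * patch[0].uv +
                 bary.y * patch[1].uv +
                 bary.z * patch[2].uv;

    // Displace along the quad's normal (assumed +Y here) by the
    // height read from the displacement map.
    float height = g_displacement_map.SampleLevel(g_sampler, uv, 0.0f).r;
    pos += float3(0.0f, 1.0f, 0.0f) * height * g_displacement_scale;
    return mul(float4(pos, 1.0f), g_view_proj);
}
```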
The terrain geometry is represented by a flat grid on the XZ plane in 3D space. The majority of the work is done in the hull shader's constant function, which determines the LOD for the current patch (a cell of the terrain grid) using the patch being rendered as well as its four immediate neighbors. The main hull shader acts as a simple pass-through of the four corner vertices of the current patch. The domain shader takes the tessellated locations and generates the actual terrain geometry that is finally rasterized.
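The book's version derives the factors from the current patch and its four neighbors; the sketch below shows a simpler distance-based variant in which each edge factor is computed at the edge midpoint, so two adjacent patches arrive at the same factor for their shared edge and no cracks appear. All names and the distance thresholds are illustrative:

```hlsl
// Sketch of a distance-based LOD constant function for a quad
// terrain patch.
cbuffer cb_lod : register(b0)
{
    float3 g_camera_pos;
    float  g_min_dist;   // distance of maximum tessellation
    float  g_max_dist;   // distance of minimum tessellation
};

struct HS_Input
{
    float3 position : WORLDPOS;
};

struct HS_PatchConstants
{
    float edges[4]  : SV_TessFactor;
    float inside[2] : SV_InsideTessFactor;
};

float factor_from_distance(float3 p)
{
    float t = saturate((distance(p, g_camera_pos) - g_min_dist) /
                       (g_max_dist - g_min_dist));
    return lerp(64.0f, 1.0f, t);
}

HS_PatchConstants hs_constants(InputPatch<HS_Input, 4> patch)
{
    HS_PatchConstants output;
    // One factor per edge, evaluated at the edge midpoint so that
    // the neighboring patch computes an identical value.
    [unroll]
    for (uint i = 0; i < 4; ++i)
    {
        float3 mid = 0.5f * (patch[i].position +
                             patch[(i + 1) & 3].position);
        output.edges[i] = factor_from_distance(mid);
    }
    output.inside[0] = max(output.edges[0], output.edges[2]);
    output.inside[1] = max(output.edges[1], output.edges[3]);
    return output;
}

[domain("quad")]
[partitioning("fractional_even")]
[outputtopology("triangle_cw")]
[outputcontrolpoints(4)]
[patchconstantfunc("hs_constants")]
HS_Input hs_main(InputPatch<HS_Input, 4> patch,
                 uint i : SV_OutputControlPointID)
{
    return patch[i];   // pass the four corner vertices through
}
```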
The example is an enhanced version of the terrain tessellation example. The main idea is that the camera distance alone is not a completely reliable measure for choosing tessellation factors: some patches have to be heavily tessellated due to their geometric complexity. To estimate the geometric complexity of each patch, we can compute some kind of smoothness factor for it. This computation is done once by a compute shader. The shader takes the height map as its input and outputs a 2D texture that contains a smoothness (coplanarity) value for each patch. These values are used by the hull shader's constant function to compute the tessellation factors.
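One way such a coplanarity pass might look (the exact metric used by the example may differ): estimate, for each patch, how far the interior heights deviate from the surface spanned by the four corner heights. The patch size, the names, and the one-thread-per-patch dispatch are assumptions:

```hlsl
// Sketch of a one-time compute pass that estimates per-patch
// "coplanarity" from the height map.
Texture2D<float>   g_height_map : register(t0);
RWTexture2D<float> g_smoothness : register(u0);

#define PATCH_SIZE 16

[numthreads(1, 1, 1)]
void cs_main(uint3 patch_id : SV_DispatchThreadID)
{
    uint2 base = patch_id.xy * PATCH_SIZE;

    // Corner heights of the patch.
    float h00 = g_height_map[base];
    float h10 = g_height_map[base + uint2(PATCH_SIZE, 0)];
    float h01 = g_height_map[base + uint2(0, PATCH_SIZE)];
    float h11 = g_height_map[base + uint2(PATCH_SIZE, PATCH_SIZE)];

    // Maximum deviation of interior samples from the corner-spanned
    // surface: 0 means perfectly smooth, larger means more complex.
    float max_dev = 0.0f;
    for (uint y = 0; y <= PATCH_SIZE; ++y)
    {
        for (uint x = 0; x <= PATCH_SIZE; ++x)
        {
            float2 t = float2(x, y) / PATCH_SIZE;
            float expected = lerp(lerp(h00, h10, t.x),
                                  lerp(h01, h11, t.x), t.y);
            float actual = g_height_map[base + uint2(x, y)];
            max_dev = max(max_dev, abs(actual - expected));
        }
    }
    g_smoothness[patch_id.xy] = max_dev;
}
```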
The example implements a compute shader which takes an input image and filters it with a Gaussian filter. The Gaussian filter is separable, which the example exploits: the input image is filtered horizontally first and then vertically. The sampling direction is controlled in the compute shader by the g_offset_dir parameter.
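A sketch of one pass of this shader; the same code is dispatched twice, with g_offset_dir set to (1, 0) for the horizontal pass and (0, 1) for the vertical pass. The kernel size, the weights, and the resource names are illustrative:

```hlsl
// Sketch of one pass of the separable Gaussian blur.
Texture2D<float4>   g_input  : register(t0);
RWTexture2D<float4> g_output : register(u0);

cbuffer cb_filter : register(b0)
{
    int2 g_offset_dir;   // (1,0) = horizontal, (0,1) = vertical
};

// Normalized 7-tap Gaussian weights (sigma ~= 1.5).
static const float g_weights[7] =
{
    0.0366f, 0.1113f, 0.2167f, 0.2707f, 0.2167f, 0.1113f, 0.0366f
};

[numthreads(16, 16, 1)]
void cs_main(uint3 id : SV_DispatchThreadID)
{
    uint width, height;
    g_input.GetDimensions(width, height);

    float4 sum = float4(0.0f, 0.0f, 0.0f, 0.0f);
    [unroll]
    for (int i = -3; i <= 3; ++i)
    {
        // Step along the pass direction, clamping at the borders.
        int2 coord = clamp(int2(id.xy) + i * g_offset_dir,
                           int2(0, 0), int2(width - 1, height - 1));
        sum += g_weights[i + 3] * g_input[coord];
    }
    g_output[id.xy] = sum;
}
```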
The example implements a compute shader which takes an input image and filters it with a bilateral filter. The bilateral filter is not separable, so it is not mathematically correct to reuse the two-pass approach from the Gaussian filter example. However, in many cases a separable approximation still looks acceptable, and that is what the example does.
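A sketch of one pass of such a separable bilateral approximation; compared with the Gaussian pass above, each tap is additionally weighted by a range term that falls off with the color difference from the center texel, which is what preserves edges. The names and sigma values are illustrative:

```hlsl
// Sketch of one pass of the separable bilateral filter approximation.
Texture2D<float4>   g_input  : register(t0);
RWTexture2D<float4> g_output : register(u0);

cbuffer cb_filter : register(b0)
{
    int2  g_offset_dir;    // pass direction, as in the Gaussian pass
    float g_range_sigma;   // e.g. 0.1 for [0,1] color values
};

static const float g_spatial_weights[7] =
{
    0.0366f, 0.1113f, 0.2167f, 0.2707f, 0.2167f, 0.1113f, 0.0366f
};

[numthreads(16, 16, 1)]
void cs_main(uint3 id : SV_DispatchThreadID)
{
    uint width, height;
    g_input.GetDimensions(width, height);

    float4 center = g_input[id.xy];
    float4 sum = float4(0.0f, 0.0f, 0.0f, 0.0f);
    float  total_weight = 0.0f;

    [unroll]
    for (int i = -3; i <= 3; ++i)
    {
        int2 coord = clamp(int2(id.xy) + i * g_offset_dir,
                           int2(0, 0), int2(width - 1, height - 1));
        float4 texel = g_input[coord];

        // Range term: down-weight samples whose color differs from
        // the center, which is what preserves edges.
        float diff = length(texel.rgb - center.rgb);
        float range_w = exp(-(diff * diff) /
                            (2.0f * g_range_sigma * g_range_sigma));

        float w = g_spatial_weights[i + 3] * range_w;
        sum += w * texel;
        total_weight += w;
    }
    // Weights vary per pixel, so normalize by their sum.
    g_output[id.xy] = sum / total_weight;
}
```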