
PostProcess

Chuck Walbourn edited this page Aug 11, 2022 · 13 revisions

Post-processing is a common technique applied to 3D rendering to achieve various effects. It is image-based, so its performance depends on the size of the 2D render target rather than the complexity of the scene. The post-processing implementation in the DirectX Tool Kit provides common effects like monochrome and bloom, as well as the tone mapping that is essential to High Dynamic Range (HDR) rendering.

  • BasicPostProcess supports post-processing that takes a single input texture such as monochrome conversion or blurring.
  • DualPostProcess supports post-processing that operates on two images such as merging/blending.
  • ToneMapPostProcess supports tone-map operations for HDR images such as the Reinhard operator. It also supports the HDR10 signal preparation needed for true 4k UHD wide color gamut rendering.

Related tutorial: Using HDR rendering

classDiagram
class IPostProcess{
   <<Interface>>
   +Process()
}
class BasicPostProcess{
    +SetSourceTexture
}
IPostProcess <|-- BasicPostProcess
class DualPostProcess{
    +SetSourceTexture
    +SetSourceTexture2
}
IPostProcess <|-- DualPostProcess
class ToneMapPostProcess{
    +SetOperator
    +SetTransferFunction
}
IPostProcess <|-- ToneMapPostProcess

Header

#include "PostProcess.h"

Initialization

Construction requires a Direct3D 12 device, an effect selection, and a render target state description:

RenderTargetState rtState(m_deviceResources->GetBackBufferFormat(),
    m_deviceResources->GetDepthBufferFormat());

postProcess = std::make_unique<BasicPostProcess>(device, rtState, BasicPostProcess::Sepia);

Process relies on the correct render target already being set on the command list, as well as the correct viewport and scissor rectangles. The source texture(s) must also be in the D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE state.

Usage

To make use of post-processing, you typically render the scene to an offscreen render texture.

auto heapProperties = CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT);

D3D12_RESOURCE_DESC desc = CD3DX12_RESOURCE_DESC::Tex2D(DXGI_FORMAT_R16G16B16A16_FLOAT,
    width, height,
    1, 1, 1, 0, D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET);

D3D12_CLEAR_VALUE clearValue = { DXGI_FORMAT_R16G16B16A16_FLOAT, { 0.f, 0.f, 0.f, 0.f } };

device->CreateCommittedResource(&heapProperties, D3D12_HEAP_FLAG_ALLOW_ALL_BUFFERS_AND_TEXTURES,
        &desc,
        D3D12_RESOURCE_STATE_RENDER_TARGET, &clearValue,
        IID_PPV_ARGS(sceneTex.ReleaseAndGetAddressOf()));

device->CreateRenderTargetView(sceneTex.Get(), nullptr,
    rtvDescriptors->GetCpuHandle(RTDescriptors::SceneRT));

device->CreateShaderResourceView(sceneTex.Get(), nullptr,
    resourceDescriptors->GetCpuHandle(Descriptors::SceneTex));

Instead of rendering to the usual render target that is created as part of the DXGI swap chain, you set the offscreen texture as your scene render target:

auto rtvDescriptor = rtvDescriptors->GetCpuHandle(RTDescriptors::SceneRT);
CD3DX12_CPU_DESCRIPTOR_HANDLE dsvDescriptor(
    dsvDescriptorHeap->GetCPUDescriptorHandleForHeapStart());

commandList->OMSetRenderTargets(1, &rtvDescriptor, FALSE, &dsvDescriptor);
commandList->ClearRenderTargetView(rtvDescriptor, color, 0, nullptr);
commandList->ClearDepthStencilView(dsvDescriptor, D3D12_CLEAR_FLAG_DEPTH, 1.0f, 0, 0, nullptr);

Then you render the scene as normal. When the scene is fully rendered, you then change the render target and use the previously generated render texture as your source texture (and you don't use a depth/stencil buffer for the post-processing).

{
    auto barrier = CD3DX12_RESOURCE_BARRIER::Transition(sceneTex.Get(),
        D3D12_RESOURCE_STATE_RENDER_TARGET,
        D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE, 0);
    commandList->ResourceBarrier(1, &barrier);
}

{
    CD3DX12_CPU_DESCRIPTOR_HANDLE rtvDescriptor(
        rtvDescriptorHeap->GetCPUDescriptorHandleForHeapStart(), backBufferIndex,
        rtvDescriptorSize);
    commandList->OMSetRenderTargets(1, &rtvDescriptor, FALSE, nullptr);
}

postProcess->SetSourceTexture(resourceDescriptors->GetGpuHandle(Descriptors::SceneTex),
    sceneTex.Get());
postProcess->Process(commandList);

Note that you have to transition the scene texture from the render-target state to the pixel-shader-resource state, as shown above, and it must be transitioned back to the render-target state before the next frame is rendered.

In some cases, you will perform several post-processing passes between various off-screen render targets before applying the final pass to the swapchain render target for presentation.

You can make use of the RenderTexture helper to manage the offscreen render target.

Interface

The post-processing system provides an IPostProcess interface to simplify use. The only method in this interface is Process, which is expected to execute the post-processing pass with the result placed in the currently bound render target.

void Process(ID3D12GraphicsCommandList* commandList);

Example

A Bloom or Glow post-processing effect can be achieved with the following series of post-processing passes:

ppBloomExtract = std::make_unique<BasicPostProcess>(device, rtState,
    BasicPostProcess::BloomExtract);

ppBloomBlur = std::make_unique<BasicPostProcess>(device, rtState,
    BasicPostProcess::BloomBlur);

ppBloomCombine = std::make_unique<DualPostProcess>(device, rtState,
    DualPostProcess::BloomCombine);

// The scene is rendered to a render texture sceneTex with the
// SceneTex SRV descriptor

// blur1Tex, Blur1Tex, Blur1RT is a render texture, typically half the
// width & height of the original scene to save memory

// blur2Tex, Blur2Tex, Blur2RT is another half-sized render texture

{
    auto barrier = CD3DX12_RESOURCE_BARRIER::Transition(sceneTex.Get(),
        D3D12_RESOURCE_STATE_RENDER_TARGET,
        D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE, 0);
    commandList->ResourceBarrier(1, &barrier);
}

// Pass 1 (scene -> blur1)
ppBloomExtract->SetSourceTexture(resourceDescriptors->GetGpuHandle(Descriptors::SceneTex),
    sceneTex.Get());
ppBloomExtract->SetBloomExtractParameter(0.25f);

auto blurRT1 = rtvDescriptors->GetCpuHandle(RTDescriptors::Blur1RT);
commandList->OMSetRenderTargets(1, &blurRT1, FALSE, nullptr);

auto vp = m_deviceResources->GetScreenViewport();

SimpleMath::Viewport halfvp(vp);
halfvp.height /= 2.f;
halfvp.width /= 2.f;
commandList->RSSetViewports(1, halfvp.Get12());

ppBloomExtract->Process(commandList);

{
    auto barrier = CD3DX12_RESOURCE_BARRIER::Transition(blur1Tex.Get(),
        D3D12_RESOURCE_STATE_RENDER_TARGET,
        D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE, 0);
    commandList->ResourceBarrier(1, &barrier);
}

// Pass 2 (blur1 -> blur2)
ppBloomBlur->SetSourceTexture(resourceDescriptors->GetGpuHandle(Descriptors::Blur1Tex),
    blur1Tex.Get());
ppBloomBlur->SetBloomBlurParameters(true, 4.f, 1.f);

auto blurRT2 = rtvDescriptors->GetCpuHandle(RTDescriptors::Blur2RT);
commandList->OMSetRenderTargets(1, &blurRT2, FALSE, nullptr);

ppBloomBlur->Process(commandList);

{
    CD3DX12_RESOURCE_BARRIER barriers[2] =
    {
        CD3DX12_RESOURCE_BARRIER::Transition(blur1Tex.Get(),
        D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE,
        D3D12_RESOURCE_STATE_RENDER_TARGET, 0),
        CD3DX12_RESOURCE_BARRIER::Transition(blur2Tex.Get(),
        D3D12_RESOURCE_STATE_RENDER_TARGET,
        D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE, 0),
    };
    commandList->ResourceBarrier(2, barriers);
}

// Pass 3 (blur2 -> blur1)
ppBloomBlur->SetSourceTexture(resourceDescriptors->GetGpuHandle(Descriptors::Blur2Tex),
    blur2Tex.Get());
ppBloomBlur->SetBloomBlurParameters(false, 4.f, 1.f);

commandList->OMSetRenderTargets(1, &blurRT1, FALSE, nullptr);

ppBloomBlur->Process(commandList);

{
    auto barrier = CD3DX12_RESOURCE_BARRIER::Transition(blur1Tex.Get(),
        D3D12_RESOURCE_STATE_RENDER_TARGET,
        D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE, 0);
    commandList->ResourceBarrier(1, &barrier);
}

// Pass 4 (scene+blur1 -> rt)
ppBloomCombine->SetSourceTexture(resourceDescriptors->GetGpuHandle(Descriptors::SceneTex));
ppBloomCombine->SetSourceTexture2(resourceDescriptors->GetGpuHandle(Descriptors::Blur1Tex));
ppBloomCombine->SetBloomCombineParameters(1.25f, 1.f, 1.f, 1.f);

auto rtvDescriptor = deviceResources->GetRenderTargetView();
commandList->OMSetRenderTargets(1, &rtvDescriptor, FALSE, nullptr);

commandList->RSSetViewports(1, &vp);

ppBloomCombine->Process(commandList);

{
    CD3DX12_RESOURCE_BARRIER barriers[2] =
    {
        CD3DX12_RESOURCE_BARRIER::Transition(blur1Tex.Get(),
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE,
            D3D12_RESOURCE_STATE_RENDER_TARGET, 0),
        CD3DX12_RESOURCE_BARRIER::Transition(blur2Tex.Get(),
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE,
            D3D12_RESOURCE_STATE_RENDER_TARGET, 0),
    };
    commandList->ResourceBarrier(2, barriers);
}

Remarks

Because DirectX 12 Pipeline State Objects (PSOs) are immutable, all options must be provided to the constructor.

Threading model

Creation is fully asynchronous, so you can instantiate multiple effect instances at the same time on different threads. Each instance only supports drawing from one thread at a time, but you can simultaneously post-process on multiple threads if you create a separate effect instance per command list.

State management

When Process is called, it sets all the state needed to render the post-processing pass, including the root signature and the Pipeline State Object (PSO).

Further reading

Video post-processing
