
Feat: Offscreen Rendering & Screenshots #1845

Merged · 16 commits · Jun 27, 2023

Conversation

@bungoboingo (Contributor) commented May 11, 2023

This PR adds support for offscreen rendering in Iced! 🎉

(video: screenshot_demo.mov)

📸 Users can now capture screenshots 🖼️ using the window::screenshot() command.

fn update(&mut self, message: Self::Message) -> Command<Self::Message> {
    match message {
        Message::Screenshot => {
            return iced::window::screenshot(Message::ScreenshotData); // new!
        }
        // ...
    }
}
The window::screenshot command takes a message constructor that accepts a Screenshot. This is a new struct exposed in the iced_runtime crate which contains the RGBA bytes of the screenshot, in addition to its size.
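To make the shape concrete, here is a minimal stand-in for such a struct in plain Rust. The field names and the pixel helper are assumptions for illustration only, not the actual iced_runtime definition:

```rust
// Hypothetical mirror of the `Screenshot` struct described above:
// raw RGBA bytes plus the capture size. Not the real iced_runtime type.
struct Screenshot {
    bytes: Vec<u8>,
    width: u32,
    height: u32,
}

impl Screenshot {
    // Fetch the RGBA value of one pixel; bytes are tightly packed,
    // 4 bytes per pixel, row-major.
    fn pixel(&self, x: u32, y: u32) -> [u8; 4] {
        let i = ((y * self.width + x) * 4) as usize;
        [
            self.bytes[i],
            self.bytes[i + 1],
            self.bytes[i + 2],
            self.bytes[i + 3],
        ]
    }
}

fn main() {
    let shot = Screenshot {
        bytes: vec![0u8; 2 * 2 * 4],
        width: 2,
        height: 2,
    };
    // RGBA at 4 bytes per pixel means len == width * height * 4.
    assert_eq!(shot.bytes.len(), (shot.width * shot.height * 4) as usize);
    assert_eq!(shot.pixel(1, 1), [0, 0, 0, 0]);
}
```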

For now, this command captures the entire viewport; an optional crop() method is available which crops the bytes to the specified Rectangle.

(video: cropping.mov)
let cropped = screenshot.crop(Rectangle::<u32> {
    x: 0,
    y: 0,
    width: 500,
    height: 500,
});

This method returns a Result<Screenshot, CropError> to handle cases where the cropped region is out of bounds or not visible.
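The bounds check and row-by-row byte copy that such a crop performs can be sketched with stand-in types. The type and variant names here are assumptions for illustration, not the real iced API:

```rust
// Illustrative stand-ins; the real Rectangle/CropError live in iced.
struct Rectangle { x: u32, y: u32, width: u32, height: u32 }

#[derive(Debug, PartialEq)]
enum CropError { OutOfBounds }

// Crop a tightly packed RGBA buffer (4 bytes per pixel, row-major)
// to the given region, failing if the region exceeds the image.
fn crop_bytes(
    bytes: &[u8],
    img_w: u32,
    img_h: u32,
    region: &Rectangle,
) -> Result<Vec<u8>, CropError> {
    if region.x + region.width > img_w || region.y + region.height > img_h {
        return Err(CropError::OutOfBounds);
    }
    let mut out = Vec::with_capacity((region.width * region.height * 4) as usize);
    for row in region.y..region.y + region.height {
        let start = ((row * img_w + region.x) * 4) as usize;
        out.extend_from_slice(&bytes[start..start + (region.width * 4) as usize]);
    }
    Ok(out)
}

fn main() {
    let bytes = vec![0u8; 4 * 4 * 4]; // a 4x4 RGBA image
    let ok = crop_bytes(&bytes, 4, 4, &Rectangle { x: 0, y: 0, width: 2, height: 2 });
    assert_eq!(ok.unwrap().len(), 2 * 2 * 4);
    let err = crop_bytes(&bytes, 4, 4, &Rectangle { x: 3, y: 3, width: 2, height: 2 });
    assert_eq!(err, Err(CropError::OutOfBounds));
}
```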

🎉 Offscreen rendering capabilities are now added to both wgpu and tiny_skia!

Rendering offscreen is now exposed in the Compositor interface with a render_offscreen function. The output texture data is always guaranteed to be in RGBA format in the sRGB color space, both for compatibility with image libraries & iced images and to give the Screenshot struct a predetermined byte order & pixel size.
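A rough sketch of the shape of such an interface, with the "always tightly packed RGBA" guarantee made explicit. The trait name, signature, and solid-color implementation are illustrative assumptions, not the actual iced Compositor:

```rust
// Hypothetical mock of an offscreen-rendering compositor interface.
trait OffscreenCompositor {
    /// Render the scene into an offscreen buffer and return tightly packed
    /// RGBA bytes (4 bytes per pixel), regardless of the surface format.
    fn render_offscreen(&mut self, width: u32, height: u32) -> Vec<u8>;
}

// A trivial backend that fills every pixel with one color; a real backend
// would run its render pass into an offscreen texture here.
struct SolidColor([u8; 4]);

impl OffscreenCompositor for SolidColor {
    fn render_offscreen(&mut self, width: u32, height: u32) -> Vec<u8> {
        self.0
            .iter()
            .copied()
            .cycle()
            .take((width * height * 4) as usize)
            .collect()
    }
}

fn main() {
    let mut comp = SolidColor([255, 0, 0, 255]); // opaque red
    let bytes = comp.render_offscreen(2, 3);
    // The guarantee described above: width * height * 4 RGBA bytes.
    assert_eq!(bytes.len(), 2 * 3 * 4);
    assert_eq!(&bytes[..4], &[255, 0, 0, 255]);
}
```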

🔢 Implementation details

For wgpu, performing offscreen rendering was fairly straightforward. I create a texture based on the viewport size & the chosen texture format, and then pass that into the backend with a new backend.offscreen method. This method performs a normal render pass on the new texture, and optionally runs a tiny compute shader after drawing all primitives that converts it to wgpu::TextureFormat::Rgba8Unorm with a manual sRGB conversion, since wgpu::TextureFormat::Rgba8UnormSrgb is not a supported format for a storage texture. The whole process on my (admittedly very powerful!) test suite of GPUs (AMD, NVIDIA, M1) took roughly 2µs in the test example. The only added overhead over a normal render pass is the compute shader to convert to RGBA.
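The manual conversion mentioned here is the standard linear-to-sRGB transfer function. Mirrored in plain Rust as a sketch for illustration (the PR performs this in WGSL shader code on the GPU):

```rust
// Standard linear -> sRGB encoding: linear below a small threshold,
// a gamma curve (exponent 1/2.4) above it.
fn linear_to_srgb(c: f32) -> f32 {
    if c <= 0.0031308 {
        12.92 * c
    } else {
        1.055 * c.powf(1.0 / 2.4) - 0.055
    }
}

fn main() {
    // Black and white map to themselves.
    assert_eq!(linear_to_srgb(0.0), 0.0);
    assert!((linear_to_srgb(1.0) - 1.0).abs() < 1e-6);
    // Mid-grey: linear 0.5 encodes to roughly 0.7354 in sRGB.
    assert!((linear_to_srgb(0.5) - 0.735357).abs() < 1e-4);
}
```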

For tiny-skia this was very straightforward: the same as a normal present call, but rendering to a new buffer instead of the surface buffer. Our backend returns ARGB format, so we just had to do some bit shiftin' and we're all good.
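For illustration, the kind of bit shifting involved in repacking an ARGB-packed pixel into RGBA byte order looks like this. The exact packing layout (alpha in the high byte, blue in the low byte) is an assumption here, not taken from the PR's code:

```rust
// Repack one ARGB-packed u32 pixel into RGBA byte order.
// Assumed layout: 0xAARRGGBB (alpha high, blue low).
fn argb_to_rgba(pixel: u32) -> [u8; 4] {
    let a = (pixel >> 24) as u8;
    let r = (pixel >> 16) as u8;
    let g = (pixel >> 8) as u8;
    let b = pixel as u8;
    [r, g, b, a]
}

fn main() {
    // Opaque pure red: A=0xFF, R=0xFF, G=0x00, B=0x00.
    assert_eq!(argb_to_rgba(0xFF_FF_00_00), [0xFF, 0x00, 0x00, 0xFF]);
    // Half-transparent green.
    assert_eq!(argb_to_rgba(0x80_00_FF_00), [0x00, 0xFF, 0x00, 0x80]);
}
```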

🤔 Outstanding API questions

  1. It was discussed whether a cleaner API would be a window::screenshot command that can optionally be cropped afterwards, or a window::screenshot(area) command where the area is either Fullscreen or a Region rectangle. There is obviously a performance hit when rendering the whole window to an offscreen buffer versus a potentially very small region, but the execution is so fast that it's almost negligible. I'm curious as to people's thoughts on this!

  2. The offscreen render capabilities are only exposed at the moment through this window::screenshot command. The intent was to implement the capabilities for offscreen rendering in this PR and expand upon it further in later iterations if new applications of offscreen rendering are needed. Curious to see if anyone has any thoughts on how else this might be exposed!

🧪 Testing

Tested the wgpu & tiny-skia implementations on macOS + M1, Linux (Pop!_OS) + AMD, and Windows 10 + Nvidia combos. Further testing would be greatly appreciated!

Feedback very much welcome! This was my first time working this deeply with textures on the GPU, so I might have made some rookie mistakes 😉 I also know there is an open draft PR for offscreen rendering, but it hasn't been updated in a few years, so I thought I'd take a crack at it.

(reopened from 1783 which was auto closed due to advanced-text merge)

@bungoboingo changed the title from [Feature] Gradients for Backgrounds to Feat: Offscreen Rendering & Screenshots on May 11, 2023
@hecrj added the feature (New feature or request), rendering, and shell labels on May 11, 2023
@hecrj hecrj added this to the 0.10.0 milestone May 11, 2023
@hecrj force-pushed the feat/offscreen-rendering branch from 778153c to 8820583 on June 6, 2023
Comment on lines 1 to 22
@group(0) @binding(0) var u_texture: texture_2d<f32>;
@group(0) @binding(1) var out_texture: texture_storage_2d<rgba8unorm, write>;

fn srgb(color: f32) -> f32 {
    if (color <= 0.0031308) {
        return 12.92 * color;
    } else {
        return (1.055 * (pow(color, (1.0 / 2.4)))) - 0.055;
    }
}

@compute @workgroup_size(1)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    // texture coord must be i32 due to a naga bug:
    // https://github.com/gfx-rs/naga/issues/1997
    let coords = vec2(i32(id.x), i32(id.y));

    let src: vec4<f32> = textureLoad(u_texture, coords, 0);
    let srgb_color: vec4<f32> = vec4(srgb(src.x), srgb(src.y), srgb(src.z), src.w);

    textureStore(out_texture, coords, srgb_color);
}
Member
This won't work with the web-colors feature for similar reasons to #1885.

I think we should avoid doing any color conversions in shader code. These should be handled by the GPU automatically.

I believe we can rewrite this using a render pipeline so we can easily control the output format from Rustland.

Contributor Author

I will look into it. I'm pretty sure I couldn't use an sRGB texture format for a storage texture, so it will have to be a render pipeline instead of compute, yeah.

@bungoboingo (Contributor Author) Jun 6, 2023

As I reimplemented this with a RenderPipeline, I remembered why my first attempt with a RenderPipeline failed: the RenderPassColorAttachment's TextureView (which in this case must be Rgba8UnormSrgb) must have the same format as the pipeline's color target textures (e.g. the frame's texture, which on my M1 is Bgra8UnormSrgb), which makes it entirely useless for converting between texture formats, unless I'm missing some hidden way to do this with a RenderPipeline. This is why I ended up using a ComputePipeline!

I could pass a flag into the compute shader as a uniform value to indicate whether or not to convert to sRGB..?

@hecrj (Member) Jun 8, 2023

Not sure I follow. I am fairly certain I have sampled textures with a different format than the output format in a render pipeline. It's the whole point of a sampler.

glyphon uses a masked texture with a single channel for most glyphs, for instance.

Contributor Author

I might have a different way to do it using a render pipeline; I'm poking around with it this morning.

Contributor Author

Nvm, I was just being a pepeg; should be good now 👍

@bungoboingo (Contributor Author) Jun 13, 2023

Taking another look, I think I can now just reuse the existing blit.wgsl shader we have for multisampling; will update this accordingly.

@bungoboingo (Contributor Author)

Updated in d955b34 to just use the existing blit.wgsl that we use for MSAA. Nice and simple!

@hecrj (Member) left a comment

Awesome! Thank you!

Just moved a couple things around to simplify some APIs, but everything seems to work! Let's merge! 🥳

@hecrj hecrj enabled auto-merge June 27, 2023 18:30
@hecrj hecrj merged commit f696626 into iced-rs:master Jun 27, 2023
9 checks passed
Labels: feature (New feature or request), rendering, shell