Add support for importing external buffers #3355

Merged (5 commits) on Jul 7, 2023

Conversation

AdrianEddy
Contributor

Checklist

  • Run cargo clippy.
  • Run RUSTFLAGS=--cfg=web_sys_unstable_apis cargo clippy --target wasm32-unknown-unknown if applicable.
  • Add change to CHANGELOG.md. See simple instructions inside file.

Connections
Related to #3145 and #2320

Description
This PR introduces create_buffer_from_hal in addition to the existing create_texture_from_hal, and also adds buffer_from_raw to the Dx12 and Metal backends.

These new functions are mostly copy-paste from create_texture_from_hal, with minor adjustments for buffers instead of textures.
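
For illustration, importing an external buffer would look roughly like this (a sketch only: it assumes create_buffer_from_hal has the same shape as create_texture_from_hal, and that the wgpu-hal buffer was already wrapped from an external ID3D12Resource via the new Dx12 buffer_from_raw):

```rust
// Sketch, not the exact merged API: assumes create_buffer_from_hal mirrors
// create_texture_from_hal (an unsafe method on wgpu::Device that takes a
// ready-made wgpu-hal buffer plus a wgpu::BufferDescriptor).
fn import_external_dx12_buffer(
    device: &wgpu::Device,
    hal_buffer: wgpu_hal::dx12::Buffer, // wrapped from an external ID3D12Resource
    size: wgpu::BufferAddress,
) -> wgpu::Buffer {
    unsafe {
        device.create_buffer_from_hal::<wgpu_hal::api::Dx12>(
            hal_buffer,
            &wgpu::BufferDescriptor {
                label: Some("imported external buffer"),
                size,
                usage: wgpu::BufferUsages::COPY_SRC | wgpu::BufferUsages::COPY_DST,
                mapped_at_creation: false,
            },
        )
    }
}
```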

I also wanted to add buffer_from_raw to Vulkan, but I'm not sure what should be used for the block field (Mutex<gpu_alloc::MemoryBlock<vk::DeviceMemory>>). How does MemoryBlock work here if the memory is provided externally? Should buffer_from_raw also take a vk::DeviceMemory argument and somehow fit it into the MemoryBlock struct?

Testing
I tested the external Dx12 buffer by rendering to a normal wgpu::Texture, copying with encoder.copy_texture_to_buffer and copy_buffer_to_texture, and then rendering that buffer in an external application. Everything worked as expected.
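
The copy step of that test looks roughly like the following (a simplified sketch, not the actual test code; it assumes an RGBA8 texture whose row size is already a multiple of wgpu::COPY_BYTES_PER_ROW_ALIGNMENT):

```rust
// Copy a rendered wgpu::Texture into the imported buffer so an external
// application can consume it.
fn copy_into_imported_buffer(
    encoder: &mut wgpu::CommandEncoder,
    texture: &wgpu::Texture,
    imported: &wgpu::Buffer,
    width: u32,
    height: u32,
) {
    encoder.copy_texture_to_buffer(
        wgpu::ImageCopyTexture {
            texture,
            mip_level: 0,
            origin: wgpu::Origin3d::ZERO,
            aspect: wgpu::TextureAspect::All,
        },
        wgpu::ImageCopyBuffer {
            buffer: imported,
            layout: wgpu::ImageDataLayout {
                offset: 0,
                bytes_per_row: Some(width * 4), // 4 bytes per texel for RGBA8
                rows_per_image: Some(height),
            },
        },
        wgpu::Extent3d {
            width,
            height,
            depth_or_array_layers: 1,
        },
    );
}
```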

@cwfitzgerald
Member

I also wanted to add buffer_from_raw to Vulkan, but I'm not sure what should be used for the block field (Mutex<gpu_alloc::MemoryBlock<vk::DeviceMemory>>). How does MemoryBlock work here if the memory is provided externally? Should buffer_from_raw also take a vk::DeviceMemory argument and somehow fit it into the MemoryBlock struct?

I think this just needs to be optional as it isn't our responsibility to free it or manage its memory at all.

This PR generally looks fine, but there doesn't seem to be a reasonable way of actually using the buffer you import. Specifically, the first time it's used in wgpu-land we'll issue a transition from UNDEFINED and the contents of the buffer are now moot. Additionally, there's no way to specify barriers for transitioning control to and from wgpu-land.

@AdrianEddy
Contributor Author

Specifically, the first time it's used in wgpu-land we'll issue a transition from UNDEFINED and the contents of the buffer are now moot.

Right, so we should assume that externally imported buffers are already initialized and ready to use. I've now changed that to use BufferInitTracker::new(0), which I believe means "there are 0 bytes left to initialize".

I also added buffer_from_raw to the Vulkan backend and tested it. I made block optional, and this works, but it means these buffers cannot be mapped by wgpu. That may be fine for some use cases, but not for others. We could make it work by adding Option<vk::DeviceMemory> as a second argument, but I was unable to figure out how to fit that with gpu_alloc and its private MemoryBlock::new function. This is fine for my case, as I'm only interested in GPU-only interoperability; if I wanted GPU-CPU interop, I would just use a normal wgpu.create_buffer.
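
For illustration, the Vulkan-side change has roughly this shape (sketched here as a constructor on the buffer type; in the actual change the entry point is called buffer_from_raw, so treat the exact names and fields as assumptions):

```rust
use std::sync::Mutex;
use ash::vk;

// Sketch of the HAL-side buffer with the allocation block made optional.
pub struct Buffer {
    raw: vk::Buffer,
    // None for externally imported buffers: wgpu neither frees the backing
    // VkDeviceMemory nor can it map the buffer, since mapping needs the block.
    block: Option<Mutex<gpu_alloc::MemoryBlock<vk::DeviceMemory>>>,
}

impl Buffer {
    /// # Safety
    /// The caller owns the VkBuffer and its memory, keeps both alive for the
    /// lifetime of the returned value, and frees them afterwards.
    pub unsafe fn from_raw(vk_buffer: vk::Buffer) -> Self {
        Self {
            raw: vk_buffer,
            block: None,
        }
    }
}
```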

I tested it using the Metal, Dx12, and Vulkan backends, and it all worked as expected.

Additionally, there's no way to specify barriers for transitioning control to and from wgpu-land.

In which cases is this needed? Do we need a way to specify barriers before this feature can be merged?

@cwfitzgerald
Member

Have you run sync validation against your example? Unless I'm missing some component of this, which is possible, we should end up issuing a weirdly meaningless barrier. I think in practice it might work, but it is likely invalid.

After https://github.com/gfx-rs/wgpu/pull/3355/files#diff-90e54105789cf7268e0acbfd914035fccb5967895c81d21ac60f94af6b439edaR4020 the buffer is stored with no state in the tracker. This is how new buffers are made in wgpu as well, so that's fine. But on first usage we'll issue a barrier for {empty}..UNIFORM or whatever, and I'm not positive that won't invalidate the contents of the buffer (or that it isn't generally invalid, since the previous usage was something else, not empty). See https://github.com/gfx-rs/wgpu/blob/master/wgpu-hal/src/vulkan/conv.rs#L516-L567 for how we map our usages to barrier usages.
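
To make that concrete, in raw ash terms the first-use barrier on an imported buffer would look something like this (an illustrative sketch, not the exact barrier wgpu-hal records; the stages and the UNIFORM_READ destination are just example values):

```rust
use ash::vk;

// On the imported buffer's first use, the tracker has no previous wgpu usage
// recorded, so the source side of the barrier is empty even though an external
// producer may have written the buffer outside wgpu's knowledge.
unsafe fn record_first_use_barrier(
    device: &ash::Device,
    cmd: vk::CommandBuffer,
    buffer: vk::Buffer,
) {
    let barrier = vk::BufferMemoryBarrier {
        src_access_mask: vk::AccessFlags::empty(), // the "{empty}" side
        dst_access_mask: vk::AccessFlags::UNIFORM_READ, // example destination usage
        src_queue_family_index: vk::QUEUE_FAMILY_IGNORED,
        dst_queue_family_index: vk::QUEUE_FAMILY_IGNORED,
        buffer,
        offset: 0,
        size: vk::WHOLE_SIZE,
        ..Default::default()
    };
    device.cmd_pipeline_barrier(
        cmd,
        vk::PipelineStageFlags::TOP_OF_PIPE, // nothing to wait on: the empty source
        vk::PipelineStageFlags::VERTEX_SHADER,
        vk::DependencyFlags::empty(),
        &[],
        &[barrier],
        &[],
    );
}
```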

@AdrianEddy
Contributor Author

I have to dig deeper into the tracking and barrier machinery; I don't have the full picture of that process, and something weird might be going on here.

It works correctly for my case with Metal/Vulkan/Dx12, but on Metal and Dx12 I'm getting much worse performance with buffers than textures (same data, buffer ~20ms/frame vs texture ~10ms/frame). Vulkan is fast (3ms) with both buffers and textures. I'm not sure if it has to do with these barriers and tracking, but it might, so I need to investigate that.

If you have any ideas or hints, please let me know.

@i509VCB
Contributor

i509VCB commented Feb 7, 2023

My comment does not need to be addressed in this pull request, but I believe it would probably also be useful to replicate something like #3019 for external buffers. In particular, on Linux a VkBuffer can be shared between processes using external memory handles.

@cwfitzgerald left a comment
Member

Looks good, thanks for the patience!
