Sketch of the proposed buffer mapping change #13
Conversation
cc @kvark
Thank you for expressing the proposed API changes in code!
I have the following questions/concerns:
- Basically, this would introduce another state of the buffer, between GPU-used and CPU-mapped. That state is reached by a callback, which I suppose only fires when the GPU is done with the resource. What is the point of delaying the very act of mapping in this case? If the user needs it to be mapped, they might as well just get it mapped. If the user needs to do some more GPU stuff with it, then this would conflict with the user having a "mappable" buffer on their hands.
- What use cases are going to become simpler when using the proposed API?
- Moving the `Buffer` in and out is likely to introduce friction for the users.
/// Note: This could be Device::map_buffer_write(&self, buffer: &mut WriteMappableBuffer) -> &[u8]
/// The important part is that this borrows both the mappable buffer and the device so that
/// the mapped memory doesn't outlive the device nor the buffer.
pub fn map(&mut self, device: &Device) -> &mut [u8] {
That also would prevent any submissions of command buffers, since `get_queue` is `&mut` today.
I see. I suppose it is not worse than what we have today, since the current callback can't access the device and do any submission either.
Do you think it would be possible for the callback to access the device mutably or would it be complicated/unsafe to do so?
Feels complicated/unsafe to me
OK, fair enough. Then I suppose passing it as non-mutable isn't worse than what we currently have, right?
Most of my answers below revolve around two aspects:
Having it mapped works just as well from the point of view of the user. I was worried that carrying around mapped memory would be hard to implement safely. For example, what happens if the device is dropped while the buffer is mapped and directly accessible? That's why I initially went for "mappable", where mapping happens lazily and borrows the device. You actually know what's under the hood, so if you say "mapping the buffer eagerly is fine and we don't need to borrow the device to keep things safe" then I'm all for it.
This is the point. In order to get access to buffer memory on the CPU without stalling, you need to stop using it on the GPU. Once you get your mapped buffer, if you want to use it on the GPU you need to stop accessing it on the CPU. These are two very distinct states which are, in my opinion, well described with separate types and move semantics. This is a case where Rust's unforgiving type system helps with expressing the "right" way to juggle between CPU and GPU access rather than adding friction.
Currently you can't map a buffer synchronously except if you are creating it.
This proposal makes it possible, and rather simple, to build a pool of buffers that are ready to be accessed on the CPU, which I believe is not possible right now. Such a pool would provide you with access to the buffer without a closure that consumes its environment.
In my opinion the largest source of friction isn't moving the buffer, it is the constraints that come from only being able to access the memory from a callback. On the other hand, if we were not to move the buffer:
In my opinion, the friction of moving the buffer in and out is actually lower than having to manually track whether it is waiting to be mapped, whether mapping has already been done, whether the content we want is in the buffer, and whether it can be accessed on the GPU.
Discussed over the voice, agreed on this path to proceed. Main benefits:
- escaping the closure context
- ease of pooling
No wgpu-native changes are needed.
I've taken a look at this branch after proposing changes with this API in #76, and wgpu-rs has changed so much since this, I don't think the code in this PR is reusable. Additionally, wgpu-native doesn't expose any way to actually implement this functionality, I believe.
Since this is meant for wgpu-rs, we are free to make wgpu-native expose any required APIs under the `not(feature = "remote")` configuration (as Rust APIs, not C).
A change along the lines of this PR has landed in #344.
Motivations for this proposal are outlined in #9.
In a nutshell, rather than asynchronously requesting a buffer to be mapped, the user would asynchronously request a mappable version of the buffer (but not necessarily mapped yet) which they will be able to hold on to and map synchronously whenever they need.
Unmapping the mappable buffer turns it back into a regular `Buffer` which can be used again in command queues and the like. I believe that an API of this form (I am sure some changes would be preferable, if not required) would significantly improve ergonomics around mapping buffers over the current API.