Proposals for custom shaders/rendering #903
Comments
Just out of curiosity, what would this allow a developer to do? Or maybe a better way to ask: if you were to create an example for this, what would that example show? Why would someone want this? It sounds cool; I think I am just missing the point.
It would allow a developer to directly draw GPU-rendered content for things that would be inefficient or awkward to express using iced primitives. For complex content, this can be a big performance improvement. In my case, I have custom charting shaders that I would like to render as part of a UI. In the comment by hecrj linked above, he mentions custom 3D graphics as a potential use case. The example would likely just have a custom shader that draws some simple pattern or figure as the contents of a widget. It would be similar to the
I prefer a special texture widget that can display the GPU buffer, so that we can draw whatever we want to the offscreen buffer and then display it as a simple image.
I think that is exactly the question this issue is trying to determine how to solve. How do you expose a custom image to the library user in such a way that they can efficiently draw on it? There are a couple of ways "custom" widgets are currently exposed in iced, including the

Abstracting over GPU operations is very tricky, as efficiently exposing the details of the GPU operations without depending on the specifics of driver, hardware, or library is difficult. This issue proposes several ways of wiring through the backend-specific instructions for custom rendering without CPU overhead. It is almost certain that this functionality must be exposed in a backend-specific way (or the feature is equivalent to abstracting over all backends, which iced already does not do; see, for example, the backend-specific code in In such a scenario the application is likely already

I think the fundamental problem is how to associate each custom texture/buffer with a particular widget without leaking the texture/buffer type (which is backend-specific) into the non-backend portions of the API. If you see a better way to do this than the approaches I suggest above, please let me know. It's possible I've overlooked an approach.

It's possible that we could simply take the approach in #1003 and formalize it a bit, providing at a minimum some helper functions or hooks in the wgpu renderer that allow the custom-rendered-widget implementer to easily access an appropriately sized texture (along with the necessary state, such as a
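A hypothetical sketch of what such a hook might bundle (illustrative names, not an existing iced API):

```rust
// Hypothetical sketch only: a context the wgpu renderer could hand to a
// custom-rendered widget, bundling an appropriately sized render target with
// the state needed to record draw calls into it.
pub struct CustomDrawContext<'a> {
    pub device: &'a wgpu::Device,
    pub encoder: &'a mut wgpu::CommandEncoder,
    pub staging_belt: &'a mut wgpu::util::StagingBelt,
    /// A texture view sized to the widget's layout bounds.
    pub target: &'a wgpu::TextureView,
}
```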
Iced may not know anything about additional rendering passes. After all, the rendering loop may look like:
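(A minimal sketch of such a frame, assuming the application drives `wgpu` itself; the `Pass` trait and parameter names below are placeholders, not iced APIs:)

```rust
/// Placeholder trait standing in for anything that records wgpu work:
/// the custom passes as well as whatever wraps the iced renderer.
trait Pass {
    fn record(&mut self, encoder: &mut wgpu::CommandEncoder, target: &wgpu::TextureView);
}

fn render_frame(
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    target: &wgpu::TextureView,
    custom_background: &mut dyn Pass, // e.g. charts, a 3D scene, instanced sprites
    iced_ui: &mut dyn Pass,           // the iced UI pass
    custom_overlay: &mut dyn Pass,    // custom passes layered on top of the UI
) {
    let mut encoder = device
        .create_command_encoder(&wgpu::CommandEncoderDescriptor { label: Some("frame") });

    // Iced only ever sees the middle step; it knows nothing about the others.
    custom_background.record(&mut encoder, target);
    iced_ui.record(&mut encoder, target);
    custom_overlay.record(&mut encoder, target);

    queue.submit(Some(encoder.finish()));
}
```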
Update: It could even be
Just here to express my interest in this. I have a lot of custom rendering in my app because I need instanced rendering and some animations. My solution to this is to simply drive the whole app myself and use iced as a side thing that renders UI on top of my wgpu rendering. I would prefer iced to drive my app and handle pages/scenes, layouts, and all of that, so my custom rendering can be easily encapsulated in widgets. I don't want to reinvent the wheel and write my own scene handling, layouts, async task handling, event handling, etc., if iced can already do all of that with a nice architecture.
Hi! Thanks for the wonderful UI library.
I would like to add support for custom rendering of a widget (i.e. not just a custom widget made of existing primitives, but a widget that uses a custom shader to draw itself on the screen). I have experimented with a couple ways to achieve this, but would appreciate some feedback on the potential approaches before I commit to fully implementing any of them, as you may have an opinion on which is the best fit for this project.
The only discussion of custom rendering that I have been able to find is here: #32 (comment) but perhaps I've missed some discussion elsewhere.
I plan to do this custom rendering using `wgpu` (and therefore `iced_wgpu`), but as you will see, some of the changes would reach into the lower-level crates.

The general approaches I have been able to come up with, in order of most complete/invasive to least, are:
1. Genericize `iced_graphics::Primitive`

Genericize `iced_graphics::Primitive` with a type (used for a new `Custom` primitive variant) which can be specified by the downstream application in its creation of a custom `iced_graphics::Renderer`. When no custom rendering is required, this type would be set to the unit type. This would have a very far-reaching impact on iced, as the generic parameter would cascade into pretty much everything in `iced_graphics` and downstream (as well as probably breaking, in a relatively shallow way, existing custom widgets). It's possible some of the breakage could be minimized by extracting a lower-level type, e.g. `GenericPrimitive<CustomType>`, and leaving `Primitive` as is. The internal APIs would then be written in terms of `GenericPrimitive<CustomType>`, but the external API would use the non-generic `Primitive` (which would map to `GenericPrimitive<()>`).

The `wgpu::Backend` would then have a generic `CustomPipeline` (which would have a `draw` function that takes a command encoder, output target, staging belt, etc.) which it would invoke with the `Custom` primitives. Alternatively, a completely separate backend could be implemented that delegates to `wgpu::Backend` (though that would likely require making more of `wgpu::Backend` public to allow the custom primitives to be drawn in the correct layers).
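Roughly sketched (type and trait names here are hypothetical, not the current `iced_graphics`/`iced_wgpu` API), this could look like:

```rust
// Hypothetical shape of proposal 1: the primitive type gains a generic
// parameter and a `Custom` variant, and the wgpu backend gains a pipeline
// hook that is invoked for those variants.
pub enum GenericPrimitive<Custom> {
    Group {
        primitives: Vec<GenericPrimitive<Custom>>,
    },
    // ...the other existing variants (Quad, Text, Clip, ...)...
    Custom(Custom),
}

/// The existing, non-generic type could remain as an alias so that current
/// custom widgets keep compiling.
pub type Primitive = GenericPrimitive<()>;

/// Backend hook for drawing the `Custom` primitives in the correct layers.
pub trait CustomPipeline<Custom> {
    fn draw(
        &mut self,
        device: &wgpu::Device,
        staging_belt: &mut wgpu::util::StagingBelt,
        encoder: &mut wgpu::CommandEncoder,
        target: &wgpu::TextureView,
        primitive: &Custom,
    );
}
```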
2. Genericize via `Renderer::Output`

Leave `Primitive` as it is and do all the genericization via the `Renderer::Output`. Implementing a custom renderer requires implementing the `Renderer`s for all the different widgets, which could be eased by genericizing most of the default `iced_graphics` widget `Renderer`s to output `Renderer::Output` instead of `(Primitive, mouse::Interaction)`, via an `Into` which can be a no-op in the case of a default renderer and must have a conversion implemented for a custom renderer.

This gets tricky (or impossible) to genericize for some widget types (e.g. container types like `Row`, or anything whose widget logic interacts with `Renderer::Output` concretely), but again some `Into` magic may let the custom renderer reuse as much of the existing `iced_graphics` renderers as possible.
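A small, self-contained illustration of the conversion idea (the types below are stand-ins, not iced's own):

```rust
// Stand-in types for illustration only.
struct Primitive;
struct Interaction;

/// A custom renderer's output: the default output plus whatever extra data
/// the custom renderer needs to schedule its own draw calls.
struct CustomOutput {
    default: (Primitive, Interaction),
    custom_passes: Vec<u64>,
}

impl From<(Primitive, Interaction)> for CustomOutput {
    fn from(default: (Primitive, Interaction)) -> Self {
        Self {
            default,
            custom_passes: Vec::new(),
        }
    }
}

/// A built-in widget renderer written against a generic output type serves
/// both the default renderer (via the identity `From` impl) and a custom one.
fn draw_widget<Output: From<(Primitive, Interaction)>>() -> Output {
    Output::from((Primitive, Interaction))
}

fn main() {
    let _default: (Primitive, Interaction) = draw_widget();
    let _custom: CustomOutput = draw_widget();
}
```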
3. Introduce a non-generic, non-type-safe custom primitive handle

Drop the type safety for the custom renderer by introducing a non-generic variant `Primitive::Custom`, which would hold a `struct CustomPrimitiveHandle(u64)` and an inner `Primitive`. The meaning of this handle would be up to the custom widget and its custom renderer. These primitives would be passed to a `CustomPipeline` (as described in 1), which could then render them as it sees fit. This approach avoids the generics from the custom renderer leaking into the rest of the code, but has the disadvantage that the custom renderer must express all of its primitive behavior via this integer handle (likely by looking up the primitive via its handle somewhere in the custom renderer).
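Sketched out (again with hypothetical names), the handle and its lookup might look like:

```rust
use std::collections::HashMap;

/// Opaque handle whose meaning is defined by the custom widget and its
/// custom renderer.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct CustomPrimitiveHandle(pub u64);

/// Hypothetical non-generic variant added to the primitive type.
pub enum Primitive {
    // ...the existing variants...
    Custom {
        handle: CustomPrimitiveHandle,
        /// Inner content still drawn by the standard pipelines.
        inner: Box<Primitive>,
    },
}

/// The custom pipeline resolves handles to whatever state it keeps.
pub struct CustomPipeline<State> {
    registered: HashMap<CustomPrimitiveHandle, State>,
}

impl<State> CustomPipeline<State> {
    pub fn state(&self, handle: CustomPrimitiveHandle) -> Option<&State> {
        self.registered.get(&handle)
    }
}
```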
4. Add an introspection API to assist in locating external rendering passes

Don't try to integrate the custom rendering directly into the iced rendering calls, but instead provide an "introspection" API which allows an external application to look up the position of a given widget or its primitives. The application can then use this information to know exactly where to render its custom graphics given the rest of the UI layout. Of course, this doesn't play well with heavily layered user interfaces (as we can't interleave the custom draw calls with a single iced draw call), but one could still draw some widgets below and some widgets above the custom widgets by issuing multiple iced draws for different sets of widgets.
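A minimal sketch of what such an API could expose (hypothetical trait and stand-in types, illustrative only):

```rust
/// Widget bounds in window coordinates (stand-in for iced's own Rectangle).
#[derive(Debug, Clone, Copy)]
pub struct Rectangle {
    pub x: f32,
    pub y: f32,
    pub width: f32,
    pub height: f32,
}

/// Hypothetical introspection hook: after layout, the application asks where
/// a widget ended up and issues its own draw calls for that region between
/// separate iced draws.
pub trait LayoutIntrospection {
    fn widget_bounds(&self, widget_id: &str) -> Option<Rectangle>;
}
```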
Solutions 1-3 would also prompt the question of whether the custom renderer should be included in `iced::Application` and `iced::Sandbox` (which would complicate their interfaces). It might be better to avoid exposing the custom renderer there and instead require that anyone who wants to use a custom renderer must also implement a custom application (like the `integration` example).

Are any of these solutions something that you think is worth considering for iced? If so, I can draft a PR to make the proposal more concrete than this wall of text.