
Feedback on migration to trait-based design #3626

Open
madsmtm opened this issue Apr 12, 2024 · 21 comments
Labels
S - meta Project governance

Comments

@madsmtm
Member

madsmtm commented Apr 12, 2024

Thank you for your interest in sharing your feedback on Winit v0.30!

Background

Winit is moving towards a trait-based API for several reasons; see #3432 for details. This has quite a large impact on how you structure your code. While the design is not yet optimal and will need additional work, we feel confident that this is the right way forward, so in v0.30 we've tried to provide a somewhat gentle update path.

Feedback

We're aware that the API is more verbose; that is to be expected when implementing an entire trait instead of letting the compiler figure out the type annotations of a closure. What we're interested in knowing is what kinds of code are made harder or impossible with this new design. Were you using some pattern before that you can no longer do, or can no longer do easily? Please say so in a comment on this issue!
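
For reference, a minimal sketch of the before/after shape (the counter field is just an illustrative stand-in for whatever state your closure used to capture):

use winit::application::ApplicationHandler;
use winit::event::WindowEvent;
use winit::event_loop::{ActiveEventLoop, EventLoop};
use winit::window::WindowId;

#[derive(Default)]
struct App {
    counter: u32,
}

impl ApplicationHandler for App {
    fn resumed(&mut self, _event_loop: &ActiveEventLoop) {}

    fn window_event(&mut self, event_loop: &ActiveEventLoop, _window_id: WindowId, event: WindowEvent) {
        // State that used to live in the closure's captures now lives on `self`.
        self.counter += 1;
        if let WindowEvent::CloseRequested = event {
            event_loop.exit();
        }
    }
}

fn main() -> Result<(), winit::error::EventLoopError> {
    let event_loop = EventLoop::new()?;
    // Previously: event_loop.run(move |event, event_loop| { /* ... */ })
    event_loop.run_app(&mut App::default())
}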


In this release we also changed how windows are created, but that is not strictly part of this issue - please submit a new issue if you're having trouble with that. Note that we're well aware that Option<Window> is a bit cumbersome; in the future, managing window state and creating/destroying windows at the right times will likely become easier, see #2903 for some of the progress on that.

@madsmtm madsmtm added the S - meta Project governance label Apr 12, 2024
@XavierCS-dev

I used a mut closure as the user's main loop for my game engine, but with this new setup I am struggling to figure out how to set it up properly without also killing performance. Any pointers? I only updated due to the issues with exclusive fullscreen on 0.29.

@kchibisov
Member

Please look into the CHANGELOG and the migration guide in it. A closure is just a struct where each field is a captured variable. It's also not enforced.

@XavierCS-dev

XavierCS-dev commented Apr 27, 2024

Please look into the CHANGELOG and the migration guide in it. A closure is just a struct where each field is a captured variable. It's also not enforced.

Thanks, I managed to figure it out

Call with

(self.mut_closure)(args)

in about_to_wait()
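
For anyone else doing the same, roughly what I ended up with (a sketch; EngineState and the field names are placeholders, not the actual engine code):

use winit::application::ApplicationHandler;
use winit::event::WindowEvent;
use winit::event_loop::{ActiveEventLoop, EventLoop};
use winit::window::WindowId;

// Placeholder for the engine state handed to the user's per-frame closure.
struct EngineState {
    frame: u64,
}

struct App {
    engine: EngineState,
    // The user's "main loop", stored as a boxed FnMut; its captures are
    // simply fields of the anonymous struct the compiler generates for it.
    mut_closure: Box<dyn FnMut(&mut EngineState)>,
}

impl ApplicationHandler for App {
    fn resumed(&mut self, _event_loop: &ActiveEventLoop) {}

    fn window_event(&mut self, _event_loop: &ActiveEventLoop, _window_id: WindowId, _event: WindowEvent) {}

    fn about_to_wait(&mut self, _event_loop: &ActiveEventLoop) {
        // Drive the user's loop once per iteration of the event loop.
        (self.mut_closure)(&mut self.engine);
        self.engine.frame += 1;
    }
}

fn main() -> Result<(), winit::error::EventLoopError> {
    let event_loop = EventLoop::new()?;
    let mut app = App {
        engine: EngineState { frame: 0 },
        mut_closure: Box::new(|engine: &mut EngineState| {
            let _ = engine.frame; // per-frame game logic goes here
        }),
    };
    event_loop.run_app(&mut app)
}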

EDIT: Thanks for your work on this project, I do appreciate the updates and fixes.

@XavierCS-dev

I am also finding that I will need to restructure very significant parts of the engine if I am to use the new way of initialising windows. Currently, I use an app struct which contains the main engine state struct. However, to initialise wgpu the window is needed, and you can't get the window until the event loop is running, so wgpu can't be initialised until then. This means the user can't get direct access to the app struct and can only get a mutable reference to it. I will also need to write and implement ApplicationHandler individually for each rendering backend.

@kchibisov
Member

But we just swapped Window::new(event_loop) for event_loop.create_window(), so it's the same as well. The only difference is that you need the loop running to have non-deprecated code, but you can not really work on Android, etc. if you don't do so anyway, so it sounds like a benefit for cross-platform GPU stuff.

@XavierCS-dev

But we just swapped Window::new(event_loop) for event_loop.create_window(), so it's the same as well. The only difference is that you need the loop running to have non-deprecated code, but you can not really work on Android, etc. if you don't do so anyway, so it sounds like a benefit for cross-platform GPU stuff.

This is true, and I would also rather not use deprecated code. I managed to get it working, very messy, but I can fix it later. I am still finding that exclusive fullscreen doesn't work on Hyprland, although I could just be doing it wrong, and besides, that is out of scope for this thread I believe.

@kchibisov
Member

Docs clearly say that exclusive fullscreen doesn't exist on Wayland.

@XavierCS-dev

Docs clearly say that exclusive fullscreen doesn't exist on Wayland.

It seems I didn't understand the difference between borderless and exclusive. The borderless mode seems to allow window sizes and resolutions different from the display's resolution, which is not what I expected from a borderless mode, but I guess it is possible.

@kchibisov
Member

Exclusive means that you get exclusive control, as in you can also modeset, i.e. change the monitor resolution. Borderless is just an ordinary fullscreen window.
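
For reference, roughly how the two modes are requested through winit's API (a sketch; picking the first reported video mode is just for illustration):

use winit::window::{Fullscreen, Window};

fn toggle_fullscreen(window: &Window, exclusive: bool) {
    if exclusive {
        // Exclusive: take over the monitor and pick a concrete video mode,
        // which may change the monitor's resolution. Not available on Wayland.
        if let Some(mode) = window
            .current_monitor()
            .and_then(|monitor| monitor.video_modes().next())
        {
            window.set_fullscreen(Some(Fullscreen::Exclusive(mode)));
        }
    } else {
        // Borderless: an ordinary fullscreen window on the current monitor,
        // at whatever resolution that monitor is already using.
        window.set_fullscreen(Some(Fullscreen::Borderless(None)));
    }
}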

@tgross35

tgross35 commented Apr 29, 2024

I do think this is a much cleaner interface, but I really struggled to get something up and running with wgpu. The wgpu Surface typically borrows the Window, which is easy with the event loop, but turns out to be self-referential when you need to construct the window within the resumed function. It was kind of a pain to get things working, but I wound up with something like this:

use std::sync::Arc;

use winit::application::ApplicationHandler;
use winit::event::WindowEvent;
use winit::event_loop::ActiveEventLoop;
use winit::window::{Window, WindowId};

struct App {
    window: Option<Arc<Window>>,
    gfx_state: Option<GfxState>,
}

impl ApplicationHandler<MyUserEvent> for App {
    fn resumed(&mut self, event_loop: &ActiveEventLoop) {
        let win_attrs = Window::default_attributes()
            .with_title(/* ... */)
            .with_inner_size(/* ... */);
        let window = Arc::new(event_loop.create_window(win_attrs).unwrap());

        let gfx_state = GfxState::new(Arc::clone(&window));
        window.request_redraw();

        self.window = Some(window);
        self.gfx_state = Some(gfx_state);
    }

    fn window_event(&mut self, event_loop: &ActiveEventLoop, window_id: WindowId, event: WindowEvent) {
        // ...
        self.gfx_state.as_mut().unwrap().paint(); // paint performs the rendering
    }
}

struct GfxState {
    device: wgpu::Device,
    surface_desc: wgpu::SurfaceConfiguration,
    surface: wgpu::Surface<'static>, // important part: 'static
    // ... all the other queues / buffers / etc needed
}

impl GfxState {
    fn new(window: Arc<Window>) -> Self {
        let instance = wgpu::Instance::default();
        let size = window.inner_size();

        // important part: we can create a `wgpu::Surface<'static>` from an `Arc`
        let surface = instance.create_surface(window).unwrap();

        // ... configure pipelines

        Self { surface, /* ... */ }
    }

    fn paint(&mut self) {
        // perform the rendering
    }
}

This could be slightly simplified by putting Arc<Window> and GfxState in a child struct within the same Option, as sketched below. Does anyone have a better pattern?
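
A rough version of that simplification (reusing the GfxState above; the Graphics name is just a placeholder):

use std::sync::Arc;

use winit::application::ApplicationHandler;
use winit::event::WindowEvent;
use winit::event_loop::ActiveEventLoop;
use winit::window::{Window, WindowId};

// Everything that only exists once the window exists lives together,
// so there is a single Option to check instead of several.
struct Graphics {
    window: Arc<Window>,
    gfx_state: GfxState,
}

#[derive(Default)]
struct App {
    graphics: Option<Graphics>,
}

impl ApplicationHandler for App {
    fn resumed(&mut self, event_loop: &ActiveEventLoop) {
        let window = Arc::new(event_loop.create_window(Window::default_attributes()).unwrap());
        let gfx_state = GfxState::new(Arc::clone(&window));
        window.request_redraw();
        self.graphics = Some(Graphics { window, gfx_state });
    }

    fn window_event(&mut self, _event_loop: &ActiveEventLoop, _window_id: WindowId, event: WindowEvent) {
        if let WindowEvent::RedrawRequested = event {
            if let Some(graphics) = self.graphics.as_mut() {
                graphics.gfx_state.paint();
            }
        }
    }
}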

@ifsheldon

Based on @tgross35's comment, another issue comes up, which is async. Working with wgpu, we have to deal with some Futures like the ones below:

let instance = wgpu::Instance::default();
let surface = instance.create_surface(window).expect("Failed to create surface");
// need adapter to create the device and queue
let adapter = instance
    .request_adapter(&wgpu::RequestAdapterOptions {
        power_preference: wgpu::PowerPreference::default(),
        force_fallback_adapter: false,
        compatible_surface: Some(&surface),
    })
    .await
    .unwrap();
let (device, queue) = adapter
    .request_device(
        &wgpu::DeviceDescriptor {
            label: None,
            required_features: wgpu::Features::empty(), // The device you have limits the features you can use
            required_limits: wgpu::Limits::default(), // The limits field describes the limit of certain types of resources we can create
        },
        None,
    )
    .await
    .unwrap();

Unless the trait methods are async, we must use a block_on from an async executor. If winit internally also uses an async executor, then we may have two different async executors in use, which may be bad.
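
One workaround on native targets is to drive those futures to completion with a small executor such as pollster. Assuming the GfxState::new from the comment above is made an async fn, the resumed method could then look roughly like this (a sketch, not a complete impl):

    fn resumed(&mut self, event_loop: &ActiveEventLoop) {
        let window = Arc::new(event_loop.create_window(Window::default_attributes()).unwrap());

        // pollster::block_on spins on the future until it resolves, which is
        // fine for one-off setup on native targets, but it cannot be used on
        // the web, where blocking the main thread is not allowed.
        let gfx_state = pollster::block_on(GfxState::new(Arc::clone(&window)));

        self.window = Some(window);
        self.gfx_state = Some(gfx_state);
    }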


@lukexor

lukexor commented Apr 29, 2024

On Web targets, async with wgpu is extra problematic, as wasm_bindgen_futures::spawn_local doesn't support returning any results, so it's not clear how to get the resolved futures back to the main thread of execution without passing them through channels or static mutexes. More details here: #3560
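
One pattern that sidesteps returning a value from spawn_local is to send the finished setup back into the loop as a user event through an EventLoopProxy. A rough sketch (GfxState and its async constructor are the assumed types from the comments above):

use std::sync::Arc;

use winit::application::ApplicationHandler;
use winit::event::WindowEvent;
use winit::event_loop::{ActiveEventLoop, EventLoopProxy};
use winit::window::{Window, WindowId};

// The resolved wgpu setup travels back to the handler as a user event.
enum UserEvent {
    GraphicsReady(GfxState),
}

struct App {
    proxy: EventLoopProxy<UserEvent>,
    window: Option<Arc<Window>>,
    gfx_state: Option<GfxState>,
}

impl ApplicationHandler<UserEvent> for App {
    fn resumed(&mut self, event_loop: &ActiveEventLoop) {
        let window = Arc::new(event_loop.create_window(Window::default_attributes()).unwrap());
        self.window = Some(Arc::clone(&window));

        // Kick off the async wgpu setup; the spawned future returns (),
        // and the result comes back through the proxy instead.
        let proxy = self.proxy.clone();
        wasm_bindgen_futures::spawn_local(async move {
            let gfx_state = GfxState::new(window).await;
            let _ = proxy.send_event(UserEvent::GraphicsReady(gfx_state));
        });
    }

    fn user_event(&mut self, _event_loop: &ActiveEventLoop, event: UserEvent) {
        match event {
            UserEvent::GraphicsReady(gfx_state) => {
                self.gfx_state = Some(gfx_state);
                if let Some(window) = &self.window {
                    window.request_redraw();
                }
            }
        }
    }

    fn window_event(&mut self, _event_loop: &ActiveEventLoop, _window_id: WindowId, _event: WindowEvent) {}
}

The proxy itself would be created up front, e.g. with EventLoop::with_user_event().build() and create_proxy(), before calling run_app.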

imrn99 added a commit to LIHPC-Computational-Geometry/honeycomb that referenced this issue May 3, 2024
imrn99 added a commit to LIHPC-Computational-Geometry/honeycomb that referenced this issue May 6, 2024
* chore: bump render deps versions

* refactor: replace EventLoopWindowTarget by ActiveEventLoop

* refactor: remove wasm32 target code

* refactor: update window init using deprecated method

I will further update it later, this is just enough for the crate to
compile

* refactor: add default compilation option to the state creation

* feat: add new basic structures

based on rust-windowing/winit#3626 (comment)

* fix: revert an IDE-induced rename

* fix: revert idem

* feat: complete GfxState constructor

* refactor: make GfxState::new synchronous using pollster

* feat: impl ApplicationHandler for App

* feat: add necessary update methods to gfx state

* fix: add handle vertex building code

* chore: cleanup imports & re-exports

* refactor: move shader_data & camera modules

* fix: update render examples

* chore: add custom lints to honeycomb-render

* fix: process some lints

* fix: add back the possibility to run default examples

@hecrj
Contributor

hecrj commented May 7, 2024

In iced, we leverage async channels and await to write the event loop using an actual loop.

Why? Because with a loop, the borrow checker can infer a lot more. For instance, we can safely keep user interfaces (which borrow from application state) alive until a specific event happens. This is not possible with a trait-based approach (not even the closure approach) because it would entail self-references. I shared some more details about this approach in this PR: iced-rs/iced#597.

Therefore, the trait-based approach is just a bit of additional unnecessary complexity for iced. We will end up rewrapping each event in the trait methods and sending them through a channel anyway. For this reason, we will be using the deprecated API for now, which will save us the unnecessary match and rewrapping.
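
For concreteness, the rewrapping looks roughly like this (a sketch, not iced's actual code; LoopEvent and the std channel are placeholders):

use std::sync::mpsc;

use winit::application::ApplicationHandler;
use winit::event::{StartCause, WindowEvent};
use winit::event_loop::ActiveEventLoop;
use winit::window::WindowId;

// The events that the "real" loop on the other side of the channel consumes.
enum LoopEvent {
    NewEvents(StartCause),
    Window(WindowId, WindowEvent),
    AboutToWait,
}

// The trait impl does nothing but rewrap and forward.
struct Forwarder {
    sender: mpsc::Sender<LoopEvent>,
}

impl ApplicationHandler for Forwarder {
    fn new_events(&mut self, _event_loop: &ActiveEventLoop, cause: StartCause) {
        let _ = self.sender.send(LoopEvent::NewEvents(cause));
    }

    fn resumed(&mut self, _event_loop: &ActiveEventLoop) {}

    fn window_event(&mut self, _event_loop: &ActiveEventLoop, window_id: WindowId, event: WindowEvent) {
        let _ = self.sender.send(LoopEvent::Window(window_id, event));
    }

    fn about_to_wait(&mut self, _event_loop: &ActiveEventLoop) {
        let _ = self.sender.send(LoopEvent::AboutToWait);
    }
}

The future driving the actual loop would then be polled after each send, as described above; that part is omitted here.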

Is there any particular reason the closure approach is being deprecated? It seems quite strange, since the new API is actually relying on the deprecated one internally. Why not keep the original alive and offer the new one as a convenient alternative?

@kchibisov
Member

We need return values and sync callbacks. Some APIs mandate a return value to do e.g. hit-testing, sync resize, etc. If you do that asynchronously you basically end up out of sync with the windowing system, resulting in visual artifacts, etc. Callbacks don't scale at all either, and you can not build extensible systems like you can with extension traits.

The deprecated APIs are only here to give some leeway; they'll be gone from master this week or so.

Also, the only difference is that there's no async keyword in the traits, but all winit traits are async in terms of the operations that are planned. Some can call back to you, some will deliver resources; it's just that you can not make them async/await because you must preserve the order.

@hecrj
Contributor

hecrj commented May 7, 2024

I see. Return values do seem quite handy.

If the deprecated APIs are going away, I'll switch to the trait-based approach.

iced's event loop isn't really concurrent. The event loop future is polled manually after an event is sent. We only leverage async to create a resumable loop, but every event is still processed in order in a blocking manner; and a proper request/response protocol can be built through the channels.

@kchibisov
Member

There will be an async shim for the current trait, because the web needs it, since it can not block; and in the future there could be an option to write your own backend/loop structures.

The traits are also needed to split winit, which you can not really do with closures.

zierf added a commit to zierf/wgpu_render that referenced this issue May 16, 2024
@thinnerthinker

If anyone is interested in my pattern of integrating 0.30.0 into a cross-compiled wgpu app, check out https://github.com/thinnerthinker/sursface/blob/main/examples/src/hello_triangle/main.rs

@mickvangelderen

Should run_app take A instead of &mut A? This would allow using a wrapper type without introducing another level of indirection. ApplicationHandler<T> is already automatically implemented for &mut A when A: ApplicationHandler<T>.

#[inline]
#[cfg(not(all(web_platform, target_feature = "exception-handling")))]
pub fn run_app<A: ApplicationHandler<T>>(self, app: &mut A) -> Result<(), EventLoopError> {
    self.event_loop.run(|event, event_loop| dispatch_event_for_app(app, event_loop, event))
}

@madsmtm
Member Author

madsmtm commented Sep 23, 2024

Should run_app take A instead of &mut A?

Yes, and it was done in #3721

@casey

casey commented Oct 12, 2024

One slightly annoying thing with the new trait is that it makes error handling a bit harder. Using the old interface, I had a single handle_event function which could produce an error, which I would handle. Using the new trait-based interface, the event handling functions don't return a Result, so I have to perform error handling in each. (Which means, if I want to use ?, I have to wrap each in an inner function that can return a Result.)
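
Concretely, today that workaround looks something like this (a sketch with a hypothetical error type and inner helper):

use winit::application::ApplicationHandler;
use winit::event::WindowEvent;
use winit::event_loop::ActiveEventLoop;
use winit::window::WindowId;

// Hypothetical application error type, just for illustration.
type Error = Box<dyn std::error::Error>;

struct App {
    // ...
}

impl App {
    // The fallible logic lives in an inner method so `?` can be used...
    fn window_event_inner(&mut self, _event_loop: &ActiveEventLoop, _event: WindowEvent) -> Result<(), Error> {
        // load_texture()?; save_state()?; ...
        Ok(())
    }
}

impl ApplicationHandler for App {
    fn resumed(&mut self, _event_loop: &ActiveEventLoop) {}

    fn window_event(&mut self, event_loop: &ActiveEventLoop, _window_id: WindowId, event: WindowEvent) {
        // ...while the trait method itself has to handle the error, and this
        // wrapper has to be repeated for every handler method.
        if let Err(error) = self.window_event_inner(event_loop, event) {
            eprintln!("error handling event: {error}");
            event_loop.exit();
        }
    }
}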

One way to make this more ergonomic would be something like this:

pub trait ApplicationHandler<E, T: 'static = ()> {
    fn handle_error(&mut self, error: E);

    fn new_events(&mut self, _: &ActiveEventLoop, _: StartCause) -> Result<(), E>;
    …
}

ApplicationHandler now has a generic E error type, which represents a user error, and trait methods now return Result<(), E>.

If a trait method returns an error, then handle_error is invoked with that error.

This would allow all user functions to return an error, and allow the user to implement custom error handling in a single place, namely the handle_error method.

Edit: I think this is mostly a dupe of #3692.
