web: memory constantly grows then shrinks for seemingly no reason #909

Closed
teh-cmc opened this issue Jan 25, 2023 · 4 comments · Fixed by emilk/egui#2820
Labels
📉 performance (Optimization, memory use, etc) · 🔺 re_renderer (affects re_renderer itself) · 🕸️ web (regarding running the viewer in a browser)

Comments

teh-cmc (Member) commented on Jan 25, 2023

Look at the memory panel in both of these examples.

Native behaviour (cargo r --release --features web -- examples/out/avocado.rrd):

[video: 23-01-24_17.56.22.patched.mp4]

Web behaviour (cargo r --release --features web -- --web-viewer examples/out/avocado.rrd):

<screwed up the recording, gotta make a new one>

teh-cmc added the 🕸️ web and 📉 performance labels on Jan 25, 2023
emilk (Member) commented on Jan 26, 2023

We should be able to debug this using the RERUN_TRACK_ALLOCATIONS feature (though some work may be needed to get it working on web)
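For reference, the core of an allocation-tracking feature like this is a wrapping global allocator. The following is a minimal sketch of that general technique only, not Rerun's actual RERUN_TRACK_ALLOCATIONS implementation (which presumably also records callstacks, as in the comment below); all names are illustrative:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering::Relaxed};

/// Counts live heap bytes by wrapping the system allocator.
struct CountingAllocator;

static LIVE_BYTES: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        LIVE_BYTES.fetch_add(layout.size(), Relaxed);
        unsafe { System.alloc(layout) }
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        LIVE_BYTES.fetch_sub(layout.size(), Relaxed);
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static GLOBAL: CountingAllocator = CountingAllocator;

/// Current number of live heap bytes, e.g. for plotting in a memory panel.
fn live_bytes() -> usize {
    LIVE_BYTES.load(Relaxed)
}

fn main() {
    let v = vec![0u8; 1024];
    println!("live heap bytes: {}", live_bytes());
    drop(v);
}
```

Capturing a callstack per allocation is the harder part, especially on wasm.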

emilk (Member) commented on Mar 15, 2023

The saw-toothing is wgpu-related. Allocation callstack:

wgpu_hal::gles::device::<impl wgpu_hal::Device<wgpu_hal::gles::Api> for wgpu_hal::gles::Device>::create_buffer
wgpu_core::device::queue::prepare_staging_buffer
wgpu_core::device::queue::<impl wgpu_core::hub::Global<G>>::queue_write_buffer
<wgpu::backend::direct::Context as wgpu::context::Context>::queue_write_buffer
<T as wgpu::context::DynContext>::queue_write_buffer
wgpu::Queue::write_buffer
egui_wgpu::renderer::Renderer::update_buffers
<eframe::web::web_painter_wgpu::WebPainterWgpu as eframe::web::web_painter::WebPainter>::paint_and_update_textures
eframe::web::backend::AppRunner::paint
eframe::web::events::paint_and_schedule
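That callstack is the ordinary per-frame upload path. Roughly, it corresponds to the pattern in the sketch below (`uniform_buffer` and `data` are illustrative placeholders); each such call makes wgpu allocate a fresh internal staging buffer via prepare_staging_buffer:

```rust
/// Hedged sketch of the upload pattern in the callstack above.
fn upload_per_frame(queue: &wgpu::Queue, uniform_buffer: &wgpu::Buffer, data: &[u8]) {
    // Each call creates a new temporary staging buffer inside wgpu,
    // which is only freed once the corresponding submission is
    // considered finished.
    queue.write_buffer(uniform_buffer, 0, data);
}
```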

emilk (Member) commented on Mar 15, 2023

I tried this:

cargo r -p raw_mesh -- --save ../lego.rrd
cargo r -p rerun --features web_viewer -- --web-viewer ../lego.rrd

And I see this:

[screenshot: memory panel]

After 6 minutes:

[screenshot: memory panel after 6 minutes]

(The gap is from putting the browser in the background for a while.)

Wumpf (Member) commented on Mar 20, 2023

We now have a decent picture of where these allocations come from:

  • The bug can't be reproduced with a native GL renderer: the allocations we're seeing are the "buffer backing memory" of wgpu::Buffers, which is only needed when mapping buffers is not possible.
  • The memory appears to come exclusively from temporary buffers on the wgpu::Queue, which are created on every write_buffer/write_texture call. Any use of our CpuWriteGpuReadBelt therefore reduces the issue (see the sketch at the end of this comment), but we can't avoid it entirely right now since egui still uses the queue to upload data every frame.
  • The rise in (average) memory usage over time is merely egui using more memory every frame, since the memory graph accumulates more and more lines over time. The easiest way to demonstrate this is to pan the panel so that egui clips all lines, stay there for a while, and then pan back: without the lines, the memory spikes stay lower.
  • We do not yet know why the temp buffers are freed in bursts. wgpu has no mechanism for delaying buffer garbage collection; freeing is driven directly by which submission indices are still considered in use.
    • i.e. why does memory go up and down when there is a constant stream of new frames?
    • It seems that the WebGL backend reports frames as done (i.e. the value on the signal handler) only every few frames.
    • It should be possible to optimize the WebGL backend to discard the backing memory once it is known that a user won't use a buffer anymore (i.e. much earlier than when it is known that the GPU is no longer using the buffer).

Further investigation is needed, but with what we know the issue seems fairly contained, and not as worrisome as originally thought, especially if we continue to use the CpuWriteGpuReadBelt everywhere.
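As an illustration of the belt idea, here is a sketch using wgpu's stock wgpu::util::StagingBelt rather than re_renderer's CpuWriteGpuReadBelt (the two differ in detail, but both reuse staging chunks across frames instead of allocating a temporary buffer per write_buffer call):

```rust
use wgpu::util::StagingBelt;

/// Writes `data` into `target` by borrowing space from a reusable staging chunk.
fn upload_with_belt(
    device: &wgpu::Device,
    belt: &mut StagingBelt,
    encoder: &mut wgpu::CommandEncoder,
    target: &wgpu::Buffer,
    data: &[u8],
) {
    let size = wgpu::BufferSize::new(data.len() as u64).expect("non-empty upload");
    belt.write_buffer(encoder, target, 0, size, device)
        .copy_from_slice(data);
}

// Per frame: record all uploads, call `belt.finish()` before `queue.submit(..)`,
// then `belt.recall()` after submitting so the chunks become reusable.
```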

Wumpf removed their assignment on Mar 20, 2023