
Render points with per-fragment depth, improving visuals for points intersecting with other geometry #6508

Closed
wants to merge 4 commits

Conversation

@kpreid (Collaborator) commented Jun 5, 2024

What

Each point in a point cloud is now rendered with depth that is approximately a full sphere, rather than a flat camera-facing billboard. This is a very approximate calculation, but one which is good as long as the sphere is not too off-axis; it's the same kind of approximation as already used by the brightness shading of the points. The primary goal of this change is to make “ball and stick” constructions of point and line entities look better.

Here are screenshots of the “helix” demo before and after this change. Notice that lines no longer extend to the centers of the points, the gray point on the right looks more like a bead on the line, and the gray point on the left is clipped by intersecting the red point.

[Screenshot: before]
[Screenshot: after]

I intended this algorithm to be equally applicable to lines (making them render as cylinders), but that is not implemented yet.

At least on my machine, this causes no observable increase in frame time when testing with demos such as the open_photogrammetry_format example. However, I haven't tested extreme overdraw conditions.

Checklist

  • I have read and agree to the Contributor Guide and the Code of Conduct
  • I've included a screenshot or gif (if applicable)
  • I have tested the web demo (if applicable):
  • The PR title and labels are set such as to maximize their usefulness for the next release's CHANGELOG
  • If applicable, add a new check to the release checklist!

To run all checks from `main`, comment on the PR with `@rerun-bot full-check`.

@abey79 added the `🔺 re_renderer` and `include in changelog` labels Jun 5, 2024
@abey79 (Member) commented Jun 6, 2024

`@rerun-bot full-check`

edit: that fails on external contributor PRs

@abey79 (Member) commented Jun 6, 2024

@kpreid thanks for the contribution—looks great! I'll leave it to our GPU expert @Wumpf to review, but this will likely have to wait until next Tuesday when he's back in the office.

@emilk requested a review from Wumpf June 10, 2024 07:47
@Wumpf (Member) left a comment

Thank you so much for your contribution!

I previously considered implementing this but got too nervous about the performance implications: today, we have users complaining about extremely slow point cloud rendering due to overdraw. This typically happens in "degenerate" situations when zooming out on a large point cloud, so it's a bit of a corner case, but my fear is that changing depth exacerbates the problem, or at least makes it show up for hidden point clouds where it previously wouldn't. All this is highly speculative; I never got around to doing a proper test series. (Also, re_renderer today has almost no guarantees on draw order, so it may well make little difference.)

It does look a lot better for large spheres, so I'd love to get this in (with the suggested code re-use and the ability to revert to flat), but a little more thorough round of testing with large point clouds with both small & big points is in order. I'm curious in particular about Apple Silicon & desktop GPU performance under high overdraw. Happy to assist with this.
Note that the timings the viewer optionally shows at the top are CPU timings only, so we have to be careful to check the whole frame time here, which can be done via puffin. (We haven't done much GPU profiling overall so far; I've had adding wgpu-profiler and overall GPU timing numbers on the backlog for a very long time now.)
Depending on the results, we'll decide whether we can leave this always on and leave optimization for the future, or whether we need to do something like limiting this feature to certain point clouds.

```
@@ -120,6 +133,7 @@ fn vs_main(@builtin(vertex_index) vertex_idx: u32) -> VertexOut {
out.world_position = quad.pos_in_world;
out.point_center = point_data.pos;
out.picking_instance_id = point_data.picking_instance_id;
out.sphere_radius_projected_depth = sphere_radius_projected_depth(point_data.pos, world_radius);
```
@Wumpf (Member) commented:

This only makes sense when we're actually rendering a sphere. Right now, we use this shader for both flat circles and spheres.
All of the sphere calculation & depth adjustment has to be conditional on `has_any_flag(batch.flags, FLAG_DRAW_AS_CIRCLES)`, otherwise we might get weird artifacts.

Since the threat of arbitrary depth changes will disable early-z and hierarchical depth, I think we should ideally use different shaders for the two cases. Implementing that is fairly cumbersome, so if we can prove it's not a big deal I'd rather not go there. However, users have previously reported issues with high point cloud overdraw (especially on zoomed-out point clouds) and this is likely to make it worse.

@kpreid (Collaborator, Author) replied:

> Since the threat of arbitrary depth changes will disable early z and hierarchical depth,

I believe that early Z is already disabled because the fragment shader uses `discard`. I'm not familiar with optimizations based on hierarchical depth, so I can't comment on that.

@Wumpf (Member) commented Jun 11, 2024

Hmm, I think you're right. I'm a bit hazy on the details, but since depth test + write is supposed to behave like an atomic operation, `discard` would already break that. Found this confirmed here: https://stackoverflow.com/a/68602660

In a way this is great news: what we're doing here won't affect performance nearly as much as I feared (on paper), and right now we only care about whether (and how much) we regress 🙂

Good that you bring up hierarchical z: for hierarchical z to still work in some fashion, we should use conservative depth, which is quite easy :) https://docs.rs/wgpu/latest/wgpu/core/naga/enum.ConservativeDepth.html
Unfortunately this hasn't made it into the WGSL spec yet; it's just a Naga extension. So if it's just drop-in and won't cause issues on web (I don't know whether that's the case!), we should use it. Otherwise a TODO note will suffice.

```
@@ -0,0 +1,52 @@
#import <../global_bindings.wgsl>
```

@Wumpf (Member) commented:

This is a pretty neat estimate and I appreciate how well you documented all of this, but...

> To get it right, we'd need to, essentially, compute the ray-tracing of a sphere.

We're already doing literally that. It's a bit buried and we need to get its results out (or rather pass them in), but we do so as part of the coverage estimation.
The calls go as: `coverage` -> `sphere_quad_coverage` -> `ray_sphere_distance`.
With that we have the exact world position already. We can then project it and take the depth from there.
Could the coverage computation use something cheaper instead? Maybe! But we use it to draw accurate spheres in the first place, so as things stand it would be silly to do an approximation here after the fact when we already have everything we need.

Wumpf added a commit that referenced this pull request Jul 24, 2024
…`Ellipsoids`. (#6953)

### What

With these changes, `Boxes3D` and `Ellipsoids` can now be viewed as
solid objects. This is part of the work on #1361 and the mechanisms
added here will generalize to other shapes.

* Add new component type `SolidColor`. This is identical to `Color`
except that, on shapes where it applies, it sets the color of surfaces
instead of lines. Whether its color is transparent controls whether the
surfaces are drawn at all.
* Add `SolidColor` to the `Boxes3D` and `Ellipsoids` archetypes.
* Add support for solid colors to those archetypes’ visualizers.
* Add `proc_mesh::SolidCache` to cache the calculation of triangle
meshes.
* Add `proc_mesh::ProcMeshKey::Cube` to allow the cached mesh mechanism
to generate solid cubes.

![Screenshot 2024-07-20 at 17 36 01](https://github.com/user-attachments/assets/ab6b2d1b-20d0-471c-ba49-25d4e10638ea)
![Screenshot 2024-07-20 at 17 35 12](https://github.com/user-attachments/assets/2d6ce740-5bd5-4475-a018-4d286adf2c5b)

Future work (things this PR *doesn't* do):

* Viewer UI needs a version of the color picker that lets you pick
"fully transparent".
* The line renderer does not play well with adjacent faces, causing line
width to be halved some of the time. This could be fixed with something
like #6508, simple depth offsets, or something else.
* Getting naming and semantics right:
    * Should the `colors` of `Boxes3D` be renamed `line_colors`, or something like that? I think so, but that would be a breaking change, so I didn't in this PR.
    * Should `Color` be renamed?
    * Should there be all 3 of `SolidColor`, `LineColor`, and `Color` with some override policy between them?
    * Should `SolidColor` be called `SurfaceColor` instead?
    * How should `SolidColor` interact with annotation contexts?
* Figure out whether instanced meshes are the right choice for
performance.

### Checklist
* [X] I have read and agree to [Contributor
Guide](https://github.com/rerun-io/rerun/blob/main/CONTRIBUTING.md) and
the [Code of
Conduct](https://github.com/rerun-io/rerun/blob/main/CODE_OF_CONDUCT.md)
* [X] I've included a screenshot or gif (if applicable)
* [x] I have tested the web demo (if applicable):
* Using examples from latest `main` build:
[rerun.io/viewer](https://rerun.io/viewer/pr/6953?manifest_url=https://app.rerun.io/version/main/examples_manifest.json)
* Using full set of examples from `nightly` build:
[rerun.io/viewer](https://rerun.io/viewer/pr/6953?manifest_url=https://app.rerun.io/version/nightly/examples_manifest.json)
* [X] The PR title and labels are set such as to maximize their
usefulness for the next release's CHANGELOG
* [X] If applicable, add a new check to the [release
checklist](https://github.com/rerun-io/rerun/blob/main/tests/python/release_checklist)!
* [X] I have noted any breaking changes to the log API in `CHANGELOG.md` and the migration guide
    * The `Ellipsoids` archetype has a renamed field, but that isn't released yet, so it doesn't need noting.

- [PR Build Summary](https://build.rerun.io/pr/6953)
- [Recent benchmark results](https://build.rerun.io/graphs/crates.html)
- [Wasm size tracking](https://build.rerun.io/graphs/sizes.html)

To run all checks from `main`, comment on the PR with `@rerun-bot
full-check`.

---------

Co-authored-by: Andreas Reich <r_andreas2@web.de>
@teh-cmc (Member) commented Oct 23, 2024

This has been blocked for a long time -- marking it as draft to unclog the review queue until it's active again.

@teh-cmc marked this pull request as draft October 23, 2024 07:07
@Wumpf self-assigned this Nov 15, 2024
@Wumpf (Member) commented Nov 15, 2024

Now that we have spheres, this is much less important. But I kinda still want it.

I'll do the code sharing noted here: #6508 (comment)
...and tryyyyyy to do a bit of perf testing as noted here: #6508 (comment)
...admittedly this is gonna be a) kinda tricky because our GPU profiling is so bad and the result is gonna vary wildly between GPUs, b) we're almost always CPU bound T_T, and c) I believe now more strongly that @kpreid is right that we already lost this particular battle by doing `discard`.

@Wumpf changed the title from "Render points with per-fragment depth." to "Render points with per-fragment depth, improving visuals for points intersecting with other geometry" Nov 15, 2024
@Wumpf (Member) commented Nov 15, 2024

Implemented a version that shares the sphere intersection we already do, and applies the fragment-shader depth change to picking and the outline mask as well.

Then I tried to do a bit of profiling using Xcode GPU capture's profiling feature on ARKit scenes with blown-up points. Unfortunately I wasn't able to reattach after changing shaders (GPU capture seems to leave the rerun process in a broken state), which made this incredibly finicky: depending on the camera angle I got wildly different perf results (not a big surprise). I eventually settled on comparing the frame times of the "multiview" demo with this diff, in order to show only a single static view:

```diff
diff --git a/crates/viewer/re_renderer_examples/multiview.rs b/crates/viewer/re_renderer_examples/multiview.rs
index af74feb01f..3343c8dbc0 100644
--- a/crates/viewer/re_renderer_examples/multiview.rs
+++ b/crates/viewer/re_renderer_examples/multiview.rs
@@ -292,7 +292,7 @@ impl Example for Multiview {
         Self {
             perspective_projection: true,

-            camera_control: CameraControl::RotateAroundCenter,
+            camera_control: CameraControl::Manual,
             camera_position: glam::Vec3::ZERO,

             model_mesh_instances,
@@ -325,7 +325,7 @@ impl Example for Multiview {

         let seconds_since_startup = time.seconds_since_startup();
         let view_from_world =
-            IsoTransform::look_at_rh(self.camera_position, Vec3::ZERO, Vec3::Y).unwrap();
+            IsoTransform::look_at_rh(glam::vec3(10.0, 0.0, 0.0), Vec3::ZERO, Vec3::Y).unwrap();

         let triangle = TestTriangleDrawData::new(re_ctx);
         let skybox = GenericSkyboxDrawData::new(re_ctx, Default::default());
@@ -334,7 +334,7 @@ impl Example for Multiview {
         let mut builder = PointCloudBuilder::new(re_ctx);
         builder
             .batch("Random Points")
-            .world_from_obj(glam::Affine3A::from_rotation_x(seconds_since_startup))
+            //.world_from_obj(glam::Affine3A::from_rotation_x(seconds_since_startup))
             .add_points(
                 &self.random_points_positions,
                 &self.random_points_radii,
@@ -366,38 +366,25 @@ impl Example for Multiview {
             }
         };

-        // Using a macro here because `DrawData` isn't object safe and a closure cannot be
-        // generic over its input type.
-        #[rustfmt::skip]
-        macro_rules! draw {
-            ($name:ident @ split #$n:expr) => {{
-                let (view_builder, command_buffer) = self.draw_view(re_ctx,
-                    TargetConfiguration {
-                        name: stringify!($name).into(),
-                        resolution_in_pixel: splits[$n].resolution_in_pixel,
-                        view_from_world,
-                        projection_from_view: projection_from_view.clone(),
-                        pixels_per_point,
-                        ..Default::default()
-                    },
-                    skybox.clone(),
-                    $name,
-                    $n,
-                );
-                framework::ViewDrawResult {
-                    view_builder,
-                    command_buffer,
-                    target_location: splits[$n].target_location,
-                }
-            }};
-        }
-
-        let draw_results = vec![
-            draw!(triangle @ split #0),
-            draw!(lines @ split #1),
-            draw!(meshes @ split #2),
-            draw!(point_cloud @ split #3),
-        ];
+        let (view_builder, command_buffer) = self.draw_view(
+            re_ctx,
+            TargetConfiguration {
+                name: "test".into(),
+                resolution_in_pixel: resolution,
+                view_from_world,
+                projection_from_view: projection_from_view.clone(),
+                pixels_per_point,
+                ..Default::default()
+            },
+            skybox.clone(),
+            point_cloud,
+            0,
+        );
+        let draw_results = vec![framework::ViewDrawResult {
+            view_builder,
+            command_buffer,
+            target_location: glam::Vec2::ZERO,
+        }];

         self.take_screenshot_next_frame_for_view = None;
```

After that, the fps report was still fluctuating too much, so I put this into the Xcode GPU perf capture:

Main: [Xcode GPU frame capture screenshot]

This branch (latest changes prior to this comment): [Xcode GPU frame capture screenshot]

What the scene looked like: [screenshot]


...so... is it worth it? The scene tested here is for sure convoluted, but I don't think it's too unrealistic from a fragment-shader-workload point of view.
It stands to reason that we could enable this sometimes, but shouldn't we instead just log proper spheres in those cases? That line of reasoning also means we should take a step back and make the point shader a bit simpler instead.

-> Shelving this effort :(

@Wumpf closed this Nov 15, 2024