
demo: Miscellaneous improvements to clustered simplification #778

Merged: 6 commits, Oct 2, 2024
Conversation

zeux (Owner) commented Oct 2, 2024

This change mainly reworks cluster boundary locking to use explicit lock flags, and reworks the DAG build a little to allow a customizable group size with more careful checks. In addition, the partitioning code now correctly takes vertices with the same position into account, which strengthens adjacency between neighboring clusters, and it switches to vertex adjacency for simplicity.

The larger group size allows much more freedom, as METIS more often generates at least partially connected groups at a larger size (I also wanted to investigate recursive partitioning instead of k-way partitioning, but since the goal isn't really to maximize METIS quality, this is probably good enough for now). Together with more precise boundary detection, we can now get cliffs.obj (2M triangles) all the way down to a single 3-cluster (383-triangle) group when using the METIS=1 configuration:

Screenshot from 2024-10-01 20-23-45
Screenshot from 2024-10-01 20-23-50

Using METIS=2 is actually worse here: although it generates a smaller final result (255 triangles), the tree is much deeper (depth 26 instead of 16) and the clusters are only ~90% filled on average. UE5 targets a slightly more conservative cluster size (I usually get ~126 triangles per cluster there), which I assume gives METIS a little more headroom to make mistakes with suboptimal cluster splits during recursive bisection; something to try another time.

Note that meshopt_simplifyWithAttributes will currently only lock a vertex if all copies of the vertex with the same position are locked; if only some of them are locked, locking isn't guaranteed to happen because only one of the copies will be used as the reference. The ideal interface contract is not fully clear to me here (e.g. in the presence of the meshopt_SimplifySparse flag we can't look at all copies), but it would probably be better if locking any copy referenced by the index buffer were enough. This change does not address that, as it's easy enough to lock all copies on the boundary simultaneously.

This contribution is sponsored by Valve.

While automatic border locking usually works very well, it prevents simplifying open geometric borders that the source mesh might contain. When a mesh has many such edges, this negatively affects DAG quality by locking edges contained within a single cluster group.

Instead, we can simply mark vertices that are shared by more than one group after a partition and feed that data to the simplifier.

The effect of this is situational; it's beneficial overall, but because the rest of the pipeline has odd issues with partitioning, even when using METIS for everything, sometimes this ends up constraining the build process a little more for unclear reasons. Theoretically this should be strictly better, though, and on some meshes like kitten.obj it does measurably help; however, it needs fixes for attribute seams to keep it enabled.

This is helpful for estimating the memory impact of the full LOD chain, and makes it easier to compare stats with UE, which reports them in the log file.

It is pretty difficult to debug the visualized data with just groups, without a sense of where the cluster boundaries lie; this outputs duplicate geometry but makes it easier to work with in Blender.

Using 4 clusters is fairly limiting with respect to partition quality, as it demands very careful grouping that METIS is not willing to provide. Instead, use a larger group size (8 at the moment), and adjust the simplification condition to be adaptive to the group size so that partitioning is free to produce groups of varying sizes.

Previously, when we built adjacency between groups, we would not create edges between clusters when they shared a vertex position with different attributes. This meant that when the cluster boundary contained a crease, it was not reflected correctly in the adjacency.

Similarly, but more importantly, we need this information when locking
the cluster boundary for the same reason; before, lockBoundary could
result in gaps because the locking information on the boundary was
inconsistent. This allows us to enable kUseLocks by default.

Instead of using shared edges to compute adjacency, we now use shared vertices; this works more or less the same way but is easier to compute. For some reason this also results in fewer stuck triangles when using the full METIS pipeline.
@zeux zeux changed the title demo: Miscellaneous improvements to Nanite demo demo: Miscellaneous improvements to hierarchical clustered simplification Oct 2, 2024
@zeux zeux changed the title demo: Miscellaneous improvements to hierarchical clustered simplification demo: Miscellaneous improvements to clustered simplification Oct 2, 2024
@zeux zeux merged commit 4f744eb into master Oct 2, 2024
12 checks passed
@zeux zeux deleted the nanitem branch October 2, 2024 17:39
JMS55 commented Oct 3, 2024

Awesome to see more progress here.

  1. Larger group sizes - good to hear this helps. This has been on my list of things to try, as it's easy to tweak. Nice to see it tested.
  2. "Partitioning code is now correctly taking into account vertices with the same position" - can you explain this one a bit more to me? Do you mean that two vertices would have the same (or nearly the same?) position, but not share the same index (as in, from an index buffer)? Why would this occur?
  3. RE: explicit locking rather than the automatic locking flag - another dev made a PR to my (Bevy's) codebase adding this: Don't lock external edges during simplification JMS55/bevy#28. When I tried it out, however, I found it collapsed edges very aggressively, leading to few LOD levels (ideally we want a perfect log2(n) distribution), with the final level always being a sphere or oval and not an actually good approximation of the mesh. I'll have to compare the code to yours; maybe it's bugged.

zeux (Owner, Author) commented Oct 3, 2024

Do you mean that two vertices would have the same (~nearly the same?) position, but not share the same index (as in from an index buffer)? Why would this occur?

This can happen if an attribute seam happens to lie on the cluster boundary (for example, a UV seam where there are two vertices with two different indices, the same position, but two different UV coordinates).

zeux (Owner, Author) commented Oct 3, 2024

As for the boundary: unless I am misreading it, that code doesn't reset lock flags to false on any edges other than the mesh boundary. This would permanently keep all interior boundaries locked, including the original cluster boundaries, which would quickly terminate simplification as no edges would be collapsible. My code computes the boundary between all groups at a given level at once, from scratch, but it's also possible to reset the flags back to false. Here's cliffs.obj with kUseLocks=false (2371 triangles on the smallest LOD) vs kUseLocks=true (383 triangles on the smallest LOD).

Screenshot From 2024-10-02 19-27-41

Screenshot From 2024-10-02 19-27-55

SparkyPotato commented
As for the boundary, that code doesn't seem to reset lock flags to false on any edges other than the mesh boundary; this would permanently keep all interior boundaries locked, including the original cluster boundaries, unless I am misreading it, which would quickly terminate simplification as no edges will be collapsible

Ah, group_boundary is never reset back to false, good catch! That should fix the issues in simplification.
