gltfpack: Accelerate deduplication pass for scenes with a lot of primitives #790
While our strategy performs reasonably well on most scenes, and we
already have some potentially N^2 processing elsewhere, the existing N^2
algorithms in the pipeline are often effectively semi-linear in practice,
because mesh merging aggregates most meshes before they reach those
passes. Deduplication, however, is the first pass, so on a scene with a
lot of meshes it can be genuinely quadratic.
A simple fix is to track the number of occurrences of each hash and only
search for duplicates for meshes whose hash occurs more than once. We
could of course implement more involved tracking, but for now this works
just fine.
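The idea can be sketched as follows. This is a simplified illustration, not gltfpack's actual code: `Mesh`, `hashMesh`, and `deduplicateMeshes` are hypothetical stand-ins, and the hash only covers a single position stream where the real one would cover all attribute and index data.

```cpp
#include <cstdint>
#include <cstring>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for a mesh; only the content that participates
// in hashing/comparison is modeled here.
struct Mesh {
    std::vector<float> positions;
};

// Assumed content hash (FNV-1a over position bits); the real hash would
// cover all attribute streams and indices.
static uint64_t hashMesh(const Mesh& mesh) {
    uint64_t h = 14695981039346656037ull;
    for (float v : mesh.positions) {
        uint32_t bits;
        std::memcpy(&bits, &v, sizeof(bits));
        h ^= bits;
        h *= 1099511628211ull;
    }
    return h;
}

// For each mesh, returns the index of the first identical mesh (or its
// own index if it is unique). Meshes whose hash occurs only once skip
// the O(N) duplicate scan entirely, so a scene full of distinct meshes
// is processed in roughly linear time instead of quadratic.
std::vector<size_t> deduplicateMeshes(const std::vector<Mesh>& meshes) {
    std::vector<uint64_t> hashes(meshes.size());
    std::unordered_map<uint64_t, size_t> hashCount;
    for (size_t i = 0; i < meshes.size(); ++i) {
        hashes[i] = hashMesh(meshes[i]);
        hashCount[hashes[i]]++;
    }

    std::vector<size_t> remap(meshes.size());
    for (size_t i = 0; i < meshes.size(); ++i) {
        remap[i] = i;
        if (hashCount[hashes[i]] <= 1)
            continue; // unique hash: no duplicate can exist, skip the scan

        for (size_t j = 0; j < i; ++j) {
            // Compare content, not just hashes, to guard against collisions.
            if (hashes[j] == hashes[i] &&
                meshes[j].positions == meshes[i].positions) {
                remap[i] = j;
                break;
            }
        }
    }
    return remap;
}
```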
We also now prune the node list to avoid accidental duplicate attachments,
as they confuse the logic downstream in the pipeline.
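The pruning step can be sketched like this; `pruneDuplicateAttachments` is a hypothetical helper operating on a per-mesh list of node indices, not gltfpack's actual data structure, and it assumes the original attachment order should be preserved.

```cpp
#include <unordered_set>
#include <vector>

// Removes duplicate node attachments from a mesh's node list while
// preserving the original order, so downstream passes see each node
// at most once.
std::vector<int> pruneDuplicateAttachments(const std::vector<int>& nodes) {
    std::unordered_set<int> seen;
    std::vector<int> result;
    result.reserve(nodes.size());
    for (int node : nodes) {
        // insert().second is true only for the first occurrence
        if (seen.insert(node).second)
            result.push_back(node);
    }
    return result;
}
```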