KHR_draco_mesh_compression: Restrictions on accessor IDs #1267
Thanks for considering my proposed change for inclusion. One issue with this change that I did not mention: while this restriction would allow greater flexibility for loader implementations, it would make things a bit more complex for developers of "optimiser" tools such as gltf-pipeline, since it would need to be considered when de-duplicating accessors. With this restriction in place, two accessors that have exactly the same JSON content would not be allowed to be replaced by a single instance if both were referenced from different Draco-compressed meshes. This is not an issue for "normal" accessors, as their JSON content refers to the actual data of the accessor (so their JSON can only be identical if they refer to the same data). This is definitely fixable in the optimiser implementation (by adding a reference to the Draco bufferView and attribute id within "internal" JSON attributes). For the proposed restriction, this is only an issue if the Draco compression extension is used. Given the choice, I would still prefer to have the restriction in place, as I expect more developers will implement loaders, which should be as easy to write as possible, than advanced optimiser tools. Note that similar issues for optimisers are already present when de-duplicating or re-indexing other arrays.
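To make the optimiser hazard concrete, here is a minimal hypothetical sketch (the `dedupe_accessors` helper and the stripped-down glTF dicts are illustrative, not gltf-pipeline's actual code). A naive pass that merges accessors with identical JSON would collapse the metadata-only accessors of two different Draco streams, which is exactly what the proposed restriction forbids:

```python
import json

def dedupe_accessors(gltf):
    """Naively merge accessors whose JSON content is identical.

    Safe for normal accessors (identical JSON implies identical data), but
    under the proposed restriction it must NOT merge accessors referenced
    from primitives whose compressed data is in different Draco bufferViews.
    """
    seen = {}           # canonical JSON -> new accessor index
    remap = {}          # old accessor index -> new accessor index
    new_accessors = []
    for i, acc in enumerate(gltf["accessors"]):
        key = json.dumps(acc, sort_keys=True)
        if key not in seen:
            seen[key] = len(new_accessors)
            new_accessors.append(acc)
        remap[i] = seen[key]
    gltf["accessors"] = new_accessors
    for mesh in gltf.get("meshes", []):
        for prim in mesh["primitives"]:
            prim["attributes"] = {k: remap[v] for k, v in prim["attributes"].items()}
            if "indices" in prim:
                prim["indices"] = remap[prim["indices"]]
    return gltf

# Two Draco primitives with identical accessor metadata but different
# compressed bufferViews: the naive pass collapses accessors 0 and 1.
gltf = {
    "accessors": [
        {"componentType": 5126, "count": 24, "type": "VEC3"},
        {"componentType": 5126, "count": 24, "type": "VEC3"},
    ],
    "meshes": [{"primitives": [
        {"attributes": {"POSITION": 0},
         "extensions": {"KHR_draco_mesh_compression":
                        {"bufferView": 0, "attributes": {"POSITION": 0}}}},
        {"attributes": {"POSITION": 1},
         "extensions": {"KHR_draco_mesh_compression":
                        {"bufferView": 1, "attributes": {"POSITION": 0}}}},
    ]}],
}
dedupe_accessors(gltf)
```

Note that the `attributes` map inside the extension holds Draco attribute ids, not accessor indices, so only the top-level `attributes` and `indices` need remapping here.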
Hi @JeremieA, to make sure I understand: say you have two primitives that have exactly the same compressed data and reference the same bufferView. Then you want to make sure that the "attributes" and "indices" of the two primitives contain the same accessor ids? I attached two glTF files: one that I think would be valid according to this restriction, and one that would be invalid.
^If so, my proposed wording in the original comment should mention
I think we are going to need to decide on this and (one way or another) add further detail to the extension specification. Another case that is not handled well by my proposed wording above: suppose two primitives have the same POSITION/NORMAL data and share the same Draco buffer for those attributes, and suppose each primitive also has a COLOR attribute stored in an uncompressed accessor. I think that would be allowed, but it isn't clear. If so, we should amend the restriction to:
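A sketch of that mixed case, written as Python dicts for readability (the accessor indices are made up): two primitives share Draco-compressed POSITION/NORMAL accessors because their compressed data is in the same bufferView, while each keeps its own uncompressed COLOR_0 accessor backed by a regular bufferView.

```python
# Primitives sharing compressed POSITION/NORMAL (accessors 0 and 1, both
# backed by Draco bufferView 0), with per-primitive uncompressed COLOR_0
# accessors (2 and 3) read from their own bufferViews as usual.
primitives = [
    {
        "attributes": {"POSITION": 0, "NORMAL": 1, "COLOR_0": 2},
        "extensions": {"KHR_draco_mesh_compression": {
            "bufferView": 0,
            "attributes": {"POSITION": 0, "NORMAL": 1},
        }},
    },
    {
        "attributes": {"POSITION": 0, "NORMAL": 1, "COLOR_0": 3},
        "extensions": {"KHR_draco_mesh_compression": {
            "bufferView": 0,
            "attributes": {"POSITION": 1, "NORMAL": 1},
        }},
    },
]
# Fix: both primitives decode from the same stream, so their Draco
# attribute ids match; correct the typo above before using this.
primitives[1]["extensions"]["KHR_draco_mesh_compression"]["attributes"]["POSITION"] = 0

# Only attributes listed in the extension come from the compressed stream.
compressed = set(
    primitives[0]["extensions"]["KHR_draco_mesh_compression"]["attributes"])
uncompressed = set(primitives[0]["attributes"]) - compressed
```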
I'm unclear on what this restriction means when the Draco extension is required. Currently, when Adobe Dimension outputs Draco, I assign dummy data to the "regular" attributes and indices to satisfy the validator, but I expect that this data is never used. When multiple different meshes are present, the bufferViews are of course different, but the dummy data is the same. This works fine in Babylon, but three.js thinks the meshes are the same and just reuses a copy of the same mesh rather than loading a new one. My assumption was that this was a bug in the three.js loader, but if this thread implies that regular attributes must be different for different meshes, even if they aren't used, I should change the exporter.
That is what the thread implies, yes — if the Draco extension is required and two meshes have dummy POSITION attributes pointing at accessor 0, but the meshes' compressed data is in different bufferViews, the asset would be invalid under this restriction. In other words, loaders should be able to pre-process an asset, replacing dummy accessors with decompressed Draco data, without needing to create any new accessors in the process. That said, I thought we'd managed to avoid relying on this restriction in the three.js loader — do you mind sharing an example that reproduces the issue so I can debug? I'm still open to merging the restriction or omitting it, either way.
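That pre-processing idea can be sketched as follows (a hypothetical outline, not the three.js loader; `decode_draco` stands in for a real Draco decoder call): decode each attribute and attach the result to the accessor the primitive already references, creating no new accessors.

```python
def resolve_draco_accessors(gltf, decode_draco):
    """Pre-process a glTF dict under the proposed restriction: attach
    decoded Draco data to existing accessors, allocating nothing new.

    `decode_draco(buffer_view, draco_attr_id)` is a hypothetical stand-in
    for a real decoder and returns the decoded values for one attribute.
    """
    for mesh in gltf.get("meshes", []):
        for prim in mesh["primitives"]:
            ext = prim.get("extensions", {}).get("KHR_draco_mesh_compression")
            if not ext:
                continue
            for name, acc_index in prim["attributes"].items():
                # If this accessor were also referenced by a primitive whose
                # data lives in a *different* Draco bufferView, this line
                # would clobber that primitive's data -- which is exactly
                # the situation the restriction rules out.
                gltf["accessors"][acc_index]["_decoded"] = decode_draco(
                    ext["bufferView"], ext["attributes"][name])
    return gltf

# Exercise the sketch with a fake decoder that just records its inputs.
gltf = {
    "accessors": [{"componentType": 5126, "count": 3, "type": "VEC3"}],
    "meshes": [{"primitives": [{
        "attributes": {"POSITION": 0},
        "extensions": {"KHR_draco_mesh_compression":
                       {"bufferView": 5, "attributes": {"POSITION": 0}}},
    }]}],
}
resolve_draco_accessors(gltf, lambda bv, attr_id: ("decoded", bv, attr_id))
```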
Sure. I'll send you a sample file.
Thanks, I looked into this and it's a bug in the three.js loader. We cache on the accessor indices to determine when a mesh is reused (because glTF meshes cannot be instantiated independently of their materials), and we didn't account for extensions in that caching. We'll fix that, but I'm still undecided on this restriction.
Should be fixed soon: mrdoob/three.js#15084. |
To summarize from this week's call — (1) There has been some misunderstanding about the meaning of attribute accessors when the draco extension is required, and there is no uncompressed fallback data. The intention was that accessors must still contain valid
The second quote is non-normative. I believe we should include normative requirements that these accessor properties correctly describe the compressed data. The other interpretation is not implementable, because the Draco data does not (to my knowledge) include type or component type information. @MiiBond any concerns with this? (2) The other issue, and the original topic of this thread, was whether two accessors that really do have the same properties (given the requirements above) could be referenced by two different primitives whose compressed data is stored in two different Draco buffers. I don't have a strong opinion on (2). We could add the restriction as a "bug fix", but the issue has not come up in practice.
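As a sketch of what such a normative requirement could mean in practice (the `describes_compressed_data` check, its field list, and the sample accessor are my assumptions for illustration, not spec text): the accessor omits `bufferView`/`byteOffset`, but its remaining properties must accurately describe the decoded stream.

```python
# Properties a loader needs in order to allocate and interpret the
# decoded Draco stream, even though no bufferView is referenced.
REQUIRED = ("componentType", "count", "type")

def describes_compressed_data(accessor):
    """Hypothetical check: True if the accessor carries decode metadata
    and does not point at an (unused) bufferView."""
    return all(key in accessor for key in REQUIRED) and "bufferView" not in accessor

position_accessor = {
    "componentType": 5126,       # FLOAT
    "count": 1024,               # vertex count of the decoded stream
    "type": "VEC3",
    "min": [-1.0, -1.0, -1.0],   # bounds of the *decoded* positions
    "max": [1.0, 1.0, 1.0],
}
```

Note this sketch assumes no uncompressed fallback; when fallback data is provided, `bufferView` would of course be present.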
@lexaknyazev did you have a preference on (2)? I didn't fully catch your comment on the call.
On (1): … (2): …
Good to know. I'm having trouble finding the right enum to get that information back out at decode time, but it appears that the data exists anyway. Do you agree we should clarify that the accessor data still has to be correct, or are you suggesting we loosen this?
I think that #1343 covers the accessor requirements.
Probably wouldn't hurt to specify that all accessor properties except bufferView and byteOffset apply, but agreed.
@lexaknyazev @donmccurdy what are the next steps here?
(1) is implied by the spec already, but could be clearer. The second point is not so clear, and it seems like we're leaning toward (2a) rather than (2b).
I think that once (1) is solved, (2) becomes less of an issue, as different meshes will typically have different stats for (1), requiring separate accessors. For the rare case where two different Draco streams happen to have exactly matching metadata for (1), I don't have a strong opinion on (2a) vs (2b). Do implementations have a strong need for one choice or the other?
I'm just catching up on this thread. I've updated the Draco logic in the Adobe Dimension exporter, but I'm wondering if I've misunderstood this again. I've set the accessor information to match the uncompressed stream, not the compressed one.
I'm happy with either, just so long as something is decided. :)
I'll make up some definitions for the sake of discussion:
The only way to compute accessor info for compressed data is by reading the decompressed data. If someone is writing fallback (non-Draco) data into accessors, they must use the decompressed data. In no case should uncompressed data be used as a fallback, or to populate accessor info. @MiiBond that sounds like what you've described, but I'm not sure?
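A minimal sketch of computing that accessor info from the decoded stream (the helper name and the tuple-per-vertex input format are assumptions for illustration): `count` is the number of decoded elements, and `min`/`max` are per-component bounds of what the decoder actually produced, never of the pre-compression source data.

```python
def accessor_info_from_decoded(values, component_type=5126, type_="VEC3"):
    """Build accessor metadata from the *decoded* Draco stream.

    `values` is a sequence of equally sized tuples, one per element
    (e.g. one (x, y, z) tuple per decoded vertex position).
    """
    components = len(values[0])
    return {
        "componentType": component_type,
        "type": type_,
        "count": len(values),
        "min": [min(v[i] for v in values) for i in range(components)],
        "max": [max(v[i] for v in values) for i in range(components)],
    }

# Three decoded positions; bounds are taken component-wise.
decoded = [(0.0, 1.0, 2.0), (-1.0, 0.5, 2.5), (0.25, -3.0, 0.0)]
info = accessor_info_from_decoded(decoded)
```

Because Draco compression is lossy (quantized), these bounds can differ slightly from the original uncompressed data's bounds, which is why the decoded stream has to be the source of truth here.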
Thanks, @donmccurdy. Yeah, I'm using decompressed data now. I just wanted to make sure, because you mentioned "Open PR to clarify that count, min, max, componentType, and type must reflect compressed data accurately".
I'll solidify my vote to (2b): disallow sharing of the same accessor between different Draco streams. My thinking is: the typical case is that different Draco streams within a model should not be expected to share bounding boxes and other such details, so typically (1) will require a separate accessor for each unique Draco stream. It would be a rare case where two unrelated Draco streams are similar enough that there could be a shared accessor. This would be a micro-optimization, given that such an accessor doesn't contain any data; it's just a few bytes of JSON metadata. Thus it potentially places a burden on our readers to look for this micro-optimization and undo it when necessary. So, my vote is to disallow that. Thoughts?
That works for me. 👍
There was a proposed restriction from @JeremieA in PR #874, which I think we mostly agreed with but that didn't make it into the spec before release:
(struck the portion about morph targets, which have been left for a future version or new extension)
I'm fine with adding this requirement, although the three.js implementation does not depend on it. We could also wait and see if the issue is raised by others implementing the extension.