[request] BoundingBox filter node #744
Comments
Quick note here to accompany my thumbs up. I use photogrammetry a lot for exterior views of houses. Images are taken with a drone and often include views from a little altitude. I just did a house in a valley; you can imagine the amount of data I get, spanning several kilometers around (mountains in the background, houses in the neighborhood). This makes things incredibly difficult to process unless you have a monster workstation with terabytes of RAM :p In such cases, the advantage of a bounding box strikes me as obvious.
Image masking can be used to reconstruct only a specific area.
@natowi image masking is a different feature (but still beneficial to have; Zephyr has nice tooling and visualization for it too). As my issue states, in Zephyr the first processing result visualizes camera alignment with a sparse point cloud, and the next processing step produces the dense point cloud. You can mask images prior to that, and I have done so within that software, but I've still found benefit in a bounding volume that I can position/scale/orient to exclude everything outside of it. 3DF Zephyr can be used for free with <=50 images if you'd like to get a feel for the feature, or to compare their image masking tooling in addition to the bounding box feature.
Image masking will be (partially) supported with the next release or by building from source. |
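For anyone unfamiliar with the masking approach discussed above, the general idea is to pair each input image with a binary mask so that masked-out pixels do not contribute to the reconstruction. The following is only a generic illustration using OpenCV with placeholder file names, not the actual ImageMasking node implementation:

```python
# Generic illustration of per-image binary masking (not Meshroom's ImageMasking node).
# Assumes OpenCV is installed; "house.jpg" and "house_mask.png" are placeholder files.
import cv2

image = cv2.imread("house.jpg")                            # input photo
mask = cv2.imread("house_mask.png", cv2.IMREAD_GRAYSCALE)  # white = keep, black = ignore

# Zero out the pixels that should not contribute to the reconstruction.
masked = cv2.bitwise_and(image, image, mask=mask)
cv2.imwrite("house_masked.jpg", masked)
```

In practice a pipeline would typically consume the mask alongside the original image rather than a pre-multiplied copy, but the idea of restricting the reconstruction to the white region is the same.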
To accompany my thumbs up: I think such a bounding box feature should be implemented to work with the output from the SfM node BEFORE the DepthMap step. Because DepthMap is the most computationally expensive step, limiting this computation to only the relevant parts of the reconstruction would bring the biggest performance benefit. Admittedly, this is not a trivial task, since it needs an extra interface in the GUI and a relatively big change in the DepthMap code.
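To make that concrete, here is a minimal sketch of the kind of filtering that could run between the SfM and DepthMap steps: keep only the sparse landmarks that fall inside an axis-aligned bounding box. Loading the SfM output into a NumPy array is assumed and out of scope here; this is not Meshroom code, just an illustration of the idea:

```python
# Sketch only: filter sparse SfM landmarks by an axis-aligned bounding box.
# `landmarks` is assumed to be an (N, 3) array of 3D points already loaded
# from the SfM output; the loader itself is out of scope here.
import numpy as np

def filter_landmarks(landmarks: np.ndarray,
                     bbox_min: np.ndarray,
                     bbox_max: np.ndarray) -> np.ndarray:
    """Return only the points that lie inside [bbox_min, bbox_max]."""
    inside = np.all((landmarks >= bbox_min) & (landmarks <= bbox_max), axis=1)
    return landmarks[inside]

# Example: keep a 30 m x 30 m x 20 m region around the house, drop the valley.
landmarks = np.random.uniform(-500, 500, size=(100_000, 3))  # stand-in data
kept = filter_landmarks(landmarks,
                        bbox_min=np.array([-15.0, -15.0, -5.0]),
                        bbox_max=np.array([15.0, 15.0, 15.0]))
print(f"{len(kept)} of {len(landmarks)} landmarks kept")
```

The same test could also help decide which depth maps are worth computing at all, for example by skipping cameras whose observed landmarks all fall outside the box.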
Now available with the 2020 release: #256 (comment). The remark by @djoerg is similar to #589 and #665; it may be resolved by the WIP ImageMasking node.
Is your feature request related to a problem? Please describe.
A bounding box similar to the feature in 3DF Zephyr allows you to define a volume (interactively in the viewport with a gizmo) so that data outside of it is ignored in subsequent processing steps. This should reduce computation and memory spent on observed data that the user knows is not important.
Describe the solution you'd like
In other software, a light (sparse) point cloud is generated during the alignment/extraction step. The bounding box can then indicate the area of interest in case near/far points were added that are not relevant for the final output (e.g. background/distant data of a scene, or the base the subject rests on top of).
This avoids generating dense points outside of the volume, meshing outside of the volume, or producing any other data that doesn't contribute much value to the content within the bounding box for later operations like texturing.
A similar tool can also delete points/triangles, useful for removing the base surface.
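As a rough sketch of how such a gizmo-style box could work internally: the box is described by a center, a rotation and per-axis half-extents, and a point is kept if its coordinates in the box's local frame lie within those extents. This is a generic illustration, not how any particular package implements it:

```python
# Sketch: containment test for an oriented bounding box defined by a gizmo-like
# transform (center, rotation, per-axis half-extents). Illustration only.
import numpy as np

def inside_oriented_box(points: np.ndarray,
                        center: np.ndarray,
                        rotation: np.ndarray,
                        half_extents: np.ndarray) -> np.ndarray:
    """Boolean mask of points inside the oriented box.

    points       : (N, 3) array of 3D points in world space
    center       : (3,) box center in world space
    rotation     : (3, 3) rotation matrix of the box (local -> world)
    half_extents : (3,) half-lengths of the box along its local axes
    """
    # Express the points in the box's local frame (equivalent to rotation.T @ p),
    # then compare each coordinate against the half-extents.
    local = (points - center) @ rotation
    return np.all(np.abs(local) <= half_extents, axis=1)
```

The same mask can drive both "ignore everything outside the box" during densification/meshing and "delete these points/triangles" in an editing tool.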
Describe alternatives you've considered
Current advice is to handle this via external tools like MeshLab.
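For completeness, that external workaround can also be scripted. Below is a minimal sketch using the trimesh library (file names are placeholders) that crops an exported mesh to an axis-aligned box by discarding faces with any vertex outside; note that this simple version does not carry over texture coordinates:

```python
# Sketch: crop an exported mesh to an axis-aligned box outside of Meshroom,
# similar to what one would do interactively in MeshLab. File names are placeholders.
import numpy as np
import trimesh

mesh = trimesh.load("texturedMesh.obj", force="mesh")

bbox_min = np.array([-15.0, -15.0, -5.0])
bbox_max = np.array([15.0, 15.0, 15.0])

# Keep only faces whose three vertices all lie inside the box.
vertex_inside = np.all((mesh.vertices >= bbox_min) & (mesh.vertices <= bbox_max), axis=1)
face_inside = vertex_inside[mesh.faces].all(axis=1)

cropped = trimesh.Trimesh(vertices=mesh.vertices, faces=mesh.faces[face_inside])
cropped.remove_unreferenced_vertices()
cropped.export("croppedMesh.obj")
```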