
It is possible to do a reconstruction without the DepthMap node, and thus without an NVIDIA graphics card (DepthMap requires CUDA). Draft meshing is much faster than the depth-map pipeline, but the resulting mesh is of lower quality, so it is still recommended to use DepthMap to generate the mesh whenever possible.

With the 2023.3 release you can now load the Draft Meshing pipeline directly.
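If you prefer to run the draft pipeline headless, the sketch below calls `meshroom_batch` from Python. This is not taken from the wiki: the template name `photogrammetryDraft`, the paths, and the exact flags are assumptions, so check `meshroom_batch --help` for your release before relying on it.

```python
# Minimal sketch: run the Draft Meshing pipeline without the GUI.
# Assumptions (verify for your Meshroom version): meshroom_batch is on PATH,
# and "photogrammetryDraft" is the name of the draft pipeline template.
import subprocess
from pathlib import Path

images = Path("path/to/images")   # hypothetical input folder
output = Path("path/to/output")   # hypothetical output folder
output.mkdir(parents=True, exist_ok=True)

subprocess.run(
    [
        "meshroom_batch",
        "--input", str(images),
        "--pipeline", "photogrammetryDraft",  # assumed template name; may differ per release
        "--output", str(output),
    ],
    check=True,
)
```

Afterwards you can open the resulting project in the Meshroom GUI to inspect the mesh.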

To improve the reconstruction quality, increase the Describer Density and Describer Quality in the FeatureExtraction node. This can roughly double the density of the final mesh, although results will vary with your data. Enabling additional describer types (try AKAZE) may also help. Experiment to find the ideal values for your use case.
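As a rough sketch of how these settings could be scripted, the snippet below edits a project file saved from the GUI. It assumes the .mg file is plain JSON and that the FeatureExtraction node exposes `describerPreset`, `describerQuality`, and `describerTypes` inputs; verify the exact attribute names in your own project file first.

```python
# Minimal sketch: bump FeatureExtraction settings in a saved Meshroom project (.mg).
# Assumptions: the .mg file is JSON with a "graph" dict of nodes, and the attribute
# names below ("describerPreset", "describerQuality", "describerTypes") match your version.
import json

project = "draft_meshing.mg"  # hypothetical project file saved from the GUI
with open(project) as f:
    data = json.load(f)

for name, node in data["graph"].items():
    if node["nodeType"] == "FeatureExtraction":
        inputs = node.setdefault("inputs", {})
        inputs["describerPreset"] = "high"               # Describer Density preset (assumed name)
        inputs["describerQuality"] = "high"              # Describer Quality (assumed name)
        inputs["describerTypes"] = ["dspsift", "akaze"]  # optionally enable AKAZE (assumed values)

with open(project, "w") as f:
    json.dump(data, f, indent=4)
```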

See also: https://github.com/alicevision/Meshroom/issues/2292

Notes on Meshroom versions prior to 2023

(Feature introduced in v2019.1)

You should use the HIGH preset on the FeatureExtraction node to get enough density for the Meshing. See Reconstruction-parameters.

Quality comparison:

Monstree dataset: default reconstruction (with DepthMap) vs. draft meshing (comparison images).

Check the draft reconstruction of the 41-image dataset: https://skfb.ly/6YsKH
If you have only a few images, not selecting AKAZE in the StructureFromMotion node can increase the point-cloud density. Adjust the Describer Preset in FeatureExtraction if necessary.

Update: the default draft meshing in the 2021 release gives a more complete model.


Note that enabling AKAZE may lead to a worse result in this version with the default settings (see https://github.com/alicevision/meshroom/issues/1314).

If that is the case, disconnect the Describer Types connection between the FeatureMatching and StructureFromMotion nodes (a file-level sketch of the same change follows below).
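For a scripted equivalent of that GUI step, the sketch below pins an explicit describer list on the StructureFromMotion node instead of the value linked from FeatureMatching. It assumes the .mg project file is JSON and that connected attributes are stored as link strings such as "{FeatureMatching_1.describerTypes}"; treat the attribute and describer names as assumptions and check them against your own project.

```python
# Minimal sketch: replace the describerTypes link on StructureFromMotion with an
# explicit list that leaves out AKAZE (the file-level equivalent of disconnecting
# the edge in the GUI). Attribute and describer names are assumptions.
import json

project = "draft_meshing.mg"  # hypothetical project file
with open(project) as f:
    data = json.load(f)

for name, node in data["graph"].items():
    if node["nodeType"] == "StructureFromMotion":
        inputs = node.setdefault("inputs", {})
        # Overwrites a link string such as "{FeatureMatching_1.describerTypes}".
        inputs["describerTypes"] = ["sift"]  # or "dspsift", depending on your version (assumed)

with open(project, "w") as f:
    json.dump(data, f, indent=4)
```

Re-open the edited project in Meshroom to confirm the connection was replaced before recomputing.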