Allow for multiple meshes #864
Comments
Yes.
...and then align the meshes alicevision/AliceVision#425
Hi Fabien, are you agreeing with me or saying this can already be done?
Hmm, I think I see where you are going here. Duplicating the graph at the SfM node will result in two paths that both generate meshes. But how do I force the first SfM to model one set and the second SfM the other set? Or should I be duplicating nodes for the second line at one of the feature nodes?
Unfortunately it is not possible to do that in Meshroom, but I agree with the need.
Thank you for the reply. Actually we can do it, sort of, in a roundabout way. Take a 300-image data set where 200 images align with each other and 100 align with each other, which really means two data sets that don't align together. If we add one set, say the 200 set, run the graph, then go back and augment with the second set, we get two CameraInit groups with two distinct data sets. Now delete ImageMatchingMultiSfM and replace it with a standard ImageMatching node chained normally as in the default graph. You now have two independent graphs processing two different image sets, which end up as two separate models that can then be realigned externally in, say, MeshLab. At the end of the day that result could also be achieved simply by running each set in its own project file, but we still need to run the initial graph to determine which camera set is not being merged with the other, delete those cameras, and add them to a new project. Unless we can do a twist on augmentation where we determine which camera set was not aligned with the other and add that set by itself to a second group. Now to figure out scale and alignment of the two resulting models.
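On the "realigned externally" step: if you can pick three or more corresponding points on the two exported models (e.g. in MeshLab), the similarity transform between them can be recovered with the standard Kabsch/Umeyama procedure. A minimal numpy sketch; the point coordinates below are made up purely for illustration:

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate a similarity transform (scale, rotation, translation)
    mapping src points onto dst points (Kabsch/Umeyama)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    S, D = src - mu_s, dst - mu_d          # centered point sets
    H = S.T @ D                            # 3x3 cross-covariance
    U, sig, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # avoid a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    scale = (sig * [1.0, 1.0, d]).sum() / (S ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Example: recover a known transform (scale 2, translation [3,-1,0.5])
pts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
moved = 2.0 * pts + np.array([3.0, -1.0, 0.5])
s, R, t = rigid_align(pts, moved)
aligned = s * pts @ R.T + t   # maps pts onto moved
```

With the transform in hand you can bake it into one mesh and keep the other fixed; MeshLab's point-based alignment tool does essentially the same computation interactively.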
That's exactly what I was thinking and suggesting. I had the augment node loaded and it occurred to me that there was a specific CameraInit with a unique image set. Unchaining ImageMatchingMultiSfM from the original graph and replacing it with a standard ImageMatching node gets you two separate graphs with unique image sets.
Wouldn't it already help a lot if you could register relative positions within an image set, without the need to redo the product of combinations across the entire set, and then mark the two (or more) images between the two image sets as connection points?
I'm having a somewhat similar issue. I have a top and a bottom half. I load images from one half, then augment with images from the second half; a normal setup with the ImageMatchingMultiSfM node. In the SfM for the first half, all cameras are registered and the SfM shows fine for that half; then the second half registers fine in the SfM node for those images (through the ImageMatchingMultiSfM node), but only the second half. Thus it is not combining the two SfM nodes, so I figure it can't find the connection between the two sets. But if I load the SfM node from the first half into the viewer, and then also load the SfM from the second half into the viewer, they are aligned perfectly, and I'm just missing a way to merge the two separate SfMs into a single one for further processing in DepthMap and meshing. Does anyone know if this is possible?
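For the "merge the two separate SfMs" idea: I'm not aware of a supported node that does this, so treat the following only as a sketch of the concept. Since both reconstructions already happen to share a coordinate frame, one could imagine splicing the two sfm.json files together. The top-level keys ("views", "intrinsics", "poses", "structure") and the ID field names are assumptions about the file layout and should be checked against your Meshroom version:

```python
def merge_sfm(sfm_a, sfm_b):
    """Naively merge two already-aligned sfmData dicts by concatenating
    views/intrinsics/poses/structure and de-duplicating on IDs.
    Assumes both reconstructions share the same coordinate frame."""
    merged = dict(sfm_a)
    for key, id_field in [("views", "viewId"),
                          ("intrinsics", "intrinsicId"),
                          ("poses", "poseId")]:
        seen = {item[id_field] for item in sfm_a.get(key, [])}
        merged[key] = sfm_a.get(key, []) + [
            item for item in sfm_b.get(key, []) if item[id_field] not in seen]
    # 3D landmarks have no shared IDs across runs; just concatenate them
    merged["structure"] = sfm_a.get("structure", []) + sfm_b.get("structure", [])
    return merged

# Tiny synthetic example (NOT real Meshroom output)
a = {"views": [{"viewId": "1"}], "intrinsics": [{"intrinsicId": "10"}],
     "poses": [{"poseId": "1"}], "structure": [{"X": [0, 0, 0]}]}
b = {"views": [{"viewId": "1"}, {"viewId": "2"}],
     "intrinsics": [{"intrinsicId": "10"}],
     "poses": [{"poseId": "2"}], "structure": [{"X": [1, 1, 1]}]}
m = merge_sfm(a, b)
```

Real sfmData files also tie landmark observations back to view IDs, so images shared between the two runs would need more care than this naive concatenation.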
@nufanDK does it find a connection if you only load the "connection" side?
Why not load top and bottom at the same time? I find Meshroom handles this very well if you provide overlap. What is the object, and what is your setup: turntable in a light box, or "in the wild"?
@NexTechAR-Scott it still might not find the overlap because of a different initial image pair.
If you feed it a 3rd view it will find the overlap. And that's only generally needed if one side is more feature-rich than the other; you can get away with just 0 and 180 degrees if the two sides have some parity in features.
@NexTechAR-Scott you miss the point. Given a closed-world assumption (hence no extra information), the image set in combination with the configured Meshroom parameters cannot find the right sequence that would build a path with overlap.
I was considering rotating and mirroring the images from one of the sides to better simulate that the object was hanging in midair when the images were captured, but this may make more sense in my mind than in actual real-world practice.
Not missing the point at all. I'm the one who started this thread for exactly the same reason, so I certainly understand the issue. But until Meshroom is capable of handling this issue on its own, users need some solution or workaround, and what I have outlined above works every single time. Have you tried it? Are you speaking from experience? A lot of the "issues" users run into with every photogrammetry suite can be mitigated by modifying their capture technique.
Have any clear plastic cups laying around? Transparent won't mesh.
Elevate the object off the turntable.
Enable "keep largest mesh".
You are right, but here you step outside the closed-world assumption. Modifying a capture technique might mitigate the issue, so for @nufanDK it might be a solution given the current software (also a closed-world assumption). Both changing the technique and improving the software change the scope.
Thank you for this! As well as the image. I was afraid too much distortion would happen through the transparent object, but I will definitely try something like this the next time. Initially I had planned to suspend it midair by a few wires, but it was too difficult to keep the item still.
As a follow-up, I tried importing the images of both sides into Meshroom as a single set. Standard settings only reconstructed 33 of 176 cameras, but enabling "Guided Matching" in FeatureMatching resulted in all 176 cameras (sides 1+2 combined) being reconstructed. The only caveat is that it took significantly longer to compute (approx. 10 hours on my laptop).
MergeMeshes node: #1271 could be helpful
Ohh, is this new? Exciting. Will check this out, thanks.
Hi, @NexTechAR-Scott did you get some results? I don't get it to run successfully, and I don't really know how to run it (where best to place the node, etc.) :) Merging meshes into one big mesh would be great. You could focus on some area (e.g. one side of a house), build the textured mesh, and save it. Then you could go on with another side of the house, build a textured mesh, and save it. Then merge these two (or even more) meshes into one, if possible with all camera poses (so that you could do ExportAnimatedCamera for the one big mesh with all cameras). By concentrating on one mesh (e.g. one side of a house) with e.g. 50-100 pictures, the run time to the textured mesh (while changing parameters) is "nearly" acceptable. If you need to put in all images for one big mesh, the number of images could easily be 10,000 or more, and waiting for that textured mesh is annoying :) Especially if not all camera poses are recognized and you need to redo something, change parameters, or put in more images.
@flobotics if you have overlapping cameras in your different meshes, you can align your meshes based on the common camera
...would you be able to make an instruction video for this?
I took some photos of the left side of a table (the left table leg in the pictures) with 4 CCTag3 markers, then ran them through Meshroom with sift+cctag3. Then I added an SfMTransform node after the SfM node, with "from-markers" and 4 markers with scale 0.885 (the 4 CCTag3 markers are 8.85 cm away from each other). Then I took some photos of the right side of the same table (the right table leg in the pictures) with the same 4 CCTag3 markers at the same spot, and ran them through Meshroom with sift+cctag3. Then I added these nodes; inside the MergeMeshes node I needed to disable "Pre-Process" and "Post-Process" to make it work. The resulting STL file has both table legs in it when opened with Blender. But how do I texture this new merged STL file with Meshroom? On the left side is the Meshing node's mesh.obj, on the right side the merged mesh with both table legs.
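On the scale value: SfMTransform's marker mode handles this internally, but whatever convention it uses, a uniform scale factor that maps a reconstruction onto a known marker spacing is just the known distance divided by the distance measured in the reconstruction. A small sketch; the marker coordinates here are made up, only the ratio matters:

```python
import math

def scale_from_markers(p_a, p_b, known_distance):
    """Uniform scale factor mapping the reconstructed distance between
    two marker centers onto their known physical distance."""
    measured = math.dist(p_a, p_b)
    return known_distance / measured

# e.g. two CCTag centers read from the SfM point cloud (hypothetical
# coordinates), with a known physical spacing of 0.0885 m (8.85 cm)
s = scale_from_markers((0.0, 0.0, 0.0), (0.1, 0.0, 0.0), 0.0885)
```

A sanity check like this is useful before merging: if both halves were scaled with the same markers, their reconstructions should come out in the same units, which is what makes a direct mesh merge plausible at all.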
I believe this was raised once before but am now struggling to remember the search query that led me to it.
Scenario: capturing from multiple sides to get, for example, a complete shoe.
We have great success with elevating objects on a transparent platform, doing a rotation then rotating the object 90 degrees and doing another rotation.
So let's say that is a 300-image set, with 200 cameras for one rotation and 100 for the second.
During SfM the 200 may not align but the 100 do, so what can be processed as a model is only 100 cameras.
The assumption up till now has always been that the 200 cameras simply could not be aligned.
OK, make a new project and ONLY use the 200 cameras that failed to align and voilà, they all align and look great.
That's a bit of a head-scratcher until we realize what is happening: we are essentially getting two distinct image sets that can't align to each other, so MR is eliminating one of them.
If you run that same 300-image data set through RealityCapture you will get two (or more) components that can be meshed and realigned manually.
It seems that MR tries really hard to generate one complete mesh, and if it can't use some cameras (even though those cameras align on their own) it just ignores them.
I have tried SFM augmentation as has often been suggested but continue to end up with the same result, only a portion of the cameras available for meshing.
In summation, it does not appear that cameras simply can't be aligned due to poor quality or lack of features; rather, in certain circumstances, sets (each consisting of cameras that align within the set) can't align with each other.
Make sense?