
Allow for multiple meshes #864
Open
NexTechAR-Scott opened this issue Apr 22, 2020 · 29 comments
Labels: feature request (feature request from the community)

Comments

@NexTechAR-Scott

I believe this was raised once before but am now struggling to remember the search query that led me to it.

Scenario: capturing from multiple sides to get, for example, a complete shoe.

We have great success with elevating objects on a transparent platform, doing a rotation then rotating the object 90 degrees and doing another rotation.

So let's say that is a 300-image set, with 200 cameras from one rotation and 100 from the second.

During SfM, the 200 may not align but the 100 do, so what can be processed into a model is only 100 cameras.

The assumption up till now has always been that the 200 cameras simply could not be aligned.

OK, make a new project and ONLY use the 200 cameras that failed to align and voilà, they all align and look great.

That's a bit of a head-scratcher until we realize what is happening: we are essentially getting two distinct image sets that can't align to each other, so MR is eliminating one of them.

If you run that same 300-image data set through RealityCapture you will get two (or more) components that can be meshed and realigned manually.

It seems that MR tries really hard to generate one complete mesh, and if it can't use some cameras (even though those cameras align on their own) it just ignores them.

I have tried SfM augmentation, as has often been suggested, but continue to end up with the same result: only a portion of the cameras available for meshing.

In summation, it does not appear that cameras simply can't be aligned due to poor quality or lack of features; rather, in certain circumstances, sets (each consisting of cameras that align within the set) can't align with each other.

Make sense?

NexTechAR-Scott added the feature request label on Apr 22, 2020
@fabiencastan (Member)

Yes.
It would be great to be able to generate multiple scenes from the SfM when all cameras cannot be aligned altogether. And then create a mesh for each part.

@natowi (Member)

natowi commented Apr 22, 2020

...and then align the meshes alicevision/AliceVision#425

@NexTechAR-Scott (Author)

> Yes.
> It would be great to be able to generate multiple scenes from the SfM when all cameras cannot be aligned altogether. And then create a mesh for each part.

Hi Fabien, are you agreeing with me or saying this can already be done?

@NexTechAR-Scott (Author)

> Yes.
> It would be great to be able to generate multiple scenes from the SfM when all cameras cannot be aligned altogether. And then create a mesh for each part.
>
> Hi Fabien, are you agreeing with me or saying this can already be done?

Hmm, I think I see where you are going here.

Duplicating the graph at the SfM node will result in two paths that both generate meshes.

How do I force the first SfM to model one set and the second SfM the other set?

Or am I looking to duplicate nodes for the second branch at one of the feature nodes?

@fabiencastan (Member)

Unfortunately it is not possible to do that in Meshroom, but I agree with the need.
Your issue was only talking about Meshing, but we will need to support the notion of multiple 3D scenes in one dataset in all nodes from SfM to Texturing.

@NexTechAR-Scott (Author)

> Unfortunately it is not possible to do that in Meshroom, but I agree with the need.
> Your issue was only talking about Meshing, but we will need to support the notion of multiple 3D scenes in one dataset in all nodes from SfM to Texturing.

Thank you for the reply.

Actually we can do it, sorta, in a roundabout way.

A 300-image data set where 200 images align with themselves and 100 align with themselves really means two data sets that don't align together.

So if we add one set, say the 200 set, run the graph, then go back and augment with the 2nd set, we get CameraInit groups with two distinct data sets.

Now delete the ImageMatchingMultiSFM node and replace it with a standard ImageMatching node, chained normally as in the default graph.

Now you have two independent graphs processing two different image sets, and you end up with two (separate) models which can then be realigned externally in, say, MeshLab.

At the end of the day, that result could also be achieved simply by running each set in its own project file.

But we still need to run the initial graph so we can determine which camera set is not being merged with the other, delete those cameras, and add them to a new project.

Unless we can do a twist on augmentation where we determine which camera set was not aligned with the other and add that set by itself to a second group.

Now to figure out the scale and alignment of the two resulting models.
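
For anyone who prefers to script the "run each set in its own project file" approach rather than rewire the graph in the GUI, here is a minimal sketch. It assumes the two subsets have already been sorted into separate folders (folder names here are hypothetical) and that Meshroom's batch CLI is on the PATH; the executable is `meshroom_batch` in recent releases (`meshroom_photogrammetry` in older ones), both taking `--input` and `--output`:

```python
import subprocess
from pathlib import Path

# Hypothetical layout: images/set_a holds the 200 cameras that align among
# themselves, images/set_b the other 100. Each subset is reconstructed as
# its own project, exactly like running two separate Meshroom files.
for subset in ("set_a", "set_b"):
    in_dir = Path("images") / subset
    out_dir = Path("output") / subset
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["meshroom_batch", "--input", str(in_dir), "--output", str(out_dir)],
        check=True,
    )
```

The two resulting models can then be aligned externally; see the alignment sketch further down the thread.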

@natowi (Member)

natowi commented Apr 23, 2020

> …determine which camera set is not being merged with the other, delete them and add them to a new project…

We could move the images to a new image group, as is done when using Augment Reconstruction. For that we need to take care of #510.

Could be combined with #514

@NexTechAR-Scott (Author)

NexTechAR-Scott commented Apr 24, 2020

> …We could move the images to a new image group, as is done when using Augment Reconstruction…

That's exactly what I was thinking and suggesting.

I had the augment node loaded and it occurred to me that there was a specific CameraInit with a unique image set.

Unchaining ImageMatchingMultiSFM from the original graph and replacing it with a standard ImageMatching node gets you two separate graphs with unique image sets.

@skinkie

skinkie commented May 2, 2020

Wouldn't it already help a lot if you could register relative positions within an image set, without the need to redo the product of combinations across the entire set, and then mark the two (or more) images shared between the two image sets as connection points?

@nufanDK

nufanDK commented May 23, 2020

I'm having a somewhat similar issue. I have a top and a bottom half. I load images from one half, then augment with images from the second half, in the normal setup with the ImageMatchingMultiSFM node. In the SfM for the first half all cameras are registered, and the SfM shows fine for that half; the second half then registers fine in its own SfM node (through the ImageMatchingMultiSFM node), but only the second half. Thus it is not combining the two SfM nodes, so I figure it can't find the connection between the two sets.

But if I load the SfM node from the first half into the viewer, and then also load the SfM from the second half, they are aligned perfectly. I'm just missing a way to merge the two separate SfMs into a single one for further processing in DepthMap and Meshing. Does anyone know if this is possible?
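
There is no built-in node for that merge, but since the two reconstructions already land in the same frame here, one thing to try is stitching the two `cameras.sfm` files directly. A minimal sketch, assuming the AliceVision JSON layout with top-level `views`, `intrinsics` and `poses` arrays; file paths are placeholders, and whether downstream nodes accept the merged file is untested:

```python
import json

def merge_sfm(path_a: str, path_b: str, out_path: str) -> None:
    """Concatenate two AliceVision .sfm files that share one coordinate frame."""
    with open(path_a) as f:
        a = json.load(f)
    with open(path_b) as f:
        b = json.load(f)
    # Merge each section, keeping A's entry when both files contain the
    # same ID (the overlapping images appear in both reconstructions).
    for key, id_field in (("views", "viewId"),
                          ("intrinsics", "intrinsicId"),
                          ("poses", "poseId")):
        seen = {item[id_field] for item in a.get(key, [])}
        a[key] = a.get(key, []) + [
            item for item in b.get(key, []) if item[id_field] not in seen
        ]
    with open(out_path, "w") as f:
        json.dump(a, f, indent=4)

merge_sfm("half1/cameras.sfm", "half2/cameras.sfm", "merged_cameras.sfm")
```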

@skinkie

skinkie commented May 23, 2020

@nufanDK is it able to find a connection if you only load the "connection" side?

@NexTechAR-Scott (Author)

Why not load top and bottom at the same time?

I find Meshroom handles this very well if you provide overlap.

What is the object and what is your setup, turntable in a light box or "in the wild"?

@skinkie

skinkie commented May 23, 2020

@NexTechAR-Scott it still might not find the overlap because of a different initial image pair.

@NexTechAR-Scott (Author)

NexTechAR-Scott commented May 23, 2020

If you feed it a 3rd view it will:

0 degrees
180 degrees
90 degrees

And that's generally only needed if one side is more feature-rich than the other.

You can get away with just 0 and 180 degrees if the two sides have some parity in features.

@skinkie

skinkie commented May 23, 2020

@NexTechAR-Scott you miss the point. Given a closed-world assumption (hence no extra information), the image set in combination with the configured Meshroom parameters cannot find the right sequence that would build a path with overlap.

@nufanDK

nufanDK commented May 23, 2020

A lot of interesting points. I don't have a completely 0-degree angle, more like 15 degrees from both sides. My setup is turntable-ish: I've put the item on a uniformly colored table with a uniformly colored background (mimicking the lightbox method). So there is definitely a chance that the table is registered a little in both sets, and that is what causes the issue with the ImageMatchingMultiSFM node, but it just looks so right when I load them separately on top of each other in the viewer.
[screenshot: the two reconstructions combined in the viewer]

@nufanDK

nufanDK commented May 23, 2020

I was considering rotating and mirroring the images from one of the sides to better simulate that the object was hanging in midair when the images were captured, but this might make more sense in my mind than in actual real-world practice.

@NexTechAR-Scott (Author)

> @NexTechAR-Scott you miss the point. Given a closed-world assumption (hence no extra information), the image set in combination with the configured Meshroom parameters cannot find the right sequence that would build a path with overlap.

Not missing the point at all.

I’m the one who started this thread for exactly the same reason so I certainly understand the issue.

But until Meshroom is capable of handling this issue on its own users need some solution or workaround.

And what I have outlined above works every single time.

Have you tried it? Are you speaking from experience?

A lot of the “issues” users run into with every photogrammetry suite can be mitigated by modifying their capture technique.

@NexTechAR-Scott (Author)

> I was considering rotating and mirroring the images from one of the sides to better simulate that the object was hanging in midair when the images were captured, but this might make more sense in my mind than in actual real-world practice.

Have any clear plastic cups lying around?

Transparent won’t mesh

Elevate the object off the turntable

Enable keep largest mesh

@NexTechAR-Scott (Author)

[image attachment]

@skinkie

skinkie commented May 23, 2020

> A lot of the "issues" users run into with every photogrammetry suite can be mitigated by modifying their capture technique.

You are right, but here you step outside the closed-world assumption. Modifying a capture technique might mitigate the issue, so for @nufanDK it might be a solution given the current software (also a closed-world assumption). Both changing the technique and improving the software change the scope.

@nufanDK

nufanDK commented May 23, 2020

> I was considering rotating and mirroring the images from one of the sides to better simulate that the object was hanging in midair when the images were captured, but this might make more sense in my mind than in actual real-world practice.

> Have any clear plastic cups lying around?
>
> Transparent won't mesh
>
> Elevate the object off the turntable
>
> Enable keep largest mesh

Thank you for this, as well as the image! I was afraid too much distortion would happen through the transparent object, but I will definitely try something like this next time. Initially I had planned to suspend it in midair by a few wires, but it was too difficult to keep the item still.

@nufanDK

nufanDK commented May 24, 2020

As a follow-up, I tried importing the images of both sides into Meshroom as a single set. Standard settings only reconstructed 33 of 176 cameras, but enabling "Guided Matching" in FeatureMatching resulted in all 176 cameras (sides 1+2 combined) being reconstructed. The only caveat is that it took significantly longer to compute (approx. 10 hours on my laptop).

@natowi (Member)

natowi commented Feb 12, 2021

MergeMeshes node: #1271 could be helpful

@NexTechAR-Scott (Author)

> MergeMeshes node: #1271 could be helpful

Ohh, is this new?

Exciting.

Will check this out, thanks.

@flobotics

flobotics commented Apr 23, 2022

Hi @NexTechAR-Scott, did you get some results?

I don't get it to run successfully, and I also don't know how to really run it (where best to place the node, etc.) :)

Merging meshes into one big mesh would be great. You could focus on one area (e.g. one side of a house), build the textured mesh, and save it. Then you could go on with, e.g., another side of the house, build a textured mesh, and save it.

Then merge these two (or even more) meshes into one, if possible with all camera poses (so that you could do ExportAnimatedCamera for the one big mesh with all cameras).

When concentrating on one mesh (e.g. one side of a house) with, say, 50-100 pictures, the run time to get to the textured mesh (while changing parameters) is "nearly" acceptable for working on one mesh. If you need to put in all images for one big mesh, the number of images could easily be 10,000 or more, and waiting for that textured mesh is annoying :) Especially if not all camera poses are recognized and you need to redo something, change parameters, or add more images.
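
Outside of the MergeMeshes node, one quick way to experiment with combining exported partial meshes is the `trimesh` Python package. A minimal sketch with placeholder file names; it only makes sense once both meshes share one coordinate frame:

```python
import trimesh

# Load two partial reconstructions exported by Meshroom (paths are
# placeholders); trimesh reads OBJ, PLY and STL alike.
side_a = trimesh.load("house_side_a.obj", force="mesh")
side_b = trimesh.load("house_side_b.obj", force="mesh")

# Concatenate the vertex/face lists into one mesh. Note that this keeps
# the geometry only; Meshroom textures and camera poses are not carried
# over, so re-texturing would still have to happen elsewhere.
combined = trimesh.util.concatenate([side_a, side_b])
combined.export("house_combined.ply")
```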

@natowi (Member)

natowi commented Apr 24, 2022

@flobotics if you have overlapping cameras in your different meshes, you can align your meshes based on the common camera
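
To illustrate what aligning "based on the common cameras" can look like in practice: take the centers of the cameras shared by both reconstructions and estimate the similarity transform (the classic Umeyama method) that maps one set onto the other, then apply it to the second mesh. This is a numpy sketch with synthetic data, not an existing Meshroom feature:

```python
import numpy as np

def umeyama(src: np.ndarray, dst: np.ndarray):
    """Least-squares similarity transform (s, R, t) with dst ~ s * R @ src + t.
    src and dst are (N, 3) arrays of corresponding points."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0  # avoid a reflection; keep a proper rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(axis=0).sum()
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Synthetic example: four camera centers in frame A, and the same cameras
# in frame B (scaled by 2, rotated 90 degrees around z, shifted).
centers_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
centers_b = 2.0 * centers_a @ true_R.T + np.array([3.0, -1.0, 0.2])

s, R, t = umeyama(centers_b, centers_a)  # recovers s == 0.5 and the inverse motion
# Every vertex v of mesh B then maps into A's frame as s * R @ v + t.
```

In a real pipeline the camera centers for the shared images would be read from each reconstruction's cameras.sfm instead of the synthetic arrays above.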

@skinkie

skinkie commented Apr 24, 2022

> @flobotics if you have overlapping cameras in your different meshes, you can align your meshes based on the common camera

...would you be able to make an instruction video for this?

@flobotics

flobotics commented Apr 26, 2022

I took some photos of the left side of a table (the left table foot in the pictures) with 4 CCTag3 markers, then ran it through Meshroom with sift+cctag3. Then I added an SfMTransform node after the SfM node, set to "from_markers", with 4 markers and scale 0.885 (the 4 CCTag3 markers are 8.85 cm away from each other).

Then I took some photos of the right side of the same table (the right table foot in the pictures) with the same 4 CCTag3 markers in the same spots, and ran them through Meshroom with sift+cctag3 as well.

Then I added these nodes; inside the MergeMeshes node I needed to disable "Pre-Process" and "Post-Process" for it to work. The resulting STL file contains both table feet when opened in Blender.

[screenshots sfmtransform-6 through sfmtransform-9: the node graph setup]

But how do I texture this newly merged STL file with Meshroom?

On the left side is the Meshing node's mesh.obj; on the right side, the merged mesh with both table feet.

[screenshot sfmtransform-10: the two meshes side by side]
