
[question] CCTag workflow #907

Closed
Geksaida opened this issue May 21, 2020 · 26 comments

@Geksaida

I've been trying to figure out how to use CCTag markers and want to understand how to use them the proper way.

There are some photos of a featureless wall with markers on it. The idea was to see if Meshroom can localize cameras using only those markers.
IMG_6204

However, using CCTags alone didn't result in a reconstruction. Even with SIFT features it was failing at the SfM node until I added Akaze features. But using Akaze alone was enough to localize the cameras. So what is the point of CCTags?

I thought there must be some issue, and after looking through #716 I decided to use snapshot builds. I've tried the current and March snapshots, replacing the bin folder in aliceVision. None of them even allowed me to use CCTag3 as a describer type; the FeatureExtraction node simply failed. What am I doing wrong? What is the workflow for CCTags?

@natowi
Member

natowi commented May 23, 2020

Meshroom 2019.2 supports cctag3, but apart from detecting the markers, that's basically it. (You need to enable cctag in FeatureExtraction and the following nodes.)
The next release will bring some new features that make use of cctags, such as scaling and orientation.

@julianrendell

julianrendell commented May 23, 2020 via email

@natowi
Member

natowi commented May 23, 2020

@julianrendell Some features are already available in the dev branch (alicevision/AliceVision#695 & #652), but some are still being worked on.

@julianrendell

Very cool- thanks @natowi !

@r-a-i

r-a-i commented Oct 9, 2020

Hi. I'm now running the 2020.1.0 release and would like to know how to use CCTag3 for alignment and scaling.
I'm experimenting with this by enabling CCTag3 in the FeatureExtraction, FeatureMatching and StructureFromMotion nodes. Then I pipe the output from StructureFromMotion into SfMTransform and it computes successfully (the node turns green).
Now what?
My goal is to align all CCTag3 markers to a common plane (set all z-values to 0) and set the distance between two markers (ID1 and ID2) to 35 mm. I could set additional pair-wise distances between CCTag3 markers if that is allowed.

Can anyone provide some steps to get there?

@TR7

TR7 commented Oct 9, 2020

Hi. I'm now running the 2020.1.0 release and would like to know how to use CCTag3 for alignment and scaling.
Can anyone provide some steps to get there?

Hi r-a-i,
here is my workflow (but you may want to wait for an answer from natowi for a good Meshroom-based solution without additional programming):
I'm using the "ply" output of the StructureFromMotion node to save the calculated CCTAG3 positions together with the rest of the 3d point cloud. Then I open the ply file in Python (with open3d) and search for points with colors in the range (0,0,0) to (30,0,0). It seems like this color range is reserved for the detected CCTAG points (with the ID coded into the color, except for ID 31, which seems to collide with colors from the point cloud). With these cctag positions I wrote a custom Python script for further processing.
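For reference, a minimal sketch of that extraction step (untested, assuming the red-channel ID encoding described above; the file name is a placeholder):

```python
# Minimal sketch: pull the CCTag marker positions out of the StructureFromMotion
# .ply output. Assumes the marker ID is encoded in the red channel (0..30) as
# described above; "sfm.ply" is a placeholder path.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("sfm.ply")
points = np.asarray(pcd.points)
colors = np.rint(np.asarray(pcd.colors) * 255).astype(int)  # open3d stores colors in [0, 1]

# Reserved CCTag color range: red channel 0..30, green and blue exactly 0.
is_tag = (colors[:, 0] <= 30) & (colors[:, 1] == 0) & (colors[:, 2] == 0)

markers = {int(c[0]): p for p, c in zip(points[is_tag], colors[is_tag])}
for marker_id, position in sorted(markers.items()):
    print(f"CCTag id {marker_id}: {position}")
```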

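And a rough sketch of the kind of "further processing" this could feed (a hypothetical script, not an AliceVision feature), aimed at r-a-i's goal above: fit a plane to the extracted marker positions, rotate it onto z = 0, and scale so that two chosen markers end up 35 mm apart. The same rotation, centroid and scale could then be applied to the full point cloud or mesh.

```python
# Hypothetical post-processing sketch: put the markers' best-fit plane at z = 0
# and scale so two chosen markers are target_dist apart. Needs at least three
# non-collinear markers.
import numpy as np

def marker_alignment(markers, id_a=1, id_b=2, target_dist=35.0):
    """markers: {cctag_id: (x, y, z)}. Returns (R, centroid, scale) so that
    scale * R @ (p - centroid) puts the markers near the z = 0 plane with
    |marker id_a - marker id_b| == target_dist."""
    ids = sorted(markers)
    P = np.array([markers[i] for i in ids], dtype=float)
    centroid = P.mean(axis=0)

    # Best-fit plane normal = right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(P - centroid)
    n = vt[-1]

    # Rotation taking the plane normal onto the z axis (Rodrigues formula).
    z = np.array([0.0, 0.0, 1.0])
    v, c = np.cross(n, z), float(np.dot(n, z))
    if np.isclose(abs(c), 1.0):                      # normal already along +/- z
        R = np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    else:
        vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        R = np.eye(3) + vx + vx @ vx / (1.0 + c)

    Q = (P - centroid) @ R.T                         # markers now lie near z = 0
    scale = target_dist / np.linalg.norm(Q[ids.index(id_a)] - Q[ids.index(id_b)])
    return R, centroid, scale

# Usage with the `markers` dict extracted above:
# R, centroid, scale = marker_alignment(markers, id_a=1, id_b=2, target_dist=35.0)
# aligned = {i: scale * R @ (np.asarray(p) - centroid) for i, p in markers.items()}
```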
@natowi
Member

natowi commented Oct 11, 2020

@r-a-i @TR7 Hi there, I will continue my work to update the manual to the 2020.1 release starting at the end of next week. I have some notes on CCTAG markers and will share the details. At the moment I have some other work to do.

@natowi
Member

natowi commented Oct 21, 2020

@smallfly

smallfly commented Nov 20, 2020

Hi @natowi ,

We are trying to capture growing mushrooms. We have a jar on a turntable and do a complete capture every hour using two cameras, rotating the turntable in 10-degree steps between photos. We therefore have 72 photos per capture/hour, which we process in Meshroom using the CLI.
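(For what it's worth, a per-capture driver for this kind of hourly batch could be as simple as the sketch below; it assumes the meshroom_photogrammetry CLI from the 2020.x releases with its --input/--output options, and the folder names are placeholders.)

```python
# Rough sketch of a per-capture batch driver. Assumed setup: one folder of 72
# photos per hourly capture, and the meshroom_photogrammetry CLI on PATH.
import subprocess
from pathlib import Path

CAPTURES = Path("/data/mushrooms/captures")   # one sub-folder per hourly capture (placeholder)
RESULTS = Path("/data/mushrooms/results")     # placeholder output location

for capture in sorted(CAPTURES.iterdir()):
    if not capture.is_dir():
        continue
    out_dir = RESULTS / capture.name
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["meshroom_photogrammetry", "--input", str(capture), "--output", str(out_dir)],
        check=True,
    )
```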

We are trying to use CCTags to properly scale and rotate each mesh produced. So far we have not been successful. We have followed the instructions described here, but we get the error 'invalid number of image describer'.

Here are photos from camera 1 and 2.

Camera 1

Camera 2

Is the way we are using the CCTags correct?
The documentation says to place only two CCTags, but we have four of them; could that be an issue?

Here is a screen capture of our SfMTransform node. We have also enabled the DescriberType "CCTAG (3)" in our FeatureExtraction node.

Capture

Any help would be greatly appreciated.

@natowi
Member

natowi commented Nov 20, 2020

@smallfly yes, at the moment only two markers are supported for scaling. The error 'invalid number of image describer' is caused by the marker id 17.
If you followed my example, add only two markers in the settings: marker 0 (cctag id 0) -> x0 y0 z0 | marker 1 (cctag id 1) -> x1 y0 z0. This sets the x distance between the markers to 100mm.

Use an SfMTransform node with Transformation Method "auto_from_cameras" (give it a try) or "from_single_camera" with a regex such as *.jpg to apply the correct orientation to your model.

This node is followed by another SFMTransform node, now for cctag scaling.

This might be a good workflow for you (untested):

As output you would get multiple scaled, oriented and aligned models. (SfmAlignment method: cctags)

workfl

In case this is a larger project, you could even rewrite the template for "augment reconstruction" or "live reconstruction" to create your desired workflow in the GUI.
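One way to sanity-check the scaling node (a sketch, not a Meshroom feature): export the transformed SfM to a .ply, for example with a ConvertSfMFormat node, and reuse the color-based marker extraction from earlier in this thread to measure the distance between the two markers. Assuming the export keeps the same color coding, the distance should come out close to the x value entered for the second marker (1 unit, treated as 100mm in the example above).

```python
# Sketch: measure the marker-to-marker distance after the scaling SfMTransform.
# "transformed.ply" is a placeholder (e.g. the output of a ConvertSfMFormat node);
# assumes the CCTag color coding described earlier survives the export.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("transformed.ply")
points = np.asarray(pcd.points)
colors = np.rint(np.asarray(pcd.colors) * 255).astype(int)
is_tag = (colors[:, 0] <= 30) & (colors[:, 1] == 0) & (colors[:, 2] == 0)
markers = {int(c[0]): p for p, c in zip(points[is_tag], colors[is_tag])}

dist = np.linalg.norm(markers[0] - markers[1])
print(f"marker 0 -> marker 1 distance: {dist:.3f} scene units")  # expect roughly 1.0 here
```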

@smallfly

Thanks @natowi for the quick reply.

Marker id 17
Could you tell me more about why the marker id 17 is causing this error?

Orientation and Scaling
So if I understand correctly for the 'orientation' and the 'scaling' I need to do the following:

Pipeline

  1. I do the orientation step using an SfMTransform node with Transformation Method "auto_from_cameras".

Orientation

  2. I do the scaling step using a second SfMTransform node with only two of the markers, for example marker id 1 and marker id 2 (being the ID found in the filename of the marker - here).

Scale

I just tried this pipeline and I still get the same error 'Alignment from markers: Invalid number of image describer types: 0' in the node SFMTransform_2 (the one for the scaling). Here is the log in the console.

Tranform-Error

Find attached the status.txt and log.txt files from that step.

Alignment
For the alignment of several meshes I need to duplicate the 3D reconstruction steps of the pipeline for each mesh I want to align. Do I understand that correctly?

If this is the case, when should I be running the initial nodes of all the other data sets - 'CameraInit', 'FeatureExtraction', 'ImageMatching', 'FeatureMatching' and 'StructureFromMotion'? Should I have previously done the reconstruction of all my captures and then link to the data from the 'StructureFromMotion' node found in the MeshroomCache of each reconstruction?

Thank you!

@natowi
Member

natowi commented Nov 20, 2020

@smallfly 00001.pdf is actually cctag id 0 and 00002.pdf is cctag id 1.
Before chaining the nodes, make sure cctag scaling works.
Only two markers are supported for now, and you had previously defined id 17, id 0 and id 1.
It is best to write the marker id (pdf number - 1) as a small number on the cctag cardboard.

Now you have defined marker id 1 and id 2 --> 00002.pdf and 00003.pdf.

You need to test which Transformation Method in the SfMTransform node works best for you (it depends on your workflow).
If you use my proposed GUI workflow, you can just do the orientation for the first SfM and later align the other SfMs to it.

Make sure you enabled CCTAG in Feature Extraction

For the alignment, open the first reconstruction in the GUI and create the other nodes. Then insert the paths to the computed SfM folders of the other reconstructions. You could do your reconstructions from CameraInit to StructureFromMotion from the CLI and later scale/orient and align the SfM outputs in a new Meshroom project.

There is not only one solution; it is best to experiment to see what workflow works best for you.

I'll see if I can find some time to do a more advanced tutorial on the topic, but no promises.

@smallfly

@natowi thanks for the details and updates to the documentation.

I just changed the marker IDs to id 0 and id 1, and made sure that only cctag3 is selected in the 2nd SfMTransform node.
I'm still getting the same error though.

@natowi
Member

natowi commented Nov 20, 2020

Best share a few images for testing

@smallfly

@natowi here is one complete capture - 72 photos.

@natowi
Member

natowi commented Nov 20, 2020

@smallfly make sure cctags were extracted by checking the feature overlay in the image viewer.

I used six files from your dataset for testing: 20201117-190001_R0.0_C0 - 20201117-190001_R50.0_C0

As this is a small dataset with few features, set the Describer Preset in FeatureExtraction to ultra and enable CCTAG3 (optional: akaze).
Enable the CCTAG3 describer in FeatureMatching and StructureFromMotion - this is required for CCTAG-based scaling.

Adding SfmTransform (auto_from_cameras) + SfmTransform (from_markers, as described before / manual alignment) will result in a decent alignment and scaling.

By the way, it looks like you used the cctag markers id 0, id 1, id 2 and id 4.

By experimenting with the marker settings I was able to orient the model to the grid based on the cctags (better in this case than with the SfMTransform node's "from camera" method, since your cameras have a tilt).

scr

You need to do some tests and reconstruction parameter optimizations. Maybe put some textured paper on the glass to provide some more features.

I'd recommend trying out different settings with a subset of your dataset.

@smallfly

@natowi Thank you for all this detailed information and the tests. I will look at all of it in detail when I'm back home.

Would you be open to sharing the Meshroom graph you used for this test?

Thanks again!

@natowi
Member

natowi commented Nov 21, 2020

Nothing fancy. All the node settings are described above

@smallfly

Hi @natowi

I think I have set up everything as you explained. I'm still getting the same error.

Here are some details/screencaptures as well as the Meshroom Graph.

The graph showing the detected markers in the picture overlay, and the log of the SFMTransform2 with the error.
20201121-MG

The settings for the FeatureExtraction node
20201121-FeatureExtraction

The settings for the FeatureMatching node
20201121-FeatureMatching

The settings for the 1st SFMTransform node
20201121-SFMTransform

The settings for the 2nd SFMTransform node
20201121-SFMTransform2-01

20201121-SFMTransform2-02

@natowi
Member

natowi commented Nov 21, 2020

I think you forgot to select cctag in the StructureFromMotion (SFM) node (and if you select akaze in FeatureExtraction, you have to select akaze in all following nodes up to the last node where you want to use akaze; if you don't select akaze in SFM but do select it in the following node, it will fail).
features
(This will show the detected Features in FeatureExtraction / FeatureMatching / StructureFromMotion)

Don't start with a complicated workflow on your first go. Start with the variant with two cctag markers.
If you use multiple cctags for scaling and orientation, you can skip the previous orientation node.
Only add akaze if you really need it. Details: https://github.com/alicevision/meshroom/wiki/Reconstruction-parameters

I'll update the wiki with the info on how to use multiple markers for scaling and orientation.

@smallfly

smallfly commented Nov 21, 2020

@natowi you are right, I mixed up SFM (StructureFromMotion) with SfMTransform.

I have selected cctag in the StructureFromMotion (SFM) node. I still do not get any third value in the picture overlay though, and I now get a new error in SfMTransform2: 'Failed to find marker:2'.

20201120-SFM_Settings-Picture_overlay

20201120-SFMTransform2_Log-Error

  • Do all markers (in my case 4 of them) always need to be visible in every photo?
  • Can I use only 2 markers in my graph but still have 4 of them on the jar, or will that create problems, meaning I should only have 2 markers installed on the jar?

@natowi
Member

natowi commented Nov 21, 2020

Not all markers were detected in the small test dataset. In the full dataset all markers should be detected. I had some (other) issues where some cameras were not reconstructed. This is caused by the background, which is not fully featureless, and could be corrected with some parameter tweaking. It can also be the cause of the missing marker. As I said, your setup has some flaws.

Here is the simple test setup with the six images (series 20201117-190001_R0.0_C0 to 20201117-190001_R50.0_C0).
Only three markers are detected reliably in this case: 0, 1, 4.
It is hard to know the marker placement on site as you don't have the id written next to the markers. The feature to show the marker id in Meshroom is not yet implemented, so it is a little bit tricky to find the correct settings.
Basically you assign the 3d coordinates for the four markers. If the four markers are placed at equal distances in a square, you can provide the corresponding 3d coordinates in the settings (a hypothetical coordinate layout is sketched after the axis note below). For the three detected markers I chose the following settings, which scale and orient the model with a marker distance of 100mm. As you can see, the size of one square in the 3d viewer grid is 1 (100mm).

cctagwww

3d viewer axes: x = red, y = green, z = blue; the line indicates the positive direction.
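For illustration, a hypothetical version of such a coordinate assignment (the exact values from the screenshot are not reproduced here): the three detected markers placed on the corners of a square with a side of 1 grid unit, i.e. 100mm.

```python
# Hypothetical marker coordinates (in grid units, 1 unit = 100 mm as noted above)
# for three markers on the corners of a square; not the exact screenshot values.
import numpy as np

marker_coords = {
    0: np.array([0.0, 0.0, 0.0]),
    1: np.array([1.0, 0.0, 0.0]),   # 100 mm from marker 0 along x
    4: np.array([0.0, 1.0, 0.0]),   # 100 mm from marker 0 along y
}

# Pairwise distances: the square sides come out as 1.0 (100 mm), the diagonal as ~1.414.
for a in marker_coords:
    for b in marker_coords:
        if a < b:
            d = np.linalg.norm(marker_coords[a] - marker_coords[b])
            print(f"marker {a} <-> marker {b}: {d:.3f} units ({d * 100:.0f} mm)")
```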

@MarcoRos75

Hi! I am also trying to work with CCTags in Meshroom! I want to use the cctags in a photogrammetric survey of a road (50 m long), so the cctags are distributed along it and are not all visible in a single photo, only in sequence. Can that be a problem, or is it still fine for their automatic detection? I would also like some clearer guidance on how to provide the coordinates in the SfMTransform node... I usually work in meters... what unit of measurement does the node use?

@natowi
Member

natowi commented Nov 24, 2020

This issue is closed and everything important has been added to the documentation. For similar questions please open a new issue.

@MarcoRos75 I moved your question to a new issue #1163

@smallfly

@natowi Thank you for all your help! This is now scaling and rotating all the meshes properly.

One last question I would ask here is about the specific CCTags we are using:
We are currently only setting 3 markers (of the 4 we have on our jar) in the SfMTransform - 0, 1, 4. If we add the 4th one (marker ID 2), the process exits with the error 'Failed to find marker:2'. Why is that? Does that mean that this marker was never found in any of the photos?

We will for sure have more questions about this pipeline and also about all the other settings available. All of that will go in other threads.

Thanks again.

@natowi
Member

natowi commented Nov 25, 2020

@smallfly You can find the solution here: https://groups.google.com/g/alicevision/c/t4kcSceAFD4
Also make sure the background of your turntable setup is absolutely blank without gaps at the edges. The detected features at the edges can mess up the reconstruction.
