[question] CCTag workflow #907
Meshroom 2019.2 supports cctag3, but apart from detecting the markers, that's basically it. (You need to enable CCTag in FeatureExtraction and the following nodes.)
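For reference, that step can also be driven from the standalone AliceVision binaries that ship with Meshroom. A minimal sketch, assuming the `aliceVision_featureExtraction` tool and its `--describerTypes`/`--describerPreset` options behave as in the 2020 builds (verify with `--help` on your build):

```python
# Minimal sketch: run feature extraction with CCTag3 enabled next to SIFT.
# Flag names and the cameraInit.sfm cache path are assumptions based on the
# 2020 AliceVision builds; check `aliceVision_featureExtraction --help` first.
import subprocess

subprocess.run(
    [
        "aliceVision_featureExtraction",
        "--input", "MeshroomCache/CameraInit/<uid>/cameraInit.sfm",  # hypothetical cache path
        "--output", "features/",
        "--describerTypes", "sift,cctag3",   # CCTag3 must also stay enabled in the later nodes
        "--describerPreset", "normal",
    ],
    check=True,
)
```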
How do you enable the CCTags bounding box and scaling?
I’ve managed to build trunk, but I’m still woefully uneducated as to how to configure things! Being able to hint at the bounding box/ground plane and scaling might help my attempts at getting cleaner results.
> On May 23, 2020, natowi wrote: Meshroom 2019.2 supports cctag3, but apart from detecting the markers, that's basically it. (You need to enable cctag in feature extraction and following nodes.) The next release will bring some new features that make use of cctags, like scaling and orientation.
@julianrendell Some features are already available in the dev branch (alicevision/AliceVision#695 & #652), but some features are still being worked on.
Very cool - thanks @natowi!
Hi. I'm now running the 2020.1.0 release and would like to know how to use CCTag3 for alignment and scaling. Can anyone provide some steps to get there?
Hi r-a-i,
Hi @natowi, we are trying to capture growing mushrooms. We have a jar on a turntable and we do a complete capture every hour using two cameras, rotating in steps of 10 degrees between photos. We therefore have 72 photos per capture/hour, which we process in Meshroom using the CLI. We are trying to use CCTags to properly scale and rotate each mesh produced, but so far we have not been successful. We have followed the instructions described here, but we get the error 'invalid number of image describer'. Here are photos from camera 1 and 2. Is the way we are using the CCTags correct? Here is a screen capture of our SfMTransform node. We also have enabled the DescriberType "CCTAG (3)" in our FeatureExtraction node. Any help would be greatly appreciated.
@smallfly Yes, at the moment only two markers are supported for scaling. The error 'invalid number of image describer' is caused by the marker id 17. Use an SfMTransform node with Transformation Method "auto_from_cameras" (give it a try) or "from_single_camera" with a regex such as *.jpg to apply the correct orientation to your model. This node is followed by another SfMTransform node, now for CCTag scaling; a sketch of the two nodes follows below. This might be a good workflow for you (untested). As output you would get multiple scaled, oriented and aligned models (SfMAlignment method: cctags). In case this is a larger project, you could even rewrite the template for "augment reconstruction" or "live reconstruction" to create your desired workflow in the GUI.
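Sketched as plain node parameters (spellings follow the Meshroom 2020.1 GUI; treat the exact names and the marker coordinates as assumptions to adapt, not a tested configuration):

```python
# Sketch of the two SfMTransform steps described above, written out as plain
# Meshroom node parameters. Parameter spellings and marker coordinates are
# assumptions / placeholders.
sfm_transform_orientation = {
    "method": "auto_from_cameras",          # or "from_single_camera" with a *.jpg regex
}

sfm_transform_scaling = {
    "method": "from_markers",
    "landmarksDescriberTypes": ["cctag3"],  # only CCTag landmarks drive the scaling
    "markers": [
        {"markerId": 0, "coord": (0.0, 0.0, 0.0)},   # hypothetical positions:
        {"markerId": 1, "coord": (0.1, 0.0, 0.0)},   # two markers 10 cm apart
    ],
}
```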
Thanks @natowi for the quick reply. [screenshots: Marker id 17; Orientation and Scaling]
I just tried this pipeline and I still get the same error, 'Alignment from markers: Invalid number of image describer types: 0', in the SFMTransform_2 node (the one for the scaling). Here is the log from the console; find attached the status.txt and log.txt files from that step. Regarding the alignment: if this is the case, when should I be running the initial nodes of all the other data sets ('CameraInit', 'FeatureExtraction', 'ImageMatching', 'FeatureMatching' and 'StructureFromMotion')? Should I have previously done the reconstruction of all my captures and then link to the data from the 'StructureFromMotion' node that I find in the MeshroomCache of each reconstruction? Thank you!
@smallfly 00001.pdf is actually cctag id 0 and 00002.pdf is cctag id 1, so by defining marker id 1 and id 2 you pointed at 00002.pdf and 00003.pdf. You need to test which Transformation Method in the SfMTransform node works best for you (it depends on your workflow). Make sure you enabled CCTAG in FeatureExtraction. For the alignment, open the first reconstruction in the GUI and create the other nodes, then insert the paths to the computed sfm folders of the other reconstructions (see the sketch below). You could do your reconstructions from CameraInit to StructureFromMotion from the CLI and later scale/orient and align the sfm outputs in a new Meshroom project. There is not only one solution; you had best experiment to see what workflow works best for you. I'll see if I can find some time to do a more advanced tutorial on the topic, but no promises.
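For example, a small helper for gathering those paths, assuming each capture was processed into its own MeshroomCache folder (the layout is an assumption; adapt the glob to wherever your CLI runs write their results):

```python
# Sketch: list the per-capture StructureFromMotion results so their paths can
# be pasted into the alignment/transform nodes of a new Meshroom project.
# The captures/<capture>/MeshroomCache/... layout is an assumption.
from pathlib import Path

captures_root = Path("captures")   # hypothetical root: one subfolder per hourly capture
for sfm_file in sorted(captures_root.glob("*/MeshroomCache/StructureFromMotion/*/sfm.abc")):
    print(sfm_file)
```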
@natowi Thanks for the details and updates to the documentation. I just changed the marker IDs to id 0 and id 1, and made sure that only cctag3 is selected in the 2nd SFMTransform node.
Best share a few images for testing.
@smallfly Make sure the cctags were extracted by checking the feature overlay in the image viewer. I used six files from your dataset for testing: 20201117-190001_R0.0_C0 to 20201117-190001_R50.0_C0. As this is a small dataset with few features, in FeatureExtraction set the Describer Preset to ultra and enable CCTAG3 (optional: akaze). Adding SfMTransform (auto_from_cameras) + SfMTransform (from_markers, as described before / manual alignment) will result in a decent alignment and scaling. By the way, it looks like you used the cctag markers id0, id1, id2 and id4. By experimenting with the marker settings I was able to orient the model to the grid based on the cctags (better in this case than with the SfMTransform "from camera" method, since your cameras have a tilt). You need to do some tests and reconstruction parameter optimizations. Maybe put some textured paper on the glass to provide some more features. I'd recommend trying out different settings with a subset of your dataset (see the sketch below).
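One way to build such a subset, assuming the 20201117-190001_R<angle>_C<camera> naming seen in the shared files (file extension and step size are placeholders):

```python
# Sketch: copy every 5th view of camera 0 into a small test folder before
# tuning FeatureExtraction/StructureFromMotion parameters on the full capture.
import shutil
from pathlib import Path

src = Path("captures/20201117-190001")   # hypothetical capture folder
dst = Path("captures/test_subset")
dst.mkdir(parents=True, exist_ok=True)

for i, image in enumerate(sorted(src.glob("*_C0.jpg"))):   # adjust pattern/extension
    if i % 5 == 0:                                         # keep every 5th view
        shutil.copy(image, dst / image.name)
```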
@natowi Thank you for all this detailed information and the tests. I will look at all of it in detail when I'm back home. Would you be open to sharing the Meshroom graph you used for this test? Thanks again!
Nothing fancy. All the node settings are described above.
Hi @natowi, I think I have set up everything as you explained, but I'm still getting the same error. Here are some details/screen captures as well as the Meshroom graph: the graph, the detected markers in the picture overlay, the log of the SFMTransform2 node with the error, and the settings for the FeatureExtraction node, the FeatureMatching node and the 1st SFMTransform node.
I think you missed selecting cctag in the StructureFromMotion (SFM) node. (If you select akaze in FeatureExtraction, you have to select akaze in all following nodes up to the last node where you want to use akaze; when you don't select akaze in SFM but do in a following node, it will fail. A consistent setup is sketched below.) Don't start with a complicated workflow on your first go; start with the variant with two cctag markers. I'll update the wiki with the info on how to use multiple markers for scaling and orientation.
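As a sanity check, the describer selection should read consistently across the chain, something like the following (node and parameter names are GUI spellings and should be treated as assumptions):

```python
# Sketch of a consistent describer setup: whatever is enabled in
# FeatureExtraction must stay enabled in every node that consumes it,
# up to the last node where it is needed.
DESCRIBERS = ["sift", "akaze", "cctag3"]

node_settings = {
    "FeatureExtraction":   {"describerTypes": DESCRIBERS},
    "FeatureMatching":     {"describerTypes": DESCRIBERS},
    "StructureFromMotion": {"describerTypes": DESCRIBERS},  # dropping akaze here while a later node uses it would fail
    "SfMTransform":        {"landmarksDescriberTypes": ["cctag3"]},  # the scaling step uses CCTags only
}
```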
@natowi You are right, I mixed up SFM (StructureFromMotion) with SfMTransform. I have now selected cctag in the StructureFromMotion (SFM) node. I still do not get any 3rd value in the picture overlay, though, and I now get a new error in the SfMTransform2 node: 'Failed to find marker:2'.
Not all markers were detected in the small test dataset; in the full dataset all markers should be detected. I had some (other) issues where some cameras were not reconstructed. This is caused by the background, which is not fully featureless, and could be corrected with some parameter tweaking; it can also be the cause of the missing marker. As I said, your setup has some flaws. Here is the simple test setup with the six images (series 20201117-190001_R0.0_C0 to 20201117-190001_R50.0_C0). 3D viewer axes: x = red, y = green, z = blue; the line indicates the positive direction.
Hello! I am also trying to work with CCTags in Meshroom. I wanted to use the cctags in a photogrammetric survey of a road (50 m long), so the cctags are spread out and are not all visible in a single photo, only in sequence. Can this be a problem, or is it still fine for their automatic recognition? I would also like clearer directions on how to provide the coordinates in the SfMTransform node. I usually work in metres; which unit of measurement does the node use?
This issue is closed and everything important has been added to the documentation. For similar questions please open a new issue. @MarcoRos75 I moved your question to a new issue: #1163.
@natowi Thank you for all your help! This is now scaling and rotating all the meshes properly. One last question I would ask here is about the specific CCTags we are using. We will for sure have more questions about this pipeline and also about all the other settings available; all that will go in other threads. Thanks again.
@smallfly You can find the solution here: https://groups.google.com/g/alicevision/c/t4kcSceAFD4
I've been trying to figure out how to use CCTag markers and want to understand how to use them the proper way.
I have some photos of a featureless wall with markers on it. The idea was to see if Meshroom can localize the cameras using only those markers.
However, using CCTags alone didn't result in a reconstruction. Even with SIFT features it was failing at the SfM node until I added AKAZE features, yet using AKAZE alone was enough to localize the cameras. So what is the point of CCTags?
I thought there must be some issue and, looking through #716, decided to use snapshot builds. I've tried the current and March snapshots, replacing the bin folder in aliceVision. None of them even allowed me to use CCTag3 as a describer type; the FeatureExtraction node simply failed. What am I doing wrong? What is the workflow for CCTags?