[question] Masking throughout the whole pipeline #1261
Comments
Regarding undistortion, I found the following files: … I looked at the OpenCV tutorial, but I still can't figure out how to use the information exported to the text files to undistort the masks. I assume that …
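In case it helps, here is a minimal sketch of doing the mask undistortion manually with OpenCV. The file names and the layout of the exported text files (a 3×3 intrinsic matrix K and a k1 k2 p1 p2 k3 distortion vector) are assumptions for illustration, not what AliceVision actually writes, and AliceVision's own distortion models do not map one-to-one onto OpenCV's:

```python
# Minimal sketch: undistort a binary mask with OpenCV.
# Assumptions: camera_K.txt holds a 3x3 intrinsic matrix and camera_dist.txt
# holds OpenCV-style distortion coefficients (k1 k2 p1 p2 k3); all file names
# are placeholders for whatever the calibration export actually produced.
import numpy as np
import cv2

K = np.loadtxt("camera_K.txt").reshape(3, 3)
dist = np.loadtxt("camera_dist.txt").ravel()

mask = cv2.imread("mask_distorted.png", cv2.IMREAD_GRAYSCALE)

# Nearest-neighbour remapping keeps the mask strictly binary instead of
# introducing grey values at the edges, which cv2.undistort (bilinear) would do.
map1, map2 = cv2.initUndistortRectifyMap(
    K, dist, None, K, (mask.shape[1], mask.shape[0]), cv2.CV_32FC1)
undistorted = cv2.remap(mask, map1, map2, cv2.INTER_NEAREST)

cv2.imwrite("mask_undistorted.png", undistorted)
```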
Regarding the undistortion (rectification): this is performed by the "PrepareDenseScene" node, so you do not have to implement this yourself. As you can see in my workflow, I use a second "PrepareDenseScene" node to rectify the images before generating the masks from them, so the masks should fit the rest of the workflow.
On 08.02.2021 13:56, isolin wrote:
I saw several threads concerning masking. I am trying to accomplish a large-scale reconstruction from various sources, with moving objects (persons, cars) and other objects (like my own capturing equipment) that need to be removed. If I don't remove them, they already fool the SfM so much that it either fails or projects everything onto a sphere-like surface. So manual masking up front is the only way to go for me.
Unfortunately, #1097 states it is not yet in the release. I tried to compile AliceVision on my own (Ubuntu) but ultimately failed. So I decided to create my own masking pipeline.
On input I have .png images with binary masks stored in the alpha channel.
1. After FeatureExtraction, I remove the features that overlap with the respective masked areas ✔️
2. After DepthMapFilter, I set the masked regions of the depth maps to -1, inspired by the script from #566 (see the sketch below) ✔️
3. Texturing still uses the masked parts, producing a lot of artifacts ❗
I started to think about what I am doing wrong and I would appreciate help!
Hypothesis: As far as I can see, there is no undistortion applied in the script of @ALfuhrmann. That might cause the masks to no longer fit well. I will try to take that into account.
Hypothesis: What are the _simMap.exr and _nmodMap.png files? Should I apply masking to them as well?
Hypothesis: Is there anything else that would stop the Texturing node from sampling the images in the masked regions?
Thank you for any help and suggestions.
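For the depth-map step in the list above (setting masked regions to -1), here is a minimal sketch; it is not the actual script from #566 and assumes an OpenCV build with the EXR codec available, a mask that has already been undistorted, and placeholder file names:

```python
# Minimal sketch: set the masked regions of a filtered depth map to -1.
# Assumptions: OpenCV has EXR support, the mask is already undistorted, and
# both files belong to the same view; file names are placeholders.
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # EXR I/O is disabled by default in recent OpenCV

import cv2

depth = cv2.imread("view_depthMap.exr", cv2.IMREAD_UNCHANGED)  # float depth values
mask = cv2.imread("view_mask.png", cv2.IMREAD_GRAYSCALE)       # 0 = masked out

# Depth maps are usually computed at a reduced resolution, so bring the mask
# to the depth-map size with nearest-neighbour interpolation to keep it binary.
mask = cv2.resize(mask, (depth.shape[1], depth.shape[0]), interpolation=cv2.INTER_NEAREST)

depth[mask == 0] = -1.0  # -1 marks "no depth" for the subsequent meshing step
cv2.imwrite("view_depthMap_masked.exr", depth)
```

One caveat: rewriting the EXR like this can drop header metadata that later nodes may rely on, so a tool that copies the original header (e.g. OpenImageIO) is the safer choice; treat the snippet purely as an illustration of the masking itself.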
The "dist" parameter contains the distortion coefficients.
Thanks for the quick answer! 😄 Yes, in your workflow you generate the masks from undistorted images, so they are already undistorted. But in my workflow …
Since Meshroom unfortunately ignores the alpha channel (it sets the color to black instead), I need to undistort the (distorted) masks in addition to the standard …
Hm. The sparse reconstruction is the step that generates the camera calibration. But if you have an already calibrated camera, you can use the PrepareDenseScene node for rectifying the images; that's what it does.
No, I don't have a calibrated camera. But the issue is that the dynamic stuff on the one hand, and the "static" text rendered into the video (displaying the actual GPS coordinates) on the other, perfectly fool the implicit camera calibration. I figured out how to run the aliceVision binaries, so I will just use the …
Update: I was successful in undistorting my masks and setting … For the masked text, which is always at the bottom, I can just crop the image, but I would like to get a correct pipeline so that similar errors do not appear for regions that cannot be cropped, like people or reflective surfaces in the middle of the image.
For illustration, here are the white artifacts repeating in patterns as the camera flies over the landscape. One can also see some border around them. I assume it is caused by banding in the Texturing node. So I wonder if anyone knows how to make masking work with texturing?
@fabiencastan, is there a chance to see it in the upcoming release planned for this month? It is already a very old feature request (#188). I am sure it would make a huge impact on the quality once the door is opened for any sort of manual or automated preprocessing to mask out troublemakers like moving objects, useless background stuff, highly reflective and glossy surfaces, etc.
I love the concept of AliceVision! I think it does not need fancy nodes to do masking automatically (#750); one can use plenty of other tools for that. First and foremost it needs a great and reliable core. And for masking, it's just about handling alpha properly all over the pipeline, like the basis proposed in #715, plus …
It is definitely banding in the …
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue is closed due to inactivity. Feel free to re-open if new information is available.
Hi @isolin, have you by any chance come across a solution for this, or are you still using this method? I am segmenting objects using SAM and HED and would like to maybe back-project these masks onto my already textured mesh.
Nope, I didn't make it. I located the texturing code that I would need to edit in order to take masks into account, but I never found time for it.
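On the back-projection idea above: deciding whether a point of the mesh falls into a masked region of a given view is essentially a pinhole projection followed by a mask lookup. A minimal sketch, assuming per-view intrinsics K and pose R, t are available from the SfM result and using the world-to-camera convention x_cam = R·X + t (all names and values here are placeholders, not AliceVision's API):

```python
# Minimal sketch: project a 3D point into one view and sample the binary mask there.
# K, R, t and the mask are assumed inputs with placeholder values; the convention
# used is x_cam = R @ X + t (world -> camera).
import numpy as np

def point_is_masked(X, K, R, t, mask):
    """Return True if the 3D point X projects onto a masked (zero) pixel of this view."""
    x_cam = R @ X + t
    if x_cam[2] <= 0:                      # behind the camera: not observed in this view
        return False
    u, v, w = K @ x_cam
    px, py = int(round(u / w)), int(round(v / w))
    h, width = mask.shape[:2]
    if not (0 <= px < width and 0 <= py < h):
        return False                       # projects outside the image
    return bool(mask[py, px] == 0)         # 0 = masked out

# Dummy example data so the sketch runs stand-alone.
mask = np.full((1080, 1920), 255, dtype=np.uint8)
K = np.array([[1200.0, 0.0, 960.0], [0.0, 1200.0, 540.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
print(point_is_masked(np.array([0.1, -0.2, 3.0]), K, R, t, mask))
```

This only covers the geometric test; a real back-projection onto an already textured mesh would additionally need an occlusion check (e.g. a per-view depth test).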
Did you try the new masking node? https://github.com/alicevision/Meshroom/wiki/New-Features-in-Meshroom-2023.1#1-image-masking