
Reconstruction from videos #232

Closed
AdityaSrivastava2819 opened this issue Sep 3, 2018 · 54 comments

@AdityaSrivastava2819

So, I am just asking if Meshroom supports reconstruction from videos. And please make a forum and start using that YouTube channel of yours.

@andybak

andybak commented Sep 3, 2018

I'd be interested in this. I understand image quality is a major factor in good reconstruction but there's an element of "quantity over quality" that can come into play and I know there's been active research on using "lots of frames" to make up for the lower quality and resolution of each individual frame.

I also wonder if the inherently temporal nature of video can be used to improve feature matching. Assuming no straight cuts in the video then the software can assume that each image is only slightly spatially different to the previous one. There's a lot of information (or at least optimisations) that could be derived from this.
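The temporal idea above can be illustrated with a small sketch (a hypothetical optimisation, not anything Meshroom actually does): since consecutive video frames are only slightly different, each frame would only need to be matched against its k temporal neighbours instead of against every other frame.

```python
# Sketch of exploiting frame ordering for feature matching (hypothetical,
# not Meshroom's actual matcher): restrict candidate image pairs to a
# sliding temporal window instead of testing every pair.

def sequential_pairs(num_frames, window=5):
    """Candidate (i, j) frame pairs restricted to a temporal window."""
    pairs = []
    for i in range(num_frames):
        for j in range(i + 1, min(i + 1 + window, num_frames)):
            pairs.append((i, j))
    return pairs

# Exhaustive matching of 100 frames would test 4950 pairs;
# a window of 5 keeps only 485.
print(len(sequential_pairs(100)))  # → 485
```

The pair count then grows linearly with the number of frames instead of quadratically, which is what would make "lots of frames" tractable.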

@AdityaSrivastava2819
Author

It would also eliminate the need to take so many pics. One video from all angles and done. And do you know where I can find the Meshroom docs?

@andybak

andybak commented Sep 3, 2018

(This is a bit beyond my technical comprehension level but definitely seems relevant: https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Resch_Scalable_Structure_From_2015_CVPR_paper.pdf )

@ChemicalXandco
Contributor

This may sound dumb, but why not split a video into pictures, then cherry-pick the ones that are good/not blurry and use those?

@andybak

andybak commented Sep 3, 2018

Yeah. On a simple level that would do the trick.

The least problematic part of that approach is the "split a video into pictures" step.

Cherry-picking would soon become awfully time-consuming with a few minutes of footage, so it would be great if this step weren't necessary.

But the ideal situation would be:

  1. Totally automated workflow. No need to split to images
  2. Feature detection optimized for the situation where there's a lot of photos but each photo is going to be lower quality. I think it's probably going to be a slightly different set of trade-offs.
  3. Finally (and I realise this is a massive feature so I'm not expecting anyone to actually implement this in the near future) deriving some benefit from the sequential nature of video over and above what's possible when the source images aren't similarly constrained.
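The cherry-picking in point 2 could be automated with a simple sharpness score. A minimal pure-Python sketch follows (real pipelines would use OpenCV's cv2.Laplacian on actual images): blurry frames have little high-frequency content, so the variance of the Laplacian response is low.

```python
# Sketch of automatic blur filtering: score each frame by the variance of
# its Laplacian response and keep only frames above a threshold.

def sharpness(gray):
    """Variance of the 4-neighbour Laplacian over a 2D grayscale image."""
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A flat (featureless/blurry) patch scores 0; a patch with edges scores high.
flat = [[128] * 5 for _ in range(5)]
edgy = [[0, 0, 255, 0, 0]] * 5
print(sharpness(flat), sharpness(edgy))
```

Frames below a chosen sharpness threshold would simply be dropped before reconstruction; the threshold itself depends on the footage and is left as a tuning parameter.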

@AdityaSrivastava2819
Author

AdityaSrivastava2819 commented Sep 3, 2018 via email

@ChemicalXandco
Contributor

ChemicalXandco commented Sep 4, 2018

Zephyr does have an 'Import Pictures from video' button (never used it though). It also has 'Blurriness auto filtering'.
It would be nice if Meshroom implemented this kind of workflow for people who prefer video; personally, I prefer pictures.

@fabiencastan
Member

@andybak : Thanks for the interesting link. We don't have this level of integration for video but I agree that this is an interesting area.

There is an experimental node for keyframe selection in a video, which removes too similar or too blurry images. This node is not yet provided in the binaries as it introduces many dependencies.

So if you build it yourself, you can test the KeyframeSelection node. It is not yet fully integrated into Meshroom, so you have to manually drag & drop the exported frames to launch the reconstruction (instead of just adding a connection in the graph).

If someone is interested in contributing to a deeper integration of videos, we would be glad to provide assistance.

@AdityaSrivastava2819
Author

AdityaSrivastava2819 commented Sep 4, 2018 via email

@TristanHehnen

As a workaround, as long as video files are not natively supported by Meshroom, one could extract the individual frames using ffmpeg, as demonstrated here.

@PeterTheOne

I tried the process of extracting frames with ffmpeg and using them as source. Take a look at my twitter threads showing the workflow from video to 3d model: Stone and Statue.

@altaic

altaic commented May 7, 2019

There are a number of video-specific algorithms with publications and (mostly) working code. Off the top of my head, Elastic Fusion comes to mind. I know of several others that I can list some other time.

@jarble

jarble commented Jun 10, 2019

It would be relatively easy to reconstruct a volumetric video from multiple videos that were recorded using a stereo camera system. You would only need to combine the first frame of each video, then the second frame, and so on, for every simultaneous frame in each video. Each video would need to have exactly the same number of frames, starting and ending at the same time.

Reconstructing a 3D model from a single video might also be possible, since you would only need to select several frames from the video to be used as input. There are several software libraries for monocular SLAM that can reconstruct 3D models in real-time while tracking the camera's position.
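The per-frame grouping described above can be sketched as follows; the file names are hypothetical and the helper (group_by_frame) is only an illustration of the bookkeeping, not an actual reconstruction step:

```python
# Group the i-th frame of every synchronized camera together, so each group
# can be fed to a reconstruction as one multi-view set.

def group_by_frame(*camera_frame_lists):
    """Zip per-camera frame lists into per-timestamp groups. All cameras
    must have the same frame count, as noted above."""
    counts = {len(frames) for frames in camera_frame_lists}
    if len(counts) != 1:
        raise ValueError("all videos must have the same number of frames")
    return list(zip(*camera_frame_lists))

left = ["left_0001.jpg", "left_0002.jpg"]
right = ["right_0001.jpg", "right_0002.jpg"]
print(group_by_frame(left, right))
# → [('left_0001.jpg', 'right_0001.jpg'), ('left_0002.jpg', 'right_0002.jpg')]
```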

@stale

stale bot commented Oct 8, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale for issues that becomes stale (no solution) label Oct 8, 2019
@nicofirst1

Any update on this?

@stale stale bot removed the stale for issues that becomes stale (no solution) label Oct 10, 2019
@fabiencastan
Member

The KeyframeSelection node has been included in the binary release of Meshroom 2019.2.

@nicofirst1

And how can I add it to the default graph?

@fabiencastan
Member

For now, you have to create a KeyframeSelection node (right click in the graph editor) and compute it separately. Then you can import the result of the KeyframeSelection into Meshroom by usual drag&drop of the extracted images.

@nicofirst1

Thank you

@nicofirst1

When running the node I get


terminate called after throwing an instance of 'std::runtime_error'
  what():  Failed to load vocabulary tree file
Aborted (core dumped)

Where can I find such a file?

@natowi
Member

natowi commented Oct 10, 2019

vlfeat_K80L3.SIFT.tree from https://gitlab.com/alicevision/trainedVocabularyTreeData/raw/master/vlfeat_K80L3.SIFT.tree

KeyframeSelection side note: not all video formats are supported. mp4 is known to work.

@fabiencastan
Member

fabiencastan commented Oct 10, 2019

Create an ImageMatching node, and copy/paste the value of the "tree" param. It's already in the release files.

@fabiencastan
Member

In fact, it's supposed to have the correct default value already...

@nicofirst1

Actually when I create a new ImageMatching node the Tree parameter seems to be empty

@fabiencastan
Member

That's strange. Are you using the release? Which platform?

@nicofirst1

I'm using the release from here on Ubuntu 18

@fabiencastan
Member

And is the "tree" param of the ImageMatching also empty in the default graph?

@nicofirst1

OK, I found the error: apparently I had cloned the Meshroom repository into the same folder in which I downloaded the release. I removed the cloned repo and now it is working fine.

@natowi
Member

natowi commented Oct 10, 2019

How about placing the KeyframeSelection node before the CameraInit node?
Once the KeyframeSelection node completes we know how many images we have and can hand them over to CameraInit.
Having the KeyframeSelection node in the graph could add a condition to the render farm submitter that the KeyframeSelection node is processed first, ignoring the rest of the graph. Once KeyframeSelection completes, the rest of the graph could be submitted.

In my opinion being able to actually include the node in a graph would be better (and more intuitive) than some stand-alone node.


@fabiencastan
Member

Currently, when I submit to the renderfarm, all uncomputed nodes are submitted in the same way.
If we really want to do that, we need to have different types of nodes, lock the submit when these nodes are not computed, upgrade the topology when the computation is done, etc. It adds complexity to the logic and is also not so obvious from an end-user perspective.

@natowi
Member

natowi commented Oct 10, 2019

I understand. So it is probably best to remove the KeyframeSelection node and move it to a dedicated menu. I guess we will have the same problem with aliceVision_utils_split360Images, so we might want to create an "add data" menu for adding images from a path, splitting 360° images and importing videos.

@fabiencastan
Member

Another way would be to have the logic in this kind of node provide the maximum number of images that could be extracted and use that all along the pipeline, so we can keep all the current behaviors.

@natowi
Member

natowi commented Oct 10, 2019

Yes, this was also something I thought of, but I assumed the maximum number of images could become really huge in some cases.

Here is a workaround I am using at the moment:
I have copied the output folder path of KeyframeSelection and set it as the Live Reconstruction image folder path. Then I start watching the folder and execute the graph.

@nicofirst1

By the way, do you know why I get this error?

[18:38:24.527543][warning] Unable to open the video : /home/dizzi/InstallationPackages/Meshroom-2019.2.0/Saves/church.mp4
terminate called after throwing an instance of 'std::invalid_argument'
 what():  Unable to open the video : /home/dizzi/InstallationPackages/Meshroom-2019.2.0/Saves/church.mp4
Aborted (core dumped)

The path is copy-pasted from Nautilus.

@natowi
Member

natowi commented Oct 17, 2019

@nicofirst1

I already provided the full path as:
/home/dizzi/InstallationPackages/Meshroom-2019.2.0/Saves/church.mp4

I even tried dragging and dropping the video into the media attribute with the same result

@natowi
Member

natowi commented Oct 17, 2019

@nicofirst1 Hmm, no idea. You could try running aliceVision_utils_keyframeSelection from the cli.

@nicofirst1

You mean the one in ./aliceVision/bin? If so, I tried running them but I get this error:

./aliceVision/bin/aliceVision_utils_keyframeSelection: error while loading shared libraries: libaliceVision_keyframe.so.2: cannot open shared object file: No such file or directory

Same thing with version 2
./aliceVision/bin/aliceVision_utils_keyframeSelection-2.0: error while loading shared libraries: libaliceVision_keyframe.so.2: cannot open shared object file: No such file or directory

@natowi
Member

natowi commented Oct 17, 2019

@nicofirst1 Ok, does Meshroom work with imported images? You could try this.

@nicofirst1

I installed Meshroom from the binaries. The default pipeline works without error. I can import images/directories with either drag & drop or by specifying a path, and no error is raised.

@lpla

lpla commented Jan 21, 2020

I am having the same issue that @nicofirst1 reported, with [warning] Unable to open the video : . I called aliceVision/bin/aliceVision_utils_keyframeSelection-2.0 from the command line, after LD_LIBRARY_PATH=/home/lpla/Meshroom-2019.2.0/aliceVision/lib ; export LD_LIBRARY_PATH, on Ubuntu Server 18.04.

Any clue?

@simogasp
Member

keyframeSelection relies on OpenCV for reading the video, and normally the release version is built with OpenCV support. So most likely the codec is not recognized by OpenCV. Can you share the details (codec name) of the video you are trying to use?
Maybe installing the library corresponding to that codec could help fix the problem, but at that point I'm afraid you would also need to rebuild OpenCV to take it into account.

So the only way is to convert the video into a sequence of images in a directory and feed that directory as input (I know, if the video is long, it's going to be a waste of space...). If you have ffmpeg installed you can do it from the command line:
https://en.wikibooks.org/wiki/FFMPEG_An_Intermediate_Guide/image_sequence#Making_an_Image_Sequence_from_a_video
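The extraction described in the linked guide can be scripted like this (a sketch: the video name, output directory, and the 2 fps rate are example values, and ffmpeg itself must be installed for the command to actually run):

```python
# Build an ffmpeg command that dumps frames of a video into numbered JPEGs.
import shlex

def extract_frames_cmd(video, out_dir, fps=2):
    """ffmpeg command extracting `fps` frames per second of `video` into
    numbered JPEGs under `out_dir` (which must already exist)."""
    return ["ffmpeg", "-i", video, "-r", str(fps),
            "-qscale:v", "2", f"{out_dir}/frame_%04d.jpg"]

cmd = extract_frames_cmd("church.mp4", "frames")
print(shlex.join(cmd))
# → ffmpeg -i church.mp4 -r 2 -qscale:v 2 frames/frame_%04d.jpg
# run with: subprocess.run(cmd, check=True)
```

The resulting frames directory can then be dropped into Meshroom as a normal image folder.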

@lpla

lpla commented Jan 22, 2020

The original video is a MOV from an iPhone. I converted it into mp4 using ffmpeg, but got the same result.

Also, I tried to extract one frame per second as JPG using ffmpeg -i ../IMG_4200.MOV -r 1/1 $filename%03d.jpg but then, the meshroom_photogrammetry command from the binary release crashes with 'No input images':

$ ~/Meshroom-2019.2.0$ ./meshroom_photogrammetry --input /home/lpla/testChair --output /home/lpla/Meshroom-2019.2.0/testKeySelectionVideoChair
Plugins loaded:  CameraCalibration, CameraInit, CameraLocalization, CameraRigCalibration, CameraRigLocalization, ConvertSfMFormat, DepthMap, DepthMapFilter, ExportAnimatedCamera, ExportColoredPointCloud,
ExportMaya, FeatureExtraction, FeatureMatching, ImageMatching, ImageMatchingMultiSfM, KeyframeSelection, LDRToHDR, MeshDecimate, MeshDenoising, MeshFiltering, MeshResampling, Meshing, PrepareDenseScene, P
ublish, SfMAlignment, SfMTransform, StructureFromMotion, Texturing
Nodes to execute:  ['CameraInit_1', 'ImageMatching_1', 'StructureFromMotion_1', 'Meshing_1', 'MeshFiltering_1', 'Texturing_1', 'Publish_1']
WARNING: downgrade status on node "CameraInit_1" from Status.ERROR to Status.SUBMITTED
                                    
[1/7] CameraInit          
 - commandLine: aliceVision_cameraInit  --sensorDatabase "/home/lpla/Meshroom-2019.2.0/aliceVision/share/aliceVision/cameraSensors.db" --defaultFieldOfView 45.0 --groupCameraFallback folder --verboseLevel
 info --output "/tmp/MeshroomCache/CameraInit/c448939571d5c70b05c9ae4ad416ada37d6273ca/cameraInit.sfm" --allowSingleView 1
 - logFile: /tmp/MeshroomCache/CameraInit/c448939571d5c70b05c9ae4ad416ada37d6273ca/log
 - elapsed time: 0:00:00.044236
ERROR:root:Error on node computation: Error on node "CameraInit_1":
Log:                                                                                               
Program called with the following parameters:                                                    
 * allowSingleView = 1  
 * defaultCameraModel = "" (default)
 * defaultFieldOfView = 45                                      
 * defaultFocalLengthPix = -1 (default)
 * defaultIntrinsic = "" (default)
 * groupCameraFallback =  Unknown Type "20EGroupCameraFallback"                         
 * imageFolder = "" (default)                                                                 
 * input = "" (default)                                                           
 * output = "/tmp/MeshroomCache/CameraInit/c448939571d5c70b05c9ae4ad416ada37d6273ca/cameraInit.sfm"
 * sensorDatabase = "/home/lpla/Meshroom-2019.2.0/aliceVision/share/aliceVision/cameraSensors.db"
 * verboseLevel = "info"                                                          
                                  
[11:12:56.830408][error] Program need -i or --imageFolder option                                                         
No input images.                                              

WARNING: downgrade status on node "ImageMatching_1" from Status.SUBMITTED to Status.NONE
WARNING: downgrade status on node "StructureFromMotion_1" from Status.SUBMITTED to Status.NONE
WARNING: downgrade status on node "Meshing_1" from Status.SUBMITTED to Status.NONE        
WARNING: downgrade status on node "MeshFiltering_1" from Status.SUBMITTED to Status.NONE
WARNING: downgrade status on node "Texturing_1" from Status.SUBMITTED to Status.NONE
WARNING: downgrade status on node "Publish_1" from Status.SUBMITTED to Status.NONE
Traceback (most recent call last):           
  File "/opt/rh/rh-python36/root/usr/lib64/python3.6/site-packages/cx_Freeze/initscripts/__startup__.py", line 14, in run
  File "/opt/Meshroom/setupInitScriptUnix.py", line 39, in run
  File "bin/meshroom_photogrammetry", line 144, in <module>
  File "/opt/Meshroom/meshroom/core/graph.py", line 1131, in executeGraph
  File "/opt/Meshroom/meshroom/core/node.py", line 274, in process
  File "/opt/Meshroom/meshroom/nodes/aliceVision/CameraInit.py", line 239, in processChunk
  File "/opt/Meshroom/meshroom/core/desc.py", line 453, in processChunk
RuntimeError: Error on node "CameraInit_1":
Log:                                                                                               
Program called with the following parameters:                                                    
 * allowSingleView = 1  
 * defaultCameraModel = "" (default)
 * defaultFieldOfView = 45                                      
 * defaultFocalLengthPix = -1 (default)
 * defaultIntrinsic = "" (default)
 * groupCameraFallback =  Unknown Type "20EGroupCameraFallback"
 * imageFolder = "" (default)
 * input = "" (default)
 * output = "/tmp/MeshroomCache/CameraInit/c448939571d5c70b05c9ae4ad416ada37d6273ca/cameraInit.sfm"
 * sensorDatabase = "/home/lpla/Meshroom-2019.2.0/aliceVision/share/aliceVision/cameraSensors.db"
 * verboseLevel = "info"

[11:12:56.830408][error] Program need -i or --imageFolder option
No input images.

This also happened with the 2019.1.0 version.

Then I also tried to use Meshroom by cloning the GitHub repo, but I guess I need to compile AliceVision (https://github.com/alicevision/AliceVision/blob/develop/INSTALL.md) to make it work, so when I call the previous command in the cloned repo (after installing requirements.txt) I get:

$ bin/meshroom_photogrammetry 
Traceback (most recent call last):
  File "bin/meshroom_photogrammetry", line 7, in <module>
    import meshroom
ImportError: No module named meshroom

The only way I got these video frames working on my Ubuntu 18.04 Server was by connecting to the server using ssh -X from a computer running Wayland (Ubuntu 19.10 Wayland session login option) and running the Meshroom binary from the compiled release. If I try to run this remotely from a non-Wayland device (OS X, X11 session), OpenGL crashes appear.

@simogasp
Member

[11:12:56.830408][error] Program need -i or --imageFolder option

You need to set the imageFolder input on the CameraInit node.

@natowi natowi removed the scope:doc label Jan 28, 2020
@natowi
Member

natowi commented Jan 28, 2020

Issue resolved. Drag and drop import of videos will be supported in the next release https://github.com/alicevision/meshroom/blob/2381cde667d86dca13a539a3709865e8f68117d6/meshroom/ui/reconstruction.py#L621-L626. Details added to the documentation.

@natowi natowi closed this as completed Jan 28, 2020
@ad48hp

ad48hp commented Feb 29, 2020

Would it be possible to exploit optical flow for the computation?

@fabiencastan
Member

@ad48hp Yes, that would be cool. I would support any contribution in this direction.

@EwoutH
Contributor

EwoutH commented May 11, 2020

I have some videos without (correct) metadata. Would it be possible to estimate the FOV from a video file without manual input?

@fabiencastan
Member

Same as for photos: Meshroom will start with a 45° guess, and if your scene is constrained enough it should converge to the right value.
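For anyone curious what the 45° guess corresponds to: under the standard pinhole model (a general formula, not Meshroom-specific code, and assuming the FOV is horizontal), the focal length in pixels relates to the field of view as f = W / (2·tan(FOV/2)) for image width W.

```python
# Convert a field-of-view guess into a focal length in pixels.
import math

def focal_px(image_width, fov_deg):
    """Focal length in pixels for a given horizontal field of view."""
    return image_width / (2 * math.tan(math.radians(fov_deg) / 2))

# A 1920px-wide frame with the 45° default guess:
print(round(focal_px(1920, 45.0)))  # → 2318
```

The bundle adjustment then refines this initial value during reconstruction.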

@claudito1991

Hi there, I have started a new issue with a related problem; could anyone here help me? Issue #955
