
semiDepth

Tensorflow implementation of Semi-Supervised Monocular Depth Estimation with Left-Right Consistency Using Deep Neural Network.

Disclaimer:

Most of this code is based on monodepth. We extended their work and added lidar supervision to the training process. We take no credit for monodepth's contributions; therefore the file naming conventions are kept the same and the licenses remain intact. Please cite their work if you find it helpful.


Link to full video: https://www.youtube.com/watch?v=7ldCPJ60abw

Paper: Semi-Supervised Monocular Depth Estimation with Left-Right Consistency Using Deep Neural Network

Requirements

This code was tested with TensorFlow 1.12 and CUDA 9.0 on Ubuntu 16.04 and Gentoo.
Please download the KITTI depth annotation dataset and place it with the correct folder structure.
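For reference, a matching TensorFlow build can be installed with pip (a minimal sketch; any environment that provides TensorFlow 1.12 with GPU support should work):

pip3 install tensorflow-gpu==1.12.0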

Data

This model requires rectified stereo pairs and registered annotated depth maps for training.
There are two main datasets available: KITTI and Cityscapes.

Please follow the monodepth download instructions (we do not convert the images to jpg). We only used stereo images for training the Cityscapes model.

You can download the depth annotated data from this link. Please go to utils/filenames/eigen_train_files_withGT_annotated.txt and make sure your folder names and structure match the file.

eigen_train_files_withGT_annotated.txt is structured as below:

left_image right_image left_annotated_depth right_annotated_depth
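As an illustration, a single line might look like the following (hypothetical drive and frame names; the exact paths depend on how you arranged the KITTI raw data and the depth annotation dataset on disk):

2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000069.png 2011_09_26/2011_09_26_drive_0002_sync/image_03/data/0000000069.png train/2011_09_26_drive_0002_sync/proj_depth/groundtruth/image_02/0000000069.png train/2011_09_26_drive_0002_sync/proj_depth/groundtruth/image_03/0000000069.png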

Training

Eigen split:

python monodepth_main.py --mode train --model_name my_model --data_path ~/data/KITTI/ \
--filenames_file utils/filenames/eigen_train_files_withGT_annotated.txt --log_directory tmp/

You can continue training from the last saved checkpoint by pointing --checkpoint_path at it:

python monodepth_main.py --mode train --model_name my_model --data_path ~/data/KITTI/ \
--filenames_file utils/filenames/eigen_train_files_withGT_annotated.txt --log_directory ~/tmp/ \
--checkpoint_path tmp/my_model/model-5000

To fine-tune from a checkpoint, use --retrain.
For monitoring, use TensorBoard and point it at your log_directory (see the example below).
To apply the hotfix for the gradient smoothness loss bug, add --do_gradient_fix (we used this flag for all of our experiments).
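For example, assuming monodepth's convention of writing summaries to log_directory/model_name, the run above can be monitored with:

tensorboard --logdir tmp/my_model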

Please look at monodepth_main.py and the original monodepth GitHub repository for all available options.

Testing

To test, change the --mode flag to test and provide the path to your model checkpoint with --checkpoint_path. You can visualize the results with --save_visualized; we save the post-processed output for visualization. This should create a folder next to the model checkpoint folder containing the results:

python monodepth_main.py --mode test --data_path ~/data/KITTI/ \
--filenames_file utils/filenames/eigen_test_files.txt --log_directory tmp/ \
--checkpoint_path tmp/my_model/model-181250 --save_visualized

Testing on single image

To test the network on a single image, use monodepth_simple.py. It saves the output in the same directory as the input, using the input file name plus '_disp':

python monodepth_simple.py --image /path-to-image --checkpoint_path /path-to-model

Please note that there is NO extension after the checkpoint name.
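This is because a TensorFlow checkpoint consists of several files sharing a common prefix, and --checkpoint_path takes only that prefix. For example, for a checkpoint stored as:

tmp/my_model/model-181250.index
tmp/my_model/model-181250.meta
tmp/my_model/model-181250.data-00000-of-00001

you would pass --checkpoint_path tmp/my_model/model-181250.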

This will create a file named invDepth.npy containing the result.

Evaluation on KITTI

To evaluate on the Eigen split, we used 652 annotated images:

python2 utils/evaluate_kitti_depth.py --split eigen --predicted_disp_path \
models/eigen_finedTuned_cityscape_resnet50Forward/invDepth.npy  \
--gt_path /home/datasets/ --garg_crop --invdepth_provided --test_file \
utils/filenames/eigen_test_files_withGT.txt \
--shared_index utils/filenames/eigen692_652_shared_index.txt

Running the evaluation for eigen_finedTuned_cityscape_resnet50Forward with its invDepth.npy should give the results below:

abs_rel,     sq_rel,        rms,    log_rms,     d1_all,         a1,         a2,         a3
0.0784,     0.4174,      3.464,      0.126,      0.000,      0.923,      0.984,      0.995
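For reference, these are the standard Eigen et al. depth metrics. A minimal NumPy sketch of their definitions (an illustration, not the repository's evaluation script):

import numpy as np

def eigen_metrics(gt, pred):
    # Evaluate only where annotated ground-truth depth exists (the maps are sparse).
    mask = gt > 0
    gt, pred = gt[mask], pred[mask]
    # Threshold accuracies a1/a2/a3: fraction of pixels within 1.25^k of ground truth.
    thresh = np.maximum(gt / pred, pred / gt)
    a1, a2, a3 = [(thresh < 1.25 ** k).mean() for k in (1, 2, 3)]
    abs_rel = np.mean(np.abs(gt - pred) / gt)             # mean absolute relative error
    sq_rel = np.mean((gt - pred) ** 2 / gt)               # mean squared relative error
    rms = np.sqrt(np.mean((gt - pred) ** 2))              # root mean squared error
    log_rms = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    return abs_rel, sq_rel, rms, log_rms, a1, a2, a3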

Models

You can download our pre-trained models from the links below:

eigen_finedTuned_cityscape_resnet50Forward

cityscape_resnet50Forward

Results

You can download the npy file containing the results for the 697 Eigen test files from invDepth.npy. Our evaluation code filters out the 45 images for which there is no annotated depth map, using the file eigen692_652_shared_index.txt.
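A minimal sketch for inspecting the downloaded predictions, assuming invDepth.npy stores one inverse-depth map per test image in an array of shape (num_images, H, W):

import numpy as np

inv_depth = np.load('invDepth.npy')            # assumed shape: (num_images, H, W)
print('loaded predictions:', inv_depth.shape)

# Convert inverse depth to depth for the first image, guarding against division by zero.
depth = 1.0 / np.maximum(inv_depth[0], 1e-6)
print('depth range (m): %.2f to %.2f' % (depth.min(), depth.max()))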


License

Please have a look at the original monodepth license.
