An official PyTorch implementation of "Regression Prior Networks" for effective uncertainty estimation.
Includes an example on the NYUv2 dataset (monocular depth estimation).
This repo was tested with Python 3.7.6 and PyTorch 1.4.0.
All other requirements can be installed with conda:
conda env create -f requirements.yml
For NYUv2 training, we use the subsampled data (50K images) from the DenseDepth repo:
train data (4.1 GB) and test data (1.4 GB). Store both archives in the data folder without unpacking.
All trained checkpoints (the ensemble of Gaussians and our model) can be found here (1.6 GB). Extract them into the checkpoints folder.
To reproduce the reported test metrics (Table 3 in the paper), run:
bash experiments/reproduce_nyu_metrics.sh
OOD detection scores (Table 4) can be reproduced with:
bash experiments/reproduce_ood_scores.sh
Note that this requires an additional KITTI subset (437 MB). Unzip it into the data folder. (If you already have the KITTI dataset, you may simply take the first 654 images from test_depth_completion_anonymous.)
Finally, to get predictions for individual examples, use:
python get_nyu_samples.py --indices $DESIRED_INDICES
You may also retrain all NYUv2 Gaussian models with:
python nyu_train.py --checkpoint $CHECKPOINT_FOLDER_PATH --model_type "gaussian"
and then distill them into a single Normal-Wishart Prior (NWP) model with:
python nyu_train.py --checkpoint $CHECKPOINT_FOLDER_PATH --teacher_checkpoints $PATHS_TO_TEACHERS --model_type "nw_prior"
Please note that training uses all available GPUs by default and requires ~18.2 GB of GPU memory.
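For example, with three Gaussian teachers stored under hypothetical folders checkpoints/gaussian_1, checkpoints/gaussian_2 and checkpoints/gaussian_3, the distillation call could look like the following (assuming, as the plural placeholder suggests, that --teacher_checkpoints accepts several space-separated paths):
python nyu_train.py --checkpoint checkpoints/nwp_distilled --teacher_checkpoints checkpoints/gaussian_1 checkpoints/gaussian_2 checkpoints/gaussian_3 --model_type "nw_prior"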
To use the approach on your own regression task:
- Wrap the output of your model using one of our distribution_wrappers.
- (If feasible) Train an ensemble of base models with the NLL objective. You may inherit from our NLLSingleDistributionTrainer or use something similar.
- (If feasible) Distill the ensemble into a single Prior model by inheriting from the DistillationTrainer class and training with it (see nyu_trainers for an example).
- (If the two previous steps are not feasible, but you have OOD data to train on) Use the NWPriorRKLTrainer class for straightforward training. It requires additional hyperparameters: the OOD coefficient, the inverse train beta, and the prior OOD beta. These should be tuned; we recommend starting with 0.1, 1e-2, and 1e-2 respectively.
- At test time, wrap the model output and obtain the prior distribution; all desired uncertainty measures can be computed from it (see the sketch below).
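The sketch below illustrates this workflow with plain PyTorch rather than the repo's exact API: the toy model, the wrap_output helper, and all tensor shapes are hypothetical stand-ins, and a single Gaussian is used instead of the Normal-Wishart prior. It only shows the general wrap-train-query pattern; with a prior model, knowledge (distributional) uncertainty would additionally be available from the wrapped distribution.

```python
# Minimal, generic sketch of the steps above (NOT the repo's exact API).
# Assumes a regression network that predicts a mean and a log-variance per target.
import torch
import torch.nn as nn
from torch.distributions import Normal


class TinyGaussianRegressor(nn.Module):
    """Toy stand-in for a depth model: predicts mean and log-variance."""

    def __init__(self, in_dim=8):
        super().__init__()
        self.backbone = nn.Linear(in_dim, 16)
        self.head = nn.Linear(16, 2)  # [mean, log_var]

    def forward(self, x):
        h = torch.relu(self.backbone(x))
        mean, log_var = self.head(h).chunk(2, dim=-1)
        return mean, log_var


def wrap_output(mean, log_var):
    """Analogue of a distribution wrapper: turn raw outputs into a torch distribution."""
    return Normal(loc=mean, scale=torch.exp(0.5 * log_var))


model = TinyGaussianRegressor()
x, y = torch.randn(4, 8), torch.randn(4, 1)

# Training step with the NLL objective (the ensemble-training step in the list above).
dist = wrap_output(*model(x))
nll = -dist.log_prob(y).mean()
nll.backward()

# Test time (last step): query the predictive distribution for uncertainties.
with torch.no_grad():
    dist = wrap_output(*model(x))
    predictive_mean = dist.mean
    data_uncertainty = dist.variance  # aleatoric part for a single Gaussian model
    entropy = dist.entropy()          # another per-point uncertainty measure
```

In the repo itself, the wrapping steps correspond to the classes in distribution_wrappers, and the training steps to the trainer classes mentioned above.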
Planned additions:
- Advanced visualization of results
- Training script
- Evaluation on KITTI
If you find our work useful, please cite the corresponding paper:
@article{RPN20,
  author  = {Andrey Malinin and Sergey Chervontsev and Ivan Provilkov and Mark Gales},
  title   = {Regression Prior Networks},
  journal = {arXiv e-prints},
  volume  = {abs/2006.11590},
  year    = {2020},
  url     = {https://arxiv.org/abs/2006.11590},
  eid     = {arXiv:2006.11590},
  eprint  = {2006.11590}
}