Code for the CVPR 2019 paper "Contrast Prior and Fluid Pyramid Integration for RGBD Salient Object Detection".


For training:

  1. Clone this repository with `git clone https://github.com/JXingZhao/ContrastPrior.git --recursive`; the steps below assume your source directory is `$ContrastPrior`;

  2. Download the training data (extraction code: rmhn) and extract it to `$ContrastPrior/data/`;

  3. Build Caffe with `cd caffe && mkdir build && cd build && cmake .. && make -j32 && make pycaffe`;

  4. Download the initial model and put it into `$ContrastPrior/Model/`;

  5. Start training with `python run.py`.
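Assuming the training data and initial model have already been downloaded manually, the steps above can be condensed into a single shell session. This is a sketch, not part of the repository; the `-j32` parallelism comes from the original instructions and should be adjusted to your machine:

```shell
# Step 1: clone the repository together with its submodules
git clone https://github.com/JXingZhao/ContrastPrior.git --recursive
cd ContrastPrior                 # this directory is $ContrastPrior below

# Step 2: the training data archive (extraction code: rmhn) must be
# downloaded manually, then extracted into data/
mkdir -p data

# Step 3: build Caffe and its Python bindings
cd caffe
mkdir build && cd build
cmake ..
make -j32                        # adjust -j to your CPU core count
make pycaffe
cd ../..

# Step 4: place the manually downloaded initial model into Model/
mkdir -p Model

# Step 5: launch training
python run.py
```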

For testing:

  1. Download the pretrained model and put it into `$ContrastPrior/Model/`;

  2. Generate saliency maps with `python test.py`;

  3. Run `$ContrastPrior/evaluation/main.m` in MATLAB to evaluate the saliency maps.
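The testing steps above can likewise be driven from the shell. The MATLAB batch flags below are an assumption (one common way to run a script non-interactively); the original instructions only say to run `main.m`:

```shell
# Generate saliency maps using the pretrained model in Model/
python test.py

# Evaluate the generated maps; -nodisplay/-nosplash/-r is a common
# non-interactive MATLAB invocation (assumption, not from the repo)
matlab -nodisplay -nosplash -r "cd('evaluation'); main; exit"
```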

Pretrained models, datasets and results:

| Page | Training Set (rmhn) | All RGBD Datasets (xdvf) | Evaluation results |

If you find this work helpful, please cite:

@inproceedings{zhao2019Contrast,
  title={Contrast Prior and Fluid Pyramid Integration for RGBD Salient Object Detection},
  author={Zhao, Jia-Xing and Cao, Yang and Fan, Deng-Ping and Cheng, Ming-Ming and Li, Xuan-Yi and Zhang, Le},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019}
}

@inproceedings{fan2017structure,
  title={{Structure-measure: A New Way to Evaluate Foreground Maps}},
  author={Fan, Deng-Ping and Cheng, Ming-Ming and Liu, Yun and Li, Tao and Borji, Ali},
  booktitle={IEEE International Conference on Computer Vision (ICCV)},
  pages={4548--4557},
  year={2017},
  note={\url{http://dpfan.net/smeasure/}},
  organization={IEEE}
}
