LDF: Label Decoupling Framework for Salient Object Detection

Codes for the CVPR2020 paper "Label Decoupling Framework for Salient Object Detection"

by Jun Wei, Shuhui Wang, Zhe Wu, Chi Su, Qingming Huang, Qi Tian

Introduction

To get more accurate saliency maps, recent methods mainly focus on aggregating multi-level features from fully convolutional networks (FCN) and introducing edge information as auxiliary supervision. Though remarkable progress has been achieved, we observe that the closer a pixel is to the edge, the more difficult it is to predict, because edge pixels have a very imbalanced distribution. To address this problem, we propose a label decoupling framework (LDF), which consists of a label decoupling (LD) procedure and a feature interaction network (FIN). LD explicitly decomposes the original saliency map into a body map and a detail map, where the body map concentrates on the center areas of objects and the detail map focuses on regions around edges. The detail map works better because it involves many more pixels than traditional edge supervision. Different from the saliency map, the body map discards edge pixels and only pays attention to center areas, which avoids the distraction from edge pixels during training. Therefore, we employ two branches in FIN to deal with the body map and the detail map respectively. A feature interaction (FI) module is designed to fuse the two complementary branches to predict the saliency map, which is then used to refine the two branches again. This iterative refinement is helpful for learning better representations and more precise saliency maps. Comprehensive experiments on six benchmark datasets demonstrate that LDF outperforms state-of-the-art approaches on different evaluation metrics.
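The decoupling itself can be pictured with a distance transform: pixels deep inside an object are far from the background and dominate the body map, while pixels near the boundary are close to it and dominate the detail map. The snippet below is only an illustrative sketch of that idea; the official splitting procedure is implemented in utils.py and may differ in details such as normalization.

    # Illustrative label decoupling: split a binary GT mask into a body map
    # (interior-weighted) and a detail map (edge-weighted). This is only a
    # sketch; the repository's utils.py implements the official procedure.
    import cv2
    import numpy as np

    def decouple_label(mask_path):
        mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
        mask = (mask > 128).astype(np.uint8)             # binarize the saliency GT

        # Distance of each foreground pixel to the nearest background pixel:
        # large in object centers, small near edges.
        dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
        if dist.max() > 0:
            dist = dist / dist.max()                     # normalize to [0, 1]

        body = (255 * dist).astype(np.uint8)             # emphasizes center areas
        detail = np.where(mask > 0, 255 - body, 0).astype(np.uint8)  # emphasizes edge regions
        return body, detail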

Prerequisites

Clone repository

git clone https://github.com/weijun88/LDF.git
cd LDF/

Download dataset

Download the following datasets and unzip them into the data folder

Training & Evaluation

  • If you want to train the model yourself, please download the pretrained model into the res folder
  • Split the ground truth into body maps and detail maps, which will be saved into data/DUTS/body-origin and data/DUTS/detail-origin
    python3 utils.py
  • Train the model and get the predicted body and detail maps, which will be saved into data/DUTS/body and data/DUTS/detail
    cd train-coarse/
    python3 train.py
    python3 test.py
  • Use the above predicted maps to train the model again and predict the final saliency maps, which will be saved into the eval/maps/LDF folder (the overall folder layout is sketched after this list).
    cd ../train-fine/
    python3 train.py
    python3 test.py
  • Evaluate the predicted results.
    cd ../eval
    matlab
    main
  • Training twice is done to obtain smoother body and detail maps, as illustrated in the visualization figure.
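For reference, the folders named in the walkthrough above end up roughly as follows (only paths mentioned in this README are shown; the image and annotation subfolders follow however the downloaded datasets are organized):

    LDF/
      data/DUTS/body-origin/     # generated by utils.py
      data/DUTS/detail-origin/   # generated by utils.py
      data/DUTS/body/            # predicted by the coarse model (train-coarse)
      data/DUTS/detail/          # predicted by the coarse model (train-coarse)
      res/                       # pretrained model for training
      train-coarse/
      train-fine/out/            # trained LDF model for testing
      eval/maps/LDF/             # final predicted saliency maps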

Testing & Evaluation

  • If you just want to evaluate the performance of LDF without training, please download our trained model into the train-fine/out folder
  • Predict the saliency maps
    cd train-fine
    python3 test.py
  • Evaluate the predicted results (a Python sketch of the evaluation metrics follows this list)
    cd ../eval
    matlab
    main
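Both walkthroughs compute the reported numbers with the MATLAB scripts in the eval folder. If MATLAB is not available, the two most common saliency metrics (MAE and max F-measure) can be approximated in Python as sketched below; this is not the repository's official evaluation code and may differ slightly from the MATLAB results.

    # Rough Python approximation of MAE and max F-measure for saliency maps.
    # Not the official eval code (see the MATLAB scripts in eval/).
    import numpy as np

    def mae(pred, gt):
        """Mean absolute error between a prediction and GT, both scaled to [0, 1]."""
        return np.abs(pred - gt).mean()

    def max_f_measure(pred, gt, beta2=0.3, steps=255):
        """Max F-measure over thresholds, with the conventional beta^2 = 0.3."""
        gt_bin = gt > 0.5
        best = 0.0
        for t in np.linspace(0.0, 1.0, steps):
            binary = pred >= t
            tp = np.logical_and(binary, gt_bin).sum()
            precision = tp / (binary.sum() + 1e-8)
            recall = tp / (gt_bin.sum() + 1e-8)
            f = (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
            best = max(best, f)
        return best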

Saliency maps & Trained model

Citation

  • If you find this work helpful, please cite our paper:
@InProceedings{CVPR2020_LDF,
    author    = {Wei, Jun and Wang, Shuhui and Wu, Zhe and Su, Chi and Huang, Qingming and Tian, Qi},
    title     = {Label Decoupling Framework for Salient Object Detection},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2020}
}
