Predicting atypical visual saliency for autism spectrum disorder via scale-adaptive inception module and discriminative region enhancement loss
This repository contains a Keras implementation of our atypical visual saliency prediction model.
Please cite with the following BibTeX code:
@article{wei2020predicting,
title={Predicting atypical visual saliency for autism spectrum disorder via scale-adaptive inception module and discriminative region enhancement loss},
author={Wei, Weijie and Liu, Zhi and Huang, Lijin and Nebout, Alexis and Le Meur, Olivier and Zhang, Tianhong and Wang, Jijun and Xu, Lihua},
journal={Neurocomputing},
volume={453},
pages={610--622},
year={2021},
publisher={Elsevier}
}
Pretrained weights on Saliency4ASD
Train model from scratch
$ python train.py --train_set_path path/to/training/set --val_set_path path/to/validation/set
To train the model from our pretrained weights, download the weight file and put it into weights/.
$ python train.py --train_set_path path/to/training/set --val_set_path path/to/validation/set --model_path weights/weights_DRE_S4ASD--0.9714--1.0364.pkl --dreloss False
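As a rough illustration of the idea behind the DRE (discriminative region enhancement) loss, regions where the ASD and TD fixation maps disagree can be weighted more heavily during training. The sketch below is a simplified NumPy stand-in, not the exact loss from the paper; the function name `dre_weighted_mse`, the weighting scheme, and the `alpha` parameter are all illustrative assumptions.

```python
import numpy as np

def dre_weighted_mse(pred, asd_map, td_map, alpha=1.0):
    """Illustrative weighted MSE: pixels where the ASD and TD fixation
    maps disagree get a larger weight (alpha controls the boost).
    This is a simplified stand-in, NOT the paper's exact DRE loss."""
    weight = 1.0 + alpha * np.abs(asd_map - td_map)  # emphasize discriminative regions
    return float(np.mean(weight * (pred - asd_map) ** 2))

# Toy 2x2 maps: only the bottom-right pixel differs between ASD and TD.
pred = np.array([[0.2, 0.4], [0.6, 0.8]])
asd  = np.array([[0.2, 0.4], [0.6, 1.0]])
td   = np.array([[0.2, 0.4], [0.6, 0.2]])
loss = dre_weighted_mse(pred, asd, td)  # error at the discriminative pixel counts 1.8x
```

The intent is that prediction errors in regions that distinguish ASD from TD viewers are penalized more than errors elsewhere.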
The dataset directory structure should be:
└── Set
    ├── Images
    │   ├── 1.png
    │   └── ...
    ├── FixMaps
    │   ├── 1.png
    │   └── ...
    ├── FixPts
    │   ├── 1.mat
    │   └── ...
    (required only if the DRE loss is used)
    ├── FixMaps_TD
    │   ├── 1.png
    │   └── ...
    └── FixPts_TD
        ├── 1.mat
        └── ...
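To catch layout mistakes before training, the folders above can be checked with a small sketch like the following (`check_dataset` is a hypothetical convenience helper, not part of the repository):

```python
import os

def check_dataset(root, dre=False):
    """Return the list of expected dataset folders missing under `root`.
    Folder names follow the tree above; an empty list means the layout
    is complete. Illustrative helper, not repository code."""
    required = ["Images", "FixMaps", "FixPts"]
    if dre:  # the *_TD folders are only needed when training with the DRE loss
        required += ["FixMaps_TD", "FixPts_TD"]
    return [d for d in required if not os.path.isdir(os.path.join(root, d))]

# Example: check_dataset("path/to/Set", dre=True) -> list of missing folders
```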
Note: We convert the *_f.png files in Saliency4ASD\TrainingDataset\AdditionalData\ASD_FixPts\ to MAT files with the following code:
% MATLAB code
im = imread('1_f.png');
save('1.mat', 'im');
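If MATLAB is not available, the same conversion can be sketched in Python, assuming Pillow and SciPy are installed (`png_to_mat` is a hypothetical helper name, not part of the repository):

```python
import numpy as np
from PIL import Image
from scipy.io import savemat

def png_to_mat(png_path, mat_path):
    """Convert a fixation-point PNG (as in ASD_FixPts) to a MAT file
    storing the image under the variable name 'im', mirroring the
    MATLAB snippet above."""
    im = np.array(Image.open(png_path))
    savemat(mat_path, {'im': im})

# Example: png_to_mat('1_f.png', '1.mat')
```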
Clone this repository and download the pretrained weights. Then run:
$ python test.py --model-path weights/weights_DRE_S4ASD--0.9714--1.0364.pkl --images-path images/ --results-path results/
This will generate saliency maps for all images in the images/ directory and save them in the results/ directory.
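Saved saliency maps are typically min-max normalized to 8-bit grayscale before writing to PNG. The helper below is an illustrative sketch of that post-processing step (`to_uint8` is an assumed name; test.py may already handle this internally):

```python
import numpy as np

def to_uint8(saliency):
    """Min-max normalize a predicted saliency map to [0, 255] so it can
    be saved as an 8-bit grayscale PNG. Illustrative post-processing,
    not the repository's exact code."""
    s = saliency.astype(np.float64)
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)  # epsilon avoids divide-by-zero
    return (s * 255).astype(np.uint8)
```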
cuda 9.0
cudnn 7.0
python 3.5
keras 2.2.2
tensorflow 1.2.1
opencv3 3.1.0
matplotlib 2.0.2
The detailed environment dependencies are listed in environment.yaml. You can recreate the conda environment via
conda env create -f environment.yaml
It is recommended to compare with our model via online benchmarks.
If you are interested in comparing with our model on Saliency4ASD, please refer to ./DatasetPartition.txt for the specific indices of images.
The original Saliency4ASD only contains FixPts in PNG format. We provide a simple script to convert the PNG files to MAT files for easy use with our model.
test.py was missing a line to sort the file names. It has been fixed.
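Such a fix matters because plain lexicographic sorting orders '10.png' before '2.png', which can mispair images with their results. A numeric sort key avoids this; the helper below is illustrative, not the repository's exact code:

```python
import os
import re

def sorted_image_names(folder):
    """List a folder's files sorted by their leading number, so that
    '2.png' comes before '10.png'. Files without a leading number sort
    last, alphabetically. Illustrative helper, not repository code."""
    def key(name):
        m = re.match(r'(\d+)', name)
        return (int(m.group(1)) if m else float('inf'), name)
    return sorted(os.listdir(folder), key=key)
```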
Added the indices of images in the training, validation, and testing sets used in the ablation study.
The code is heavily inspired by the following projects:
- SAM : https://github.com/marcellacornia/sam
- EML-Net : https://github.com/SenJia/EML-NET-Saliency
Thanks for their contributions.
Many thanks to @Imposingapple for pointing out a bug and fixing it.
If you have any questions, please contact me at codename1995@shu.edu.cn or my supervisor Prof. Zhi Liu at liuzhi@staff.shu.edu.cn.
This code is distributed under MIT LICENSE.