This repo holds the code for our IEEE TCSVT paper *Pedestrian Alignment Network for Large-scale Person Re-identification*
(arXiv: https://arxiv.org/abs/1707.00408, IEEE: https://ieeexplore.ieee.org/document/8481710).
The main idea is to align pedestrians within their bounding boxes and reduce noisy factors, i.e., scale and pose variance.
For more details, you can see this png file. It is low-resolution for now, and I may replace it with a clearer version later.
1. Clone this repo:

```bash
git clone https://github.com/layumi/Pedestrian_Alignment.git
cd Pedestrian_Alignment
mkdir data
```
2. Download the pre-trained model and put it into `./data`:

```bash
cd data
wget http://www.vlfeat.org/matconvnet/models/imagenet-resnet-50-dag.mat
```
3. Compile MatConvNet.
(Note that MatConvNet is included in this repo, so you do not need to download it again. I have changed some code compared with the original version. For example, one difference is in `matlab/+dagnn/@DagNN/initParams.m`: if a layer already has params, it is not initialized again, which matters for pretrained models.)
You just need to uncomment and modify some lines in `gpu_compile.m` and run it in Matlab. Try it~
(The code does not support cuDNN 6.0. You may turn off `enableCudnn` or try cuDNN 5.1.)
If compilation fails, you may refer to http://www.vlfeat.org/matconvnet/install/
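Under the hood, compiling MatConvNet comes down to calling its standard `vl_compilenn` function; a minimal sketch of the relevant settings is below (the CUDA/cuDNN paths are placeholders for your local installation, not values from this repo):

```matlab
% Sketch of a typical MatConvNet GPU build via vl_compilenn.
% The paths below are placeholders -- point them at your own installs.
cd matlab
vl_compilenn('enableGpu', true, ...
             'cudaRoot', '/usr/local/cuda', ...   % CUDA toolkit location
             'enableCudnn', true, ...             % set to false to skip cuDNN
             'cudnnRoot', '/usr/local/cudnn');    % cuDNN 5.1 location
```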
Download the Market-1501 dataset. [Google] [Baidu]
For training on CUHK03, we follow the new evaluation protocol proposed in the CVPR 2017 paper. It conducts a multi-shot person re-ID evaluation and only needs to be run once.
- Add your dataset path to `prepare_data.m` and run it. Make sure the code outputs the correct image paths.
- Uncomment https://github.com/layumi/Pedestrian_Alignment/blob/master/resnet52_market.m#L23
- Run `train_id_net_res_market_new.m` to pretrain the base branch.
- Run `train_id_net_res_market_align.m` to fine-tune the whole net.
- Run `test/test_gallery_stn_base.m` and `test/test_gallery_stn_align.m` to extract image features from the base branch and the alignment branch. Note that you need to change the directory paths in the code. The features are stored in a `.mat` file, which you can then use for evaluation.
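As a rough illustration of how such stored features can be used for retrieval, here is a hypothetical sketch (the file names and the `ff` variable are assumptions for illustration only, not the repo's actual names):

```matlab
% Hypothetical sketch: rank gallery images by cosine similarity to a query.
% File and variable names below are placeholders, not the repo's actual names.
g = load('gallery_feature.mat');   % assumed field: g.ff, size D x Ng
q = load('query_feature.mat');     % assumed field: q.ff, size D x Nq
gf = bsxfun(@rdivide, g.ff, sqrt(sum(g.ff.^2, 1)));  % L2-normalize columns
qf = bsxfun(@rdivide, q.ff, sqrt(sum(q.ff.^2, 1)));
score = gf' * qf;                  % cosine similarity, Ng x Nq
[~, ranklist] = sort(score(:, 1), 'descend');  % ranking for the first query
```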
- Evaluate the features on Market-1501 by running `evaluation/zzd_evaluation_res_faster.m`. You should get a single-query result close to the following:

| Methods | Rank@1 | mAP |
| --- | --- | --- |
| Ours | 82.81% | 63.35% |
You may find our trained model at GoogleDrive.
We conduct an extra interesting experiment: when zooming in on the input image (adding scale variance), how does our alignment network react?
We observe a robust transform in the output image (the network keeps focusing on the human body and preserves its scale).
The left image is the input; the right image is the output of our network.
Please cite this paper in your publications if it helps your research:

```bibtex
@article{zheng2017pedestrian,
  title={Pedestrian Alignment Network for Large-scale Person Re-identification},
  author={Zheng, Zhedong and Zheng, Liang and Yang, Yi},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  year={2018},
  doi={10.1109/TCSVT.2018.2873599},
  note={\mbox{doi}:\url{10.1109/TCSVT.2018.2873599}},
}
```
Thanks to Qiule Sun for the suggestions.