
A Discriminatively Learned CNN Embedding for Person Re-identification

In this package, we provide our training and testing code, written in MatConvNet, for the paper A Discriminatively Learned CNN Embedding for Person Re-identification.

We also include matconvnet-beta23, which has been modified for our paper. All code has been tested on Ubuntu 14.04 and Ubuntu 16.04 with Matlab R2015b.

This code is ONLY released for academic use.

Resources

~What's new: We have added the data preparation and evaluation code for CUHK03.

~What's new: We have made the model-structure code easier to follow.

~What's new: We provide better code for feature extraction.

~What's new: We provide faster evaluation code for Market-1501.

Installation

  1. Clone this repo

    git clone https://github.com/layumi/2016_person_re-ID.git
    cd 2016_person_re-ID
    mkdir data
  2. Download the pretrained model.

    This model is ONLY released for academic use. You can find the pretrained model on Google Drive or [BaiduYun](https://pan.baidu.com/s/1miG2OpM). Download it and put it into ./data.

    BaiduYun sometimes changes the link. If you find that the URL is broken, you can contact me to update it.

  3. Compile MatConvNet. (Note that I have included my MatConvNet in this repo, so you do not need to download it again. I have changed some code compared with the original version. For example, one of the differences is in /matlab/+dagnn/@DagNN/initParams.m: if a layer already has params, it is not initialized again, which matters for pretrained models.)

    You just need to uncomment and modify some lines in gpu_compile.m and run it in Matlab, as sketched below. Try it~ (The code does not support cuDNN 6.0. You can turn off the enableCudnn option or use cuDNN 5.1.)

    If compilation fails, you may refer to http://www.vlfeat.org/matconvnet/install/.
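
    If you prefer to call the compiler directly, MatConvNet's standard entry point is vl_compilenn, which gpu_compile.m is presumably built around. The minimal sketch below is for illustration only: the CUDA and cuDNN paths are example values, not the exact lines in gpu_compile.m, so adapt them to your installation.

        % Minimal compile sketch (assumed paths; adjust to your machine).
        cd matconvnet;                                   % the bundled MatConvNet copy
        addpath matlab;
        vl_compilenn('enableGpu',   true, ...
                     'cudaRoot',    '/usr/local/cuda-8.0', ...   % your CUDA install
                     'enableCudnn', true, ...                    % set false to disable cuDNN
                     'cudnnRoot',   '/usr/local/cudnn-5.1');     % use cuDNN 5.1, not 6.0
        vl_testnn('gpu', true);                          % optional sanity check of the build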

Dataset

  • Download the Market1501 Dataset. [Google] [Baidu] The photos were taken at Tsinghua University.

  • DukeMTMC-reID is a larger dataset in the same format as Market1501. The photos were taken at Duke University. You can download it from the DukeMTMC-reID Dataset. We have also uploaded our result to the DukeMTMC-reID leaderboard.

  • If you want to reproduce our result on the CUHK03 Dataset, you can simply change the number of kernels from 751 to 1367 in resnet52_market.m and recreate net.mat, because there are 751 training identities in Market-1501 while CUHK03 has 1,367 (a hypothetical sketch of this change follows the list below). More information can be found in the cuhk03-prepare-eval dir, where we have added the data preparation and evaluation code for CUHK03.

  • Training dataset for Oxford5k (http://cmp.felk.cvut.cz/cnnimageretrieval/)
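
For reference, the identity classifier in these DagNN models is a 1×1 convolution whose output dimension equals the number of training identities, so switching datasets only changes that one number. The sketch below is hypothetical: the layer name, variable names, and the 2048-D input size are assumptions, so check resnet52_market.m for the actual values.

    % Hypothetical final classification layer; only the last size entry changes.
    numClasses = 1367;   % 751 for Market-1501, 1367 for CUHK03
    net.addLayer('fc_id', ...
        dagnn.Conv('size', [1 1 2048 numClasses], 'hasBias', true), ...
        {'pool5'}, {'prediction'}, {'fc_id_f', 'fc_id_b'});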

Test

  1. Run test/test_gallery_query_crazy.m to extract the features of the images in the gallery and query sets. They will be stored in a .mat file, which you can then use for evaluation.
  2. Evaluate the features on Market-1501. Run evaluation/zzd_evaluation_res_faster.m. You should get the following single-query result.
| Methods | Rank@1 | mAP |
| --- | --- | --- |
| Ours* (SQ) | 80.82% | 62.30% |
| Ours* (MQ-avg) | 86.67% | 70.16% |
| Ours* (MQ-max) | 86.76% | 70.68% |
| Ours* (MQ-max+rerank) | 86.67% | 72.55% |

*Note that the result is slightly higher than that reported in our paper.

*For the multi-query result, you can use evaluation/zzd_evaluation_res_fast.m. It is slower than evaluation/zzd_evaluation_res_faster.m since it needs to extract extra features. (The evaluation code is modified from the Market-1501 Baseline Code.)
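
To see roughly what the evaluation does, the sketch below performs single-query retrieval from extracted features with cosine similarity. The file name test_feature.mat and the variables galFea/probFea (features stored column-wise, dimension × number of images) are assumptions; the real scripts also use the label and camera information to exclude same-camera matches.

    % Hypothetical retrieval sketch with L2-normalised features (R2015b-safe).
    load('test_feature.mat');                                       % assumed file/variable names
    galFea  = bsxfun(@rdivide, galFea,  sqrt(sum(galFea.^2, 1)));   % normalise gallery features
    probFea = bsxfun(@rdivide, probFea, sqrt(sum(probFea.^2, 1)));  % normalise query features
    score = galFea' * probFea;                                      % cosine similarity, gallery x query
    [~, order] = sort(score(:, 1), 'descend');                      % ranking for the first query
    best_gallery_index = order(1);                                  % its rank-1 gallery image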

FAQ

  1. What is multi-query setting?

Instead of a single image, we can obtain a sequence of query images of the same person under one camera. We then use every image in this sequence to compute a mean query feature (the mean of the features extracted from the individual images). We call this setting multi-query. Retrieval with this mean feature usually gives a better result, but it uses additional images (in 'Market-1501/gt_bboxes'). You can find more details in the original paper.
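
As a rough sketch of that idea (seqFea and galFea are hypothetical names for the query-sequence features and the gallery features, stored column-wise):

    % Hypothetical multi-query sketch: average the sequence features, then retrieve.
    meanFea = mean(seqFea, 2);            % mean feature over the query sequence
    meanFea = meanFea / norm(meanFea);    % L2-normalise
    score = galFea' * meanFea;            % compare against all gallery features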

Train

  1. Add your dataset path into prepare_data.m and run it. Make sure the code outputs the correct image paths (see the sketch after this list).

  2. Run train_id_net_res_2stream.m to have fun.
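
As a quick sanity check before training, you can verify that the dataset path you put into prepare_data.m actually resolves to images. The folder layout below is Market-1501's; dataset_path is just an example name.

    % Example check of the dataset path used in prepare_data.m.
    dataset_path = '/path/to/Market-1501/';                    % your own path
    imgs = dir(fullfile(dataset_path, 'bounding_box_train', '*.jpg'));
    fprintf('Found %d training images.\n', numel(imgs));       % 12,936 for Market-1501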

Citation

Please cite this paper in your publications if it helps your research:

@article{zheng2016discriminatively,
  title={A Discriminatively Learned CNN Embedding for Person Re-identification},
  author={Zheng, Zhedong and Zheng, Liang and Yang, Yi},
  doi={10.1145/3159171},
  note={\mbox{doi}:\url{10.1145/3159171}},
  journal={ACM Transactions on Multimedia Computing, Communications, and Applications},
  year={2017}
}

Acknowledgements

Thanks to Xuanyi Dong for implementing our paper in Caffe.

Thanks to Weihang Chen for implementing our paper in Keras.

Thanks to Weihang Chen for reporting the bug in prepare_data.m.

Related Repos

  1. Person re-ID with GAN
  2. Pedestrian Alignment Network
