This repository contains the code for the 2019 iQIYI Celebrity Video Identification Challenge, which achieved an mAP score of 0.8949 on the test set (ranked 6th). It was inspired by Jasonbaby and created by Wenzhe Wang.
- Python 3.5
- tensorflow-gpu (I use 1.4.0)
- Keras (I use 2.0.8)
- Clone the iQIYI-VID repository into `$VID_ROOT`:

  ```shell
  git clone https://github.com/zhezheey/iQIYI-VID.git
  ```
- Install the Python packages you might not have from `requirements.txt`:

  ```shell
  pip install -r requirements.txt
  ```
- Download the iQIYI-VID dataset, then place `face_train_v2.pickle` and `face_val_v2.pickle` in the `$VID_ROOT/feat` directory, and `train_gt.txt` and `val_gt.txt` in the `$VID_ROOT/data` directory.
- Train the MLP models (see more details here):

  ```shell
  cd $VID_ROOT/train
  python get_gt.py
  # Change the batch_size in train.py according to your GPU memory.
  sh train.sh
  ```
- By default, trained models are saved under `$VID_ROOT/train/model`.
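The MLP classifiers themselves are defined in `train.py`, which is not reproduced in this README. As a rough illustration of the idea — a small MLP that maps pre-extracted face features to per-identity scores — here is a minimal NumPy sketch. The layer sizes, the two-layer shape, and all names are assumptions for illustration, not the repository's actual architecture:

```python
import numpy as np

def mlp_forward(feats, w1, b1, w2, b2):
    """Toy two-layer MLP: ReLU hidden layer, softmax over identities.

    feats: (batch, feat_dim) array of pre-extracted face features.
    Returns a (batch, num_ids) array of identity probabilities.
    """
    h = np.maximum(feats @ w1 + b1, 0.0)           # ReLU hidden layer
    logits = h @ w2 + b2
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)        # row-wise softmax

# Toy dimensions only; the real feature size and identity count
# come from the dataset, not from this sketch.
feat_dim, hidden, num_ids, batch = 512, 256, 1000, 4
rng = np.random.default_rng(0)
w1 = rng.standard_normal((feat_dim, hidden)) * 0.01
b1 = np.zeros(hidden)
w2 = rng.standard_normal((hidden, num_ids)) * 0.01
b2 = np.zeros(num_ids)

probs = mlp_forward(rng.standard_normal((batch, feat_dim)), w1, b1, w2, b2)
```

Each row of `probs` is a probability distribution over celebrity IDs; the actual training code optimizes such a classifier with Keras rather than raw NumPy.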
Follow the steps below to build the Docker image of our submission (see more details here).
- Move the trained models into the `$VID_ROOT/docker/resources` directory.
- Build the Docker image:

  ```shell
  cd $VID_ROOT/docker
  docker build -t zheey:1.0 -f Dockerfile .
  ```
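The repository's actual `Dockerfile` is not shown in this README. For orientation only, a submission image of this kind typically looks something like the sketch below — the base image, paths, and entry point are all assumptions, not the repository's real contents:

```dockerfile
# Hypothetical sketch only -- not this repository's actual Dockerfile.
FROM python:3.5

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

# Trained models were moved into docker/resources in the previous step.
COPY resources/ /app/resources/
COPY . /app

# Entry point name is assumed for illustration.
CMD ["python", "run.py"]
```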
Citation:

```bibtex
@article{liu2018iqiyi,
  title={{iQIYI-VID}: A large dataset for multi-modal person identification},
  author={Liu, Yuanliu and Shi, Peipei and Peng, Bo and Yan, He and Zhou, Yong and Han, Bing and Zheng, Yi and Lin, Chao and Jiang, Jianbin and Fan, Yin and others},
  journal={arXiv preprint arXiv:1811.07548},
  year={2018}
}
```