- This repo is a reimplementation of SeesawFaceNets (paper).
- It provides PyTorch implementations of the backbone modules of Arcface, MobileFacenet, and seesawfacenet (including seesaw_shareFaceNet, seesaw_shuffleFaceNet, DW_seesawFaceNetv1, and DW_seesawFaceNetv2).
- We built this repo based on the work of @TreB1eN (https://github.com/TreB1eN/InsightFace_Pytorch) and fixed a few bugs before usage.
- Pretrained models are posted, including the MobileFacenet from the original paper:
- seesawfacenet @ GoogleDrive
- seesawfacenet @ BaiduDisk (extraction code: exiy)
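For orientation, here is a minimal sketch of how one of the posted backbones could be loaded for inference. The import path, class name, constructor argument, and checkpoint filename below are assumptions inferred from the model names above, not the repo's exact API.
```
# Minimal loading sketch -- module/class/checkpoint names below are assumptions.
import torch
from model import DW_seesawFaceNetv2  # hypothetical import from this repo's model.py

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = DW_seesawFaceNetv2(embedding_size=512).to(device)   # 512-d embedding assumed
net.load_state_dict(torch.load('model/dw_seesaw_v2.pth', map_location=device))
net.eval()

with torch.no_grad():
    dummy = torch.randn(1, 3, 112, 112, device=device)    # aligned 112x112 face crop
    embedding = net(dummy)                                 # shape (1, 512)
```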
Clone the MTCNN repo used for face detection and alignment:
git clone https://github.com/TropComplique/mtcnn-pytorch.git
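The cloned repo's own example uses a detect_faces helper that takes a PIL image and returns bounding boxes and landmarks. A rough usage sketch (the sample image path is hypothetical, and the import assumes you run from inside the cloned directory):
```
# Rough usage sketch for the cloned mtcnn-pytorch detector.
from PIL import Image
from src import detect_faces  # provided by mtcnn-pytorch; run from inside its directory

img = Image.open('data/sample.jpg')            # hypothetical test image
bounding_boxes, landmarks = detect_faces(img)  # one row per detected face
print(len(bounding_boxes), 'face(s) detected')
```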
Provide the face images you want to detect in the data/facebank folder, and make sure it has a structure like the following:
data/facebank/
    ---> id1/
        ---> id1_1.jpg
    ---> id2/
        ---> id2_1.jpg
    ---> id3/
        ---> id3_1.jpg
        ---> id3_2.jpg
If more than one image appears in a folder, an average embedding will be calculated.
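A hedged sketch of how such an averaged embedding could be computed from the folder layout above; the function, preprocessing values, and file pattern here are illustrative and are not the repo's actual facebank-preparation code:
```
# Illustrative facebank averaging -- not the repo's actual implementation.
from pathlib import Path
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((112, 112)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])

def build_facebank(net, root='data/facebank', device='cpu'):
    names, embeddings = [], []
    for person_dir in sorted(Path(root).iterdir()):
        if not person_dir.is_dir():
            continue
        embs = []
        for img_path in sorted(person_dir.glob('*.jpg')):
            img = preprocess(Image.open(img_path).convert('RGB')).unsqueeze(0).to(device)
            with torch.no_grad():
                embs.append(net(img))
        if embs:
            # average the per-image embeddings, then re-normalize to unit length
            mean_emb = torch.cat(embs).mean(dim=0, keepdim=True)
            embeddings.append(F.normalize(mean_emb))
            names.append(person_dir.name)
    return names, torch.cat(embeddings)
```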
3.2.3 Prepare the training dataset (MS1MV2, also known as face_emore or refined MS1M; for training details, refer to the original paper)
download the MS1MV2 dataset:
- emore dataset @ BaiduDrive, emore dataset @ Dropbox
- For more datasets, please refer to the original post.
Note: If you use MS1MV2 dataset and the cropped VGG2 dataset, please cite the original papers.
After unzipping the files to the 'data' path, run:
python prepare_data.py
After execution, you should find the following structure:
faces_emore/
    ---> agedb_30
    ---> calfw
    ---> cfp_ff
    ---> cfp_fp
    ---> cplfw
    ---> imgs
    ---> lfw
    ---> vgg2_fp
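After extraction, the imgs folder can be consumed as an ordinary identity-per-folder dataset. A minimal sketch using torchvision's ImageFolder (the transform values follow the common 112x112 / mean-0.5 convention and are assumptions, not necessarily what train.py uses):
```
# Minimal training-data loading sketch; not necessarily identical to train.py.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])

train_set = datasets.ImageFolder('data/faces_emore/imgs', transform=train_transform)
train_loader = DataLoader(train_set, batch_size=256, shuffle=True,
                          num_workers=8, pin_memory=True)
print(len(train_set), 'images,', len(train_set.classes), 'identities')
```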
- Download the desired weights into the 'model' folder (see the links above).
2 To take a picture, run:
python take_pic.py -n name
Press 'q' to take a picture; if more than one person appears in the camera, only the face with the highest detection confidence will be captured.
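For reference, an illustrative OpenCV loop that captures one frame on 'q' and saves it under the facebank layout; take_pic.py's actual implementation (face selection, file naming) may differ:
```
# Illustrative webcam capture; take_pic.py's real logic may differ.
import os
import sys
import cv2

name = sys.argv[1] if len(sys.argv) > 1 else 'unknown'
save_dir = os.path.join('data', 'facebank', name)
os.makedirs(save_dir, exist_ok=True)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('take_pic', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to capture and exit
        cv2.imwrite(os.path.join(save_dir, f'{name}.jpg'), frame)
        break
cap.release()
cv2.destroyAllWindows()
```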
3 Or you can put any preexisting photos into the facebank directory; the file structure is as follows:
facebank/
    name1/
        photo1.jpg
        photo2.jpg
        ...
    name2/
        photo1.jpg
        photo2.jpg
        ...
    ...
If more than one image appears in a directory, an average embedding will be calculated.
4 To start face verification, run:
python face_verify.py
5 Or, to run verification on a video file:
```
python infer_on_video.py -f [video file name] -s [save file name]
```
The video file should be placed inside the data/face_bank folder.
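Under the hood, verification boils down to comparing the query embedding against the facebank embeddings by distance and applying a threshold. A hedged sketch of that matching step (the threshold value is a typical choice for squared L2 distance on unit-normalized embeddings, not necessarily the repo's default):
```
# Illustrative matching step; threshold and helper name are assumptions.
import torch

def identify(query_emb, bank_embs, names, threshold=1.5):
    """query_emb: (1, D); bank_embs: (N, D); both L2-normalized."""
    dists = (bank_embs - query_emb).pow(2).sum(dim=1)  # squared L2 distance per identity
    best = torch.argmin(dists)
    if dists[best] < threshold:
        return names[best], dists[best].item()
    return 'unknown', dists[best].item()
```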
Previous work on MTCNN for the Android platform and face cropping:
- mtcnn_android_native
- Face-extractor-based-on-mtcnn
To train a model, run:
```
python train.py -b [batch_size] -lr [learning rate] -e [epochs]
# python train.py -net mobilefacenet -b 256 -w 24
```
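For context, here is a compact sketch of the additive angular margin (ArcFace) head that such backbones are typically trained with; s=64 and m=0.5 are the ArcFace paper's defaults, and this class is illustrative rather than the repo's exact head.
```
# Illustrative ArcFace-style margin head; not the repo's exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginHead(nn.Module):
    def __init__(self, embedding_size=512, num_classes=85742, s=64.0, m=0.5):
        super().__init__()
        # num_classes defaults to the identity count commonly cited for MS1MV2
        self.weight = nn.Parameter(torch.randn(num_classes, embedding_size))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = s, m

    def forward(self, embeddings, labels):
        # cosine similarity between normalized embeddings and class centers
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        target = torch.cos(theta + self.m)  # add the angular margin m to the target class
        one_hot = F.one_hot(labels, cosine.size(1)).float()
        logits = self.s * (one_hot * target + (1 - one_hot) * cosine)
        return F.cross_entropy(logits, labels)
```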
- This repo is mainly based on TreB1eN/InsightFace_Pytorch and cvtower/SeesawNet_pytorch, and inspired by deepinsight/insightface as well.
- PRs are welcome, especially models for mobile platforms.
- Email : jtzhangcas@gmail.com
Citation
Please cite our papers in your publications if they help your research:
    @misc{zhang2019seesawnet,
        title={Seesaw-Net: Convolution Neural Network With Uneven Group Convolution},
        author={Jintao Zhang},
        year={2019},
        eprint={1905.03672},
        archivePrefix={arXiv},
        primaryClass={cs.CV}
    }

    @misc{zhang2019seesawfacenets,
        title={SeesawFaceNets: sparse and robust face verification model for mobile platform},
        author={Jintao Zhang},
        year={2019},
        eprint={1908.09124},
        archivePrefix={arXiv},
        primaryClass={cs.CV}
    }