OpenCV-TensorFlow-MXNet implementation of real-time face tracking and verification using either CPU or GPU. The pretrained models used for face verification are TensorFlow implementations of FaceNet obtained from David Sandberg's GitHub repo. Instead of the Viola-Jones face detector, this repository uses an implementation of the MTCNN model for face detection, tracking initialization, and database creation. (The original paper can be found here.)
The code has been tested with Python 3.7.1 under Ubuntu 18.10.
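As a rough illustration of the detect-then-track flow, here is a minimal sketch using the `mtcnn` PyPI package and OpenCV's KCF tracker; the actual detector and tracker implementations used in this repo may differ.

```python
# Minimal sketch of MTCNN detection followed by tracker initialization.
# The `mtcnn` package and cv2.TrackerKCF_create (from opencv-contrib-python)
# are used purely for illustration; this repo's own classes may differ.
import cv2
from mtcnn import MTCNN

detector = MTCNN()
cap = cv2.VideoCapture(0)              # webcam feed
ok, frame = cap.read()

# MTCNN expects RGB images; OpenCV delivers BGR
faces = detector.detect_faces(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

trackers = []
for face in faces:
    x, y, w, h = face['box']           # bounding box of one detected face
    tracker = cv2.TrackerKCF_create()  # cv2.legacy.TrackerKCF_create on newer OpenCV
    tracker.init(frame, (x, y, w, h))
    trackers.append(tracker)

# On subsequent frames, tracker.update(frame) returns (ok, box) for each tracker
```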
- Clone this repo and cd to the 'models' directory.
- Download either FaceNet model from Sandberg's repo.
- Run `python prepare_facenet.py path/to/facenet.zip` and cd back to the parent directory (a sketch of loading the prepared model follows this list).
- Cd to the 'src' directory.
- Run `python main.py demo/avengers.mp4 demo/user_data.csv demo/embeddings.npy`.
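For reference, Sandberg's FaceNet models ship as frozen TensorFlow graphs, so loading the prepared model and computing an embedding could look roughly like the sketch below. The model path `models/facenet.pb` and the tensor names are assumptions; adjust them to whatever prepare_facenet.py actually produces.

```python
# Hedged sketch: loading a frozen FaceNet graph and computing one embedding.
# The model path and tensor names below are assumptions based on Sandberg's
# FaceNet release; they may not match what prepare_facenet.py outputs.
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

MODEL_PB = 'models/facenet.pb'   # assumed output location of prepare_facenet.py

with tf.gfile.GFile(MODEL_PB, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    images = graph.get_tensor_by_name('input:0')
    embeddings = graph.get_tensor_by_name('embeddings:0')
    phase_train = graph.get_tensor_by_name('phase_train:0')

    with tf.Session(graph=graph) as sess:
        # One dummy 160x160 RGB face crop; real crops should be prewhitened
        face = np.zeros((1, 160, 160, 3), dtype=np.float32)
        emb = sess.run(embeddings, feed_dict={images: face, phase_train: False})
        print(emb.shape)  # (1, 128) or (1, 512) depending on the model
```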
The program takes three inputs: a video file (webcam feed or a downloaded video), a user metadata file, and an embeddings file (the latter two are created by the create_database.py script). The metadata and embeddings are derived from an image folder containing images of all the users to be identified. As a prerequisite you should have both the video file and the images folder. Note that the filename of a user's image is used as that user's username.
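To make the role of these files concrete, here is a hedged sketch of how verification against the stored database could work. It assumes embeddings.npy holds an (N, D) array whose rows line up with the usernames in user_data.csv, and uses a plain Euclidean-distance threshold; the repo's actual file layout and matching logic may differ.

```python
# Hedged sketch of verification against the stored database.
# Assumes embeddings.npy is an (N, D) array whose rows correspond, in order,
# to the usernames listed in user_data.csv; the real file layout may differ.
import csv
import numpy as np

def load_database(csv_path, npy_path):
    with open(csv_path, newline='') as f:
        names = [row[0] for row in csv.reader(f)]
    embeddings = np.load(npy_path)          # shape (N, D), L2-normalized
    return names, embeddings

def identify(face_embedding, names, embeddings, threshold=1.1):
    # Euclidean distance between the query embedding and every stored one
    dists = np.linalg.norm(embeddings - face_embedding, axis=1)
    best = int(np.argmin(dists))
    # threshold=1.1 is a typical FaceNet value, not taken from this repo
    return names[best] if dists[best] < threshold else 'unknown'
```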
- Cd to the 'src' directory.
- Run `python create_database.py output/user_data.csv output/embeddings.npy path/to/images/` to create the embeddings.
- Run `python main.py video_file.mp4 output/user_data.csv output/embeddings.npy` for face tracking and verification. If you wish to keep track of how long a user appears in the video, run screentime.py instead of main.py (see the screen-time sketch after this list).
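screentime.py's exact logic is not reproduced here; the sketch below only illustrates the general idea of counting the frames in which each identified user appears and converting those counts to seconds using the video's frame rate. `detect_and_identify` is a hypothetical stand-in for the detection and verification step.

```python
# Hedged sketch of screen-time accounting: count the frames in which each
# identified user appears and convert frame counts to seconds via the FPS.
# `detect_and_identify` is a hypothetical helper, not part of this repo's API.
from collections import Counter
import cv2

def screen_time(video_path, detect_and_identify):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS is unknown
    frames_per_user = Counter()

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # detect_and_identify(frame) -> list of usernames seen in this frame
        for name in set(detect_and_identify(frame)):
            frames_per_user[name] += 1

    cap.release()
    return {name: count / fps for name, count in frames_per_user.items()}
```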