
YOLOFace

Deep learning-based face detection using the YOLOv3 algorithm

Getting started

YOLOv3 (You Only Look Once, version 3) is a state-of-the-art, real-time object detection algorithm. The published model recognizes 80 different object classes in images and videos. For more details, you can refer to the YOLOv3 paper.

YOLOv3's architecture

[Image: YOLOv3 architecture diagram]

Credit: Ayoosh Kathuria

OpenCV Deep Neural Networks (dnn module)

The OpenCV dnn module supports running inference on pre-trained deep learning models from popular frameworks such as TensorFlow, Torch, Darknet, and Caffe.
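
For example, a Darknet model can be loaded through the dnn module in a few lines. Below is a minimal sketch; the config and weights paths are assumptions and should be adjusted to where the files live in your checkout:

    import cv2

    # Load a YOLOv3 face network from its Darknet config and weights
    # (both paths are assumptions -- point them at your own files)
    net = cv2.dnn.readNetFromDarknet('cfg/yolov3-face.cfg',
                                     'model-weights/yolov3-wider_16000.weights')

    # Run on the CPU via OpenCV's own backend
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)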

Prerequisites

  • TensorFlow
  • opencv-python
  • opencv-contrib-python
  • NumPy
  • Keras
  • Matplotlib
  • Pillow

Development for this project is isolated in a Python virtual environment, which lets us experiment with different versions of the dependencies.

There are many ways to install virtualenv; see the Python Virtual Environments: A Primer guide for different platforms. Here are a couple:

  • For Ubuntu
$ pip install virtualenv
  • For Mac
$ pip install --upgrade virtualenv

Create a Python 3.6 virtual environment for this project and activate the virtualenv:

$ virtualenv -p python3.6 yoloface
$ source ./yoloface/bin/activate

Next, install the dependencies for this project:

$ pip install -r requirements.txt

Usage

  • Clone this repository
$ git clone https://github.com/sthanhng/yoloface
  • For face detection, download the pre-trained YOLOv3 weights file, which was trained on the WIDER FACE: A Face Detection Benchmark dataset, from this link and place it in the model-weights/ directory.

  • Run one of the following commands, depending on the input source (a sketch of what happens per frame follows the commands):

image input

$ python yoloface.py --image samples/outside_000001.jpg --output-dir outputs/

video input

$ python yoloface.py --video samples/subway.mp4 --output-dir outputs/

webcam

$ python yoloface.py --src 1 --output-dir outputs/
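
Internally, a run like the ones above typically shapes each frame into a blob, pushes it through the network, and prunes overlapping detections with non-maximum suppression. Here is a minimal per-frame sketch; the 416x416 input size and the two thresholds are common YOLOv3 defaults, assumed here rather than read from the repository:

    import cv2
    import numpy as np

    def detect_faces(net, frame, conf_threshold=0.5, nms_threshold=0.4):
        """Run one frame through a YOLOv3 face network and return face boxes."""
        h, w = frame.shape[:2]

        # Normalize pixel values and resize to the 416x416 input YOLOv3 expects
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                     [0, 0, 0], swapRB=True, crop=False)
        net.setInput(blob)
        outs = net.forward(net.getUnconnectedOutLayersNames())

        boxes, confidences = [], []
        for out in outs:
            for det in out:
                # Each row: [cx, cy, bw, bh, objectness, class scores...]
                confidence = float(det[5:].max())
                if confidence > conf_threshold:
                    # Convert center-relative coords to pixel-space corner boxes
                    cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                    boxes.append([int(cx - bw / 2), int(cy - bh / 2),
                                  int(bw), int(bh)])
                    confidences.append(confidence)

        # Non-maximum suppression drops overlapping duplicate detections
        idxs = cv2.dnn.NMSBoxes(boxes, confidences, conf_threshold, nms_threshold)
        return [boxes[i] for i in np.array(idxs).flatten()]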

Sample outputs

[Image: sample face detection outputs]

License

This project is licensed under the MIT License - see the LICENSE.md file for more details.

References