
Liveness Detection System

Prototype of a liveness detection system to identify spoofs in videos from camera selfies. The project includes training, evaluation, inference and adversarial attack generation to test and improve the liveness detection model.


>>> Please see the comprehensive project report for a complete explanation of this project (link below) <<<

Table of Contents

Installation

Using Conda

  1. Clone the repository:

    git clone https://github.com/andreluizbvs/liveness_system.git
    cd liveness_system
  2. Create and activate the Conda environment (the first option, using the YAML file, is recommended):

    conda env create -f environment.yml
    conda activate liveness

    or

    conda create -n liveness python=3.12
    conda activate liveness
    pip install -r requirements.txt
    conda install -c conda-forge libstdcxx-ng
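
    After activating the environment, a quick sanity check can confirm the interpreter matches the version pinned above. This is a minimal sketch for illustration, not part of the repository:

    ```python
    import sys

    def check_python(required=(3, 12)):
        """Return True if the active interpreter matches the pinned version."""
        # The environment above pins Python 3.12; running this from the wrong
        # environment (e.g. base) is a common source of import errors.
        return sys.version_info[:2] == tuple(required)

    if __name__ == "__main__":
        print("Environment OK" if check_python() else "Wrong Python version")
    ```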

Usage

  1. Prepare model weights and datasets. Place datasets in the data/ folder, and the weights in the ckpt/ folder, both at this project's root directory:

    • [Required] Download the datasets here. Extract the zip contents in the data/ folder.

      Note: it is not necessary to download the whole CelebA-Spoof dataset, which is very large (77 GB); if you do need it, go here;

    • [Required] Download the pretrained models here.

  2. Run the setup script:

    pip install -e .
  3. Run the liveness inference script to see the system working on an image or a video. See an example below:

    cd src/
    python liveness_inference.py ../data/celebA-spoof/CelebA_Spoof_/CelebA_Spoof/Data/test/3613/spoof/541354.png

    You can pass image and video paths here. Note: you can append "2>/dev/null" to the command to suppress warnings; remove it if you wish to see them.
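
    Since the inference script accepts both images and videos, one way to dispatch on the input type is by file extension. The sketch below is illustrative only; the helper name and extension sets are assumptions, not the repository's actual code:

    ```python
    from pathlib import Path

    # Illustrative extension sets; the real script may support more formats.
    IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".bmp"}
    VIDEO_EXTS = {".mp4", ".avi", ".mov", ".mkv"}

    def media_kind(path: str) -> str:
        """Classify an input path as 'image', 'video', or 'unknown' by suffix."""
        ext = Path(path).suffix.lower()
        if ext in IMAGE_EXTS:
            return "image"
        if ext in VIDEO_EXTS:
            return "video"
        return "unknown"
    ```

    For example, `media_kind("541354.png")` returns `"image"`, so a single entry point can route stills and selfie videos to different processing paths.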

  4. [Recommended] Experiment with the three provided Jupyter notebooks. Some pre-loaded results and sample input images are already included:

    • src/tools/adversarial_attack_manipulation.ipynb
    • src/tools/liveness_predict.ipynb
    • src/tools/liveness_output_analysis.ipynb

    These are quite intuitive. For more information on them, please check out the project report. A sample output of liveness_output_analysis.ipynb is shown in the analysis figure.

  5. To train and evaluate one of the two models (SiliconeMaskModel or FaceDepthModel), run the train script. Here are two examples:

    python train.py --data_path ../data/silicone_faces --model_name silicone

    or

    python train.py --data_path ../data/celebA-spoof --model_name depth

    Both will output the model's accuracy, precision, recall, and F1-score. After the training session, the script will also automatically fine-tune the chosen model on adversarial-attack-augmented data; this data is generated automatically and passed to the model. In the end, a comparison of the model's performance against the adversarial attack data, with and without fine-tuning, will be shown.
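
    For reference, the reported metrics can be computed from binary predictions as in the self-contained sketch below. This is not the repository's evaluation code, and the convention that label 1 denotes a spoof is an assumption for illustration:

    ```python
    def binary_metrics(y_true, y_pred):
        """Accuracy, precision, recall, and F1 for binary labels (1 = spoof, assumed)."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        accuracy = (tp + tn) / len(y_true)
        # Guard against division by zero when a class is never predicted/present.
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return {"accuracy": accuracy, "precision": precision,
                "recall": recall, "f1": f1}
    ```

    Comparing these four numbers before and after the adversarial fine-tuning pass is what the end-of-training comparison amounts to.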

  6. To evaluate the models in the CelebA-Spoof Test set, simply run the following:

    python evaluate.py

Disclaimer

This project is provided "as is" without any warranties of any kind, either express or implied. Use at your own risk.
