Prototype of a liveness detection system that identifies spoofs in camera selfie videos. The project includes training, evaluation, inference, and adversarial attack generation to test and improve the liveness detection model.
>>> Please see the comprehensive project report for a complete explanation of this project (link below) <<<
- Clone the repository:

  ```bash
  git clone https://github.com/andreluizbvs/liveness_system.git
  cd liveness_system
  ```
- Create and activate the Conda environment (the first option, using the YML file, is recommended):

  ```bash
  conda env create -f environment.yml
  conda activate liveness
  ```

  or

  ```bash
  conda create -n liveness python=3.12
  conda activate liveness
  pip install -r requirements.txt
  conda install -c conda-forge libstdcxx-ng
  ```
- Prepare the model weights and datasets. Place the datasets in the `data/` folder and the weights in the `ckpt/` folder, both at this project's root directory.
- Run the setup script:

  ```bash
  pip install -e .
  ```
- Run the liveness inference script to see the system working on an image or a video (a programmatic sketch of the same idea follows this step). See an example below:

  ```bash
  cd src/
  python liveness_inference.py ../data/celebA-spoof/CelebA_Spoof_/CelebA_Spoof/Data/test/3613/spoof/541354.png
  ```

  You can pass image and video paths here. Note: you can append `2>/dev/null` to the command to suppress warnings; remove it if you wish to see them.
- [Recommended] Experiment with the three provided Jupyter notebooks. They come with some pre-loaded results and sample input images:

  - `src/tools/adversarial_attack_manipulation.ipynb`
  - `src/tools/liveness_predict.ipynb`
  - `src/tools/liveness_output_analysis.ipynb`
  These are quite intuitive; for more information on them, please check out the project report. Here is a sample output of `liveness_output_analysis.ipynb`:

  [sample output of `liveness_output_analysis.ipynb`]
- To train one of the two models (`SiliconeMaskModel` or `FaceDepthModel`), run the train script, which trains and evaluates the liveness detection model. Here are two examples:

  ```bash
  python train.py --data_path ../data/silicone_faces --model_name silicone
  ```

  or

  ```bash
  python train.py --data_path ../data/celebA-spoof --model_name depth
  ```

  Both will output the accuracy, precision, recall, and F1-score of the model. After its training session, the chosen model is also automatically fine-tuned on adversarial-attack-augmented data; this data is generated automatically and passed to the model. In the end, a comparison of the model's performance against the adversarial attack data, with and without fine-tuning, is shown. A generic sketch of the adversarial attack idea follows this step.
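
  As a rough illustration of how adversarial examples like these are commonly generated, here is a generic FGSM (fast gradient sign method) sketch in PyTorch. This is an assumption for illustration only; the repository's actual attack and model interfaces may differ.

  ```python
  # Generic one-step FGSM attack (illustrative, not the repo's exact code).
  import torch
  import torch.nn.functional as F

  def fgsm_attack(model: torch.nn.Module,
                  images: torch.Tensor,
                  labels: torch.Tensor,
                  epsilon: float = 0.03) -> torch.Tensor:
      """Perturb `images` in the gradient-sign direction to fool `model`."""
      images = images.clone().detach().requires_grad_(True)
      loss = F.cross_entropy(model(images), labels)
      loss.backward()
      # Step each pixel by epsilon in the direction that increases the loss.
      adv = images + epsilon * images.grad.sign()
      return adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
  ```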
- To evaluate the models on the CelebA-Spoof test set, simply run the following (a sketch for reproducing the reported metrics follows):

  ```bash
  python evaluate.py
  ```
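
  The reported metrics (accuracy, precision, recall, F1-score) can also be reproduced from raw predictions with scikit-learn. A minimal sketch, using toy labels that are illustrative rather than real results:

  ```python
  # Minimal sketch: computing the reported metrics from binary predictions.
  from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

  y_true = [1, 0, 1, 1, 0]  # ground truth: 1 = live, 0 = spoof (toy example)
  y_pred = [1, 0, 0, 1, 0]  # model predictions

  print("accuracy :", accuracy_score(y_true, y_pred))
  print("precision:", precision_score(y_true, y_pred))
  print("recall   :", recall_score(y_true, y_pred))
  print("F1       :", f1_score(y_true, y_pred))
  ```
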
This project is provided "as is" without any warranties of any kind, either express or implied. Use at your own risk.