- The aim of this project is to classify people's emotions based on their facial images.
- The project is divided into two parts, which are combined to produce the final output:
- Facial key points detection model
- Facial expression detection model
- There are around 20,000 facial images with their associated facial expression labels, and 2,000 images with facial key-point annotations.
- The goal is to train a model that automatically recognizes people's emotions and expressions.
- Created a deep learning model based on a convolutional neural network (CNN) with residual blocks to predict facial key points.
- The dataset consists of the x and y coordinates of 15 facial key points (a minimal loading sketch is shown after this list).
- Input images are 96x96 pixels.
- Images consist of only one color channel, i.e., they are grayscale.
- Dataset Source: Kaggle
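A minimal loading sketch is shown below. It assumes a Kaggle-style CSV layout in which each row holds the 30 key-point coordinates plus an `Image` column storing the 96x96 grayscale pixels as a space-separated string; the file name and column names here are assumptions, not taken from this repo.

```python
import numpy as np
import pandas as pd

def load_keypoint_data(csv_path="data.csv"):
    """Load 96x96 grayscale images and their 15 (x, y) key points.

    Assumes a Kaggle-style CSV: 30 coordinate columns plus an 'Image'
    column holding space-separated pixel values (names are assumptions).
    """
    df = pd.read_csv(csv_path).dropna()  # keep rows with all 15 key points annotated

    # Parse each row's pixel string into a 96x96x1 float image scaled to [0, 1]
    images = np.stack(
        [np.array(s.split(), dtype="float32") for s in df["Image"]]
    ).reshape(-1, 96, 96, 1) / 255.0

    # The remaining 30 columns are the x and y coordinates of the key points
    keypoints = df.drop(columns=["Image"]).to_numpy(dtype="float32")
    return images, keypoints
```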
How?
- Pipeline: Input Image -> Trained Facial Key Points Detector Model -> x and y coordinates of the 15 facial key points (a model sketch is given below).
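A minimal Keras sketch of such a detector is given below; the exact layer sizes are illustrative assumptions rather than the architecture used in the notebook. It combines plain convolutions with residual (skip-connection) blocks and regresses the 30 coordinate values.

```python
from tensorflow.keras import layers, models

def residual_block(x, filters):
    """Two 3x3 convolutions with an identity (skip) connection."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    if shortcut.shape[-1] != filters:             # match channel count if needed
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    return layers.Activation("relu")(layers.Add()([y, shortcut]))

def build_keypoint_model():
    inputs = layers.Input(shape=(96, 96, 1))      # 96x96 grayscale input
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = residual_block(x, 64)
    x = layers.MaxPooling2D()(x)
    x = residual_block(x, 128)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(30)(x)                 # (x, y) for the 15 key points
    return models.Model(inputs, outputs)

model = build_keypoint_model()
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
```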
- This model classifies people's emotions.
- The data contains images belonging to five categories:
0: Angry, 1: Disgust, 2: Sad, 3: Happy, 4: Surprise
- Dataset Source: Kaggle
- ResNet (Residual Network) is used as the deep learning model. ResNet's skip connections enable the training of networks up to 152 layers deep without the vanishing gradient problem (see the sketch below).
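The sketch below is one possible way to set this up, using the stock Keras ResNet50 backbone with a 5-way softmax head matching the categories above. Whether the notebook builds its own ResNet or uses ResNet50, and the dropout and learning-rate choices here, are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

def build_expression_model(num_classes=5):
    inputs = layers.Input(shape=(96, 96, 1))                # grayscale face images
    rgb = layers.Concatenate()([inputs, inputs, inputs])    # ResNet50 expects 3 channels
    backbone = ResNet50(include_top=False, weights=None, input_tensor=rgb)
    x = layers.GlobalAveragePooling2D()(backbone.output)
    x = layers.Dropout(0.3)(x)                              # assumed regularization choice
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_expression_model()
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # matches the 0.0001 reported below
    loss="categorical_crossentropy",                          # assumes one-hot encoded labels
    metrics=["accuracy"],
)
```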
The figure below shows the flowchart of our proposed methodology:
| | precision | recall | f1-score | support |
|---|---|---|---|---|
| 0 | 0.78 | 0.76 | 0.77 | 249 |
| 1 | 1.00 | 0.73 | 0.84 | 26 |
| 2 | 0.79 | 0.83 | 0.81 | 312 |
| 3 | 0.92 | 0.94 | 0.93 | 434 |
| 4 | 0.96 | 0.88 | 0.91 | 208 |
| accuracy | | | 0.86 | 1229 |
| macro avg | 0.89 | 0.83 | 0.85 | 1229 |
| weighted avg | 0.86 | 0.86 | 0.86 | 1229 |
- Performance after training the model for 500 epochs with a learning rate of 0.0001.
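For reference, a report in the format of the table above can be produced with scikit-learn once test-set predictions are available; the variable names below are placeholders, not names from the notebook.

```python
import numpy as np
from sklearn.metrics import classification_report

def report(y_true, y_prob):
    """Print per-class precision/recall/F1 for the 5 emotion classes.

    y_true: integer labels (0-4) of the test images
    y_prob: softmax outputs from model.predict(...), shape (n_samples, 5)
    """
    y_pred = np.argmax(y_prob, axis=1)  # most probable class per image
    names = ["Angry", "Disgust", "Sad", "Happy", "Surprise"]
    print(classification_report(y_true, y_pred, target_names=names))
```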
- Download the dataset from the Gdrive link above.
- Clone the repo.
- Run Facial Expression Recognition.ipynb on Google Colab or in a local Jupyter notebook.