This project is part of Coursera's Guided Project - Facial Expression Recognition with Keras.
In this project, we will build and train a convolutional neural network (CNN) in Keras from scratch to recognize facial expressions. The data consists of 48x48 pixel grayscale images of faces. The objective is to classify each face based on the emotion shown in the facial expression into one of seven categories (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral). We will use OpenCV to automatically detect faces in images and draw bounding boxes around them. Once we have trained, saved, and exported the CNN, we will directly serve the trained model to a web interface and perform real-time facial expression recognition on video and image data.
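As a rough illustration of the kind of network described above, here is a minimal Keras CNN sketch for 48x48 grayscale inputs and seven emotion classes. The layer sizes and counts here are illustrative assumptions, not the exact architecture used in the training notebook.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D,
                                     Flatten, Dense, Dropout)

# Illustrative CNN: 48x48x1 grayscale face -> 7 emotion probabilities.
model = Sequential([
    Input(shape=(48, 48, 1)),
    Conv2D(32, (3, 3), activation='relu', padding='same'),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu', padding='same'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),                      # regularization against overfitting
    Dense(7, activation='softmax'),    # one probability per emotion class
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

The softmax output maps directly onto the seven labels (0=Angry through 6=Neutral); the predicted class is the argmax of the output vector.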
The project can be broadly divided into two parts:
- Build and train a model in Keras for Facial Expression Recognition.
- Deploy the model to the web using Flask and run it on videos.
- Extract train and test images from the `data.rar` file.
- Install dependencies:

  ```
  pip install pipenv
  pipenv install
  ```

- Run the Jupyter notebook for training; `model.json` and `model_weights.h5` files will be created after training.
- Add the correct path to the video file in `camera.py` on line 11.
- Now run:

  ```
  pipenv run python3 main.py
  ```
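At startup, the app can rebuild the trained network from the two files the notebook writes: the architecture (`model.json`) and the weights (`model_weights.h5`). The standard Keras pattern for this is `model_from_json` followed by `load_weights`; the sketch below uses a tiny stand-in model rather than the real trained files, so the file-loading call is shown only as a comment:

```python
from tensorflow.keras.models import Sequential, model_from_json
from tensorflow.keras.layers import Input, Dense

# Tiny stand-in model; the real architecture comes from the notebook.
model = Sequential([Input(shape=(48 * 48,)), Dense(7, activation='softmax')])

# The notebook saves model.to_json() into model.json; at serving time the
# app reconstructs the architecture from that JSON string.
architecture = model.to_json()
restored = model_from_json(architecture)

# With the real files present, the trained weights would then be applied:
# restored.load_weights('model_weights.h5')
```

Keeping architecture and weights in separate files lets the web app load the model without importing any of the training code.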