In this project, we developed a model that detects sign language from MediaPipe Holistic keypoints using a stacked LSTM network. People with speech impairments communicate through hand signs, and people who do not know sign language often have difficulty understanding them. Systems that recognise different signs and convey their meaning to others are therefore needed.
The goal of our project is to build a sensor-free virtual talking system for people in need, using image processing of human hand gestures as input. It primarily benefits people who are otherwise unable to communicate verbally.
The following are the steps for implementation (a short illustrative sketch for each step follows the list):
Install and Import Dependencies
Detect Face, Hand and Pose Landmarks
Set Up Folders for Data Collection
Preprocess Data and Create Labels
Build and Train an LSTM Deep Learning Model
Make Sign Language Predictions
Save Model Weights
Evaluate Using a Confusion Matrix
Test in Real Time
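A minimal sketch of the dependency setup; the exact package list is an assumption based on the tools named above (MediaPipe for landmarks, OpenCV for video, TensorFlow/Keras for the LSTM):

```python
# Packages inferred from the tools used in this project (versions not specified):
#   pip install tensorflow opencv-python mediapipe scikit-learn matplotlib

import cv2          # webcam capture and display
import numpy as np  # keypoint arrays
import mediapipe as mp  # holistic landmark detection
```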
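For landmark detection, a sketch of the usual MediaPipe Holistic pattern: convert the BGR webcam frame to RGB, run the model, and overlay the detected landmarks. The helper names (`mediapipe_detection`, `draw_landmarks`) are illustrative, not taken from the project:

```python
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic
mp_drawing = mp.solutions.drawing_utils

def mediapipe_detection(image, model):
    # MediaPipe expects RGB input; OpenCV delivers BGR frames.
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image.flags.writeable = False   # read-only pass improves performance
    results = model.process(image)  # run the holistic model
    image.flags.writeable = True
    image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
    return image, results

def draw_landmarks(image, results):
    # Overlay pose and hand landmarks (draw_landmarks safely skips parts
    # that were not detected; face drawing omitted here for brevity).
    mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_holistic.POSE_CONNECTIONS)
    mp_drawing.draw_landmarks(image, results.left_hand_landmarks, mp_holistic.HAND_CONNECTIONS)
    mp_drawing.draw_landmarks(image, results.right_hand_landmarks, mp_holistic.HAND_CONNECTIONS)
```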
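For data collection, one plausible layout (the folder name `MP_Data`, the example signs, and the 30-sequence/30-frame counts are assumptions): flatten each frame's landmarks into a single keypoint vector, and create one folder per sign and per recorded sequence:

```python
import os
import numpy as np

def extract_keypoints(results):
    # Flatten pose (33 x 4), face (468 x 3) and hand (21 x 3 each) landmarks,
    # padding with zeros when a part is not detected -> 1662 values per frame.
    pose = np.array([[lm.x, lm.y, lm.z, lm.visibility] for lm in results.pose_landmarks.landmark]).flatten() if results.pose_landmarks else np.zeros(33 * 4)
    face = np.array([[lm.x, lm.y, lm.z] for lm in results.face_landmarks.landmark]).flatten() if results.face_landmarks else np.zeros(468 * 3)
    lh = np.array([[lm.x, lm.y, lm.z] for lm in results.left_hand_landmarks.landmark]).flatten() if results.left_hand_landmarks else np.zeros(21 * 3)
    rh = np.array([[lm.x, lm.y, lm.z] for lm in results.right_hand_landmarks.landmark]).flatten() if results.right_hand_landmarks else np.zeros(21 * 3)
    return np.concatenate([pose, face, lh, rh])

DATA_PATH = os.path.join('MP_Data')                  # hypothetical data folder
actions = np.array(['hello', 'thanks', 'iloveyou'])  # example signs (assumption)
no_sequences = 30     # recorded videos per sign
sequence_length = 30  # frames per video

for action in actions:
    for sequence in range(no_sequences):
        os.makedirs(os.path.join(DATA_PATH, action, str(sequence)), exist_ok=True)
```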
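Preprocessing then loads the saved keypoint sequences into arrays and builds one-hot labels, reusing `actions`, `DATA_PATH` and the counts from the sketch above; the 5% test split is an assumption:

```python
import os
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical

label_map = {label: num for num, label in enumerate(actions)}

sequences, labels = [], []
for action in actions:
    for sequence in range(no_sequences):
        # Each sequence is a window of per-frame keypoint vectors saved as .npy files.
        window = [np.load(os.path.join(DATA_PATH, action, str(sequence), f'{frame_num}.npy'))
                  for frame_num in range(sequence_length)]
        sequences.append(window)
        labels.append(label_map[action])

X = np.array(sequences)                 # shape: (num_samples, 30, 1662)
y = to_categorical(labels).astype(int)  # one-hot labels, one column per sign
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05)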
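The project description only says "LSTM layered model", so the layer sizes and training settings below are assumptions; a common choice is stacked LSTM layers followed by dense layers and a softmax over the signs:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential([
    LSTM(64, return_sequences=True, activation='relu', input_shape=(30, 1662)),
    LSTM(128, return_sequences=True, activation='relu'),
    LSTM(64, return_sequences=False, activation='relu'),  # final LSTM emits one vector
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    Dense(len(actions), activation='softmax'),  # probability per sign
])
model.compile(optimizer='Adam', loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])
model.fit(X_train, y_train, epochs=200)  # epoch count is an assumption
```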
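Making predictions and saving the trained weights can then look like this (the filename `action.h5` is illustrative):

```python
import numpy as np

# Predict on the held-out test sequences and compare one sample to ground truth.
res = model.predict(X_test)
print('Predicted:', actions[np.argmax(res[0])])
print('Actual:   ', actions[np.argmax(y_test[0])])

# Persist the trained model for later use.
model.save('action.h5')
# model.load_weights('action.h5')  # restore in a later session
```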
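For evaluation, scikit-learn's `multilabel_confusion_matrix` yields one 2x2 confusion matrix per sign; a sketch:

```python
import numpy as np
from sklearn.metrics import multilabel_confusion_matrix, accuracy_score

yhat = np.argmax(model.predict(X_test), axis=1)  # predicted class indices
ytrue = np.argmax(y_test, axis=1)                # true class indices

print(multilabel_confusion_matrix(ytrue, yhat))  # one 2x2 matrix per sign
print(accuracy_score(ytrue, yhat))               # overall accuracy
```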
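Finally, a sketch of the real-time test, reusing the helpers and model from the sketches above: keep a rolling window of the last 30 frames of keypoints and predict once the window is full (the 0.5 confidence threshold is an assumption):

```python
import cv2
import numpy as np

sequence = []    # rolling window of keypoint vectors
threshold = 0.5  # minimum confidence to display a prediction (assumption)

cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        image, results = mediapipe_detection(frame, holistic)
        sequence.append(extract_keypoints(results))
        sequence = sequence[-30:]  # keep only the most recent 30 frames
        if len(sequence) == 30:
            res = model.predict(np.expand_dims(sequence, axis=0))[0]
            if res[np.argmax(res)] > threshold:
                # Overlay the recognised sign on the video feed.
                cv2.putText(image, actions[np.argmax(res)], (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
        cv2.imshow('Sign Language Detection', image)
        if cv2.waitKey(10) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()
```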