Real-time Sign Language Gesture Recognition Using 1DCNN + Transformers on MediaPipe landmarks

Table of Contents

  • Overview
  • Prerequisites
  • Installation
  • Usage
  • Models
  • Directory Structure
  • Contributing
  • Acknowledgments
  • License

Overview

Sign language is an important means of communication for individuals with hearing impairments. This project builds a real-time sign language gesture recognition system using deep learning. The system combines 1D Convolutional Neural Networks (1DCNN) and Transformers to recognize sign language gestures from hand landmarks extracted with MediaPipe.
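
As a rough illustration (not the repository's exact preprocessing; the landmark ordering and counts below are assumptions based on MediaPipe Holistic's standard output, while the actual selection lives in src/landmarks_extraction.py), per-frame landmarks can be flattened into a fixed-size array before being fed to the 1DCNN + Transformer:

# Sketch only: flatten one frame of MediaPipe Holistic results into an (N, 3) array.
import numpy as np

def frame_to_landmarks(results):
    """Stack face, hand, and pose landmarks from one Holistic result into a (543, 3) array."""
    def to_array(landmark_list, count):
        if landmark_list is None:
            return np.zeros((count, 3), dtype=np.float32)   # missing part -> zero-filled block
        return np.array([[p.x, p.y, p.z] for p in landmark_list.landmark], dtype=np.float32)

    face = to_array(results.face_landmarks, 468)
    left_hand = to_array(results.left_hand_landmarks, 21)
    pose = to_array(results.pose_landmarks, 33)
    right_hand = to_array(results.right_hand_landmarks, 21)
    return np.concatenate([face, left_hand, pose, right_hand], axis=0)   # (543, 3)

Stacking these per-frame arrays over consecutive frames yields the sequence that the temporal model consumes.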

Prerequisites

  • Python 3.9
  • NumPy
  • Pandas
  • Matplotlib
  • OpenCV
  • MediaPipe
  • TensorFlow
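
To confirm the stack imports cleanly before running anything, a quick check like the one below can help (the printed versions are whatever your environment resolves; the exact pins come from requirements.txt):

# Quick sanity check that the core dependencies are importable.
import cv2
import mediapipe as mp
import numpy as np
import pandas as pd
import tensorflow as tf

print("OpenCV:", cv2.__version__)
print("MediaPipe:", mp.__version__)
print("NumPy:", np.__version__)
print("Pandas:", pd.__version__)
print("TensorFlow:", tf.__version__)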

Installation

  1. Clone this repository to your local machine using either the HTTPS or SSH link provided on the repository's GitHub page. To clone via HTTPS, run:
git clone https://github.com/209sontung/sign-language
  2. Once the repository is cloned, navigate to the root directory of the project:
cd sign-language
  3. It is recommended to create a virtual environment to isolate the dependencies of this project. You can create one using the venv module. Run the following command to create a virtual environment named "venv":
python3 -m venv venv
  4. Activate the virtual environment. The activation steps depend on the operating system you're using:
  • For Windows:
venv\Scripts\activate
  • For macOS/Linux:
source venv/bin/activate
  5. Install the required dependencies by running the following command:
pip install -r requirements.txt
  6. Once the installation is complete, you're ready to use the real-time sign language gesture recognition system.

Note: Make sure you have a webcam connected to your machine or available on your device for capturing hand gestures.
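
A quick way to verify OpenCV can reach your camera (device index 0 is an assumption; pass a different index if you have multiple cameras):

# Webcam sanity check: opens the default camera and grabs a single frame.
import cv2

cap = cv2.VideoCapture(0)
ok, _ = cap.read()
print("Webcam frame captured:", ok)
cap.release()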

You have now installed the system and are ready to use it for real-time sign language gesture recognition. Refer to the Usage section below for instructions on how to run it.

Usage

To use the sign language gesture recognition system, follow these steps:

  1. Ensure that you have installed all the required dependencies (see Installation).

  2. Run the main.py file, which contains the main script for real-time gesture recognition:

python main.py

  3. The system will start capturing your hand gestures through the webcam and display the recognized gestures in real time.
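
Conceptually, the recognition loop follows the pattern sketched below. This is not the exact code in main.py; the window length, the model and index_to_sign variables, and the frame_to_landmarks helper (from the Overview sketch) are assumptions used for illustration:

# Simplified real-time loop: webcam frame -> MediaPipe landmarks -> model -> label overlay.
# Assumes `model` is a loaded tf.keras.Model (see Models) and `index_to_sign` maps
# class index -> sign name (e.g. built from src/sign_to_prediction_index_map.json).
import cv2
import mediapipe as mp
import numpy as np

SEQ_LEN = 32          # assumed sliding-window length
buffer = []           # most recent landmark frames

cap = cv2.VideoCapture(0)
with mp.solutions.holistic.Holistic() as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        buffer.append(frame_to_landmarks(results))   # helper from the Overview sketch
        buffer = buffer[-SEQ_LEN:]

        if len(buffer) == SEQ_LEN:
            probs = model.predict(np.stack(buffer)[None, ...], verbose=0)
            label = index_to_sign[int(np.argmax(probs))]
            cv2.putText(frame, label, (10, 40), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)

        cv2.imshow("Sign Language Recognition", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
            break

cap.release()
cv2.destroyAllWindows()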

Models

The repository includes pre-trained models for sign language gesture recognition. The following models are available in the models directory:

  • islr-fp16-192-8-seed42-fold0-best.h5: Best model weights for fold 0.
  • islr-fp16-192-8-seed42-fold0-last.h5: Last model weights for fold 0.
  • islr-fp16-192-8-seed_all42-foldall-last.h5: Last model weights for all folds.
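
These .h5 files hold trained weights, so the usual TensorFlow workflow is to rebuild the architecture first and then load a weight file onto it. A minimal sketch, assuming src/backbone.py exposes a builder function (get_model is a placeholder name, not a confirmed API):

# Sketch: rebuild the network from the backbone module, then restore trained weights.
from src.backbone import get_model   # placeholder name; use the actual builder in src/backbone.py

model = get_model()
model.load_weights("models/islr-fp16-192-8-seed42-fold0-best.h5")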

Directory Structure

The directory structure of this repository is as follows:

├─ .gitignore
├─ LICENSE
├─ README.md
├─ main.py
├─ models
│  ├─ islr-fp16-192-8-seed42-fold0-best.h5
│  ├─ islr-fp16-192-8-seed42-fold0-last.h5
│  └─ islr-fp16-192-8-seed_all42-foldall-last.h5
├─ requirements.txt
├─ Pipfile
├─ Pipfile.lock
└─ src
   ├─ backbone.py
   ├─ config.py
   ├─ landmarks_extraction.py
   ├─ sign_to_prediction_index_map.json
   └─ utils.py

Contributing

Contributions to this project are welcome. If you find any issues or have suggestions for improvements, please open an issue or submit a pull request.

Acknowledgments

Sincere gratitude is extended to @hoyso48 for providing the idea and initial implementation of the backbone model.

License

This project is licensed under the MIT License. Feel free to use and modify the code as per the terms of the license.
