This project combines FasterWhisper for live transcription with Hugging Face's sentiment-analysis pipeline, providing real-time feedback on both the transcribed text and the emotional tone of the spoken content, all wrapped in a user-friendly Gradio interface for seamless interaction.
The application listens to audio input, transcribes it to text, and analyzes the sentiment of each sentence in real time. It is designed to give instant feedback, making it useful across industries such as customer service, healthcare, education, and media.
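Below is a minimal sketch of that core loop, assuming the `faster-whisper` and `transformers` packages; the model size, device settings, and file name are illustrative assumptions, not the project's actual configuration in `app.py`:

```python
from faster_whisper import WhisperModel
from transformers import pipeline

# Small Whisper model on CPU for illustration; the project may use another size/device.
whisper_model = WhisperModel("base", device="cpu", compute_type="int8")

# The default sentiment pipeline (DistilBERT fine-tuned on SST-2) labels text
# POSITIVE or NEGATIVE; a different model would be needed for an explicit NEUTRAL label.
sentiment = pipeline("sentiment-analysis")

def transcribe_and_analyze(audio_path: str):
    """Transcribe an audio file and attach a sentiment label to each segment."""
    segments, _info = whisper_model.transcribe(audio_path)
    results = []
    for segment in segments:
        text = segment.text.strip()
        label = sentiment(text)[0]  # e.g. {"label": "POSITIVE", "score": 0.99}
        results.append((text, label["label"], round(label["score"], 3)))
    return results

# Example: transcribe_and_analyze("sample.wav")
```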
- Real-Time Transcription: Using FasterWhisper for accurate and fast transcription.
- Sentiment Analysis: Hugging Face's pipeline to detect positive, negative, or neutral sentiment.
- Live Gradio Interface: An easy-to-use interface for real-time interaction and output display (a sketch of this wrapper follows this list).
- Customizable Outputs: Modify the application to suit specific use cases in customer service, healthcare, media, etc.
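As a rough illustration of the live interface, the Gradio wrapper could look something like the sketch below; it assumes Gradio 4.x and reuses the hypothetical `transcribe_and_analyze` helper from the previous sketch, so component choices and labels are assumptions rather than the contents of `app.py`:

```python
import gradio as gr

def analyze(audio_path):
    # Reuse the transcription + sentiment helper sketched above (hypothetical name).
    lines = transcribe_and_analyze(audio_path)
    return "\n".join(f"[{label} {score}] {text}" for text, label, score in lines)

demo = gr.Interface(
    fn=analyze,
    inputs=gr.Audio(sources=["microphone", "upload"], type="filepath"),
    outputs=gr.Textbox(label="Transcription with sentiment"),
    title="Real-Time Audio Transcription + Sentiment",
)

demo.launch()  # prints a local URL, http://127.0.0.1:7860 by default
```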
Demo video: GIThub.Trancription.mp4
- Clone the repository:
  git clone https://github.com/tobibiggest/real-time-audio-transcription-sentiment.git
- Navigate to the project folder:
  cd real-time-audio-transcription-sentiment
- Install dependencies (see the note after this list):
  pip install -r requirements.txt
- Run the application:
  python app.py
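As noted in the install step above, the exact dependencies live in `requirements.txt`; the list below is only a hedged guess based on the stack used by the project, not the real file, so always install from the repository's own `requirements.txt`:

```text
# Assumed core dependencies (unpinned); check requirements.txt for the actual versions.
faster-whisper
transformers
torch
gradio
```

Once `python app.py` is running, Gradio prints a local URL (http://127.0.0.1:7860 by default) that you can open in a browser to interact with the app.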
- Python
- FasterWhisper
- Hugging Face Transformers
- Gradio
- Multilingual transcription support
- Advanced emotion detection
- Video integration for analyzing visual and audio data
- Custom sentiment analysis for specific industries
Feel free to submit issues or pull requests to improve the project! Contributions are always welcome.
This project is licensed under the MIT License. See the LICENSE file for more information.
Special thanks to the Hugging Face and OpenAI teams for providing the tools to build this project.