Bridging the Gap: Real-Time Sign Language Translation
Author(s):
Prof. S. R. Karjol, Mr. Mohammed Mujtaba M Mulla, Mr. Abhishek Channabasavaraj Kinagi, Mr. Avaneesh N Devareddi, Mr. Prajwal Siddappa Hallur
Keywords:
Deep Learning, Sign Language Recognition, Communication Accessibility, Real-Time Translation, Wearable Technology.
Abstract
This research explores the development of a real-time sign language translation mobile application designed to bridge the communication gap for deaf and mute individuals. Leveraging machine learning algorithms and computer vision techniques, the application captures sign language gestures and translates them into text, enhancing accessibility and inclusivity. The study reviews existing systems such as Microsoft Kinect and the SignAloud gloves, as well as academic efforts that employ convolutional neural networks (CNNs) and long short-term memory (LSTM) networks to improve sign language recognition accuracy. Key features of the proposed solution include real-time video processing, accurate text translation, and a user-friendly interface, with a strong focus on privacy and security. Through meticulous development and rigorous testing, the application aims to enable deaf individuals to communicate seamlessly with the broader population, fostering greater social inclusion and participation. The project not only addresses immediate communication barriers but also lays the groundwork for future advances in assistive technologies, contributing to a more inclusive society.
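The pipeline the abstract outlines — capture video frames, buffer a gesture window, classify it, and emit text — can be sketched in outline. This is a hypothetical sketch, not the authors' implementation: the `classify_window` stub stands in for the CNN/LSTM recogniser, the synthetic frame stream stands in for the camera feed, and the window and stride sizes are assumed values.

```python
from collections import deque

WINDOW = 16   # frames per gesture window (assumed)
STRIDE = 16   # hop between successive classifications (assumed)

def classify_window(frames):
    # Stand-in for the CNN/LSTM recogniser: real systems would
    # run the buffered frames through the trained model. Here we
    # simply return the most frequent synthetic "frame label".
    return max(set(frames), key=frames.count)

def translate_stream(frame_source):
    """Slide a fixed-size window over the frame stream and
    emit one text token per classified gesture window."""
    buffer = deque(maxlen=WINDOW)
    tokens = []
    for i, frame in enumerate(frame_source):
        buffer.append(frame)
        if len(buffer) == WINDOW and (i + 1 - WINDOW) % STRIDE == 0:
            tokens.append(classify_window(list(buffer)))
    return tokens

# Synthetic stream: 16 frames of one gesture, then 16 of another.
stream = ["HELLO"] * 16 + ["THANKS"] * 16
print(translate_stream(stream))  # → ['HELLO', 'THANKS']
```

The sliding-window structure is what makes the translation real-time: the model never waits for the full video, only for the next buffered window.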
Article Details
Unique Paper ID: 165012
Publication Volume & Issue: Volume 10, Issue 12
Page(s): 2827 - 2834