Vishwas H S, Sourav Nagesh, Suhruth R, Sai Nagesh C H
Keywords: OpenCV, Python, facial recognition, LSTM, SVM, RNN, ANN.
Abstract
Deaf and mute individuals, who make up approximately 5% of the global population, often rely on sign language to communicate with others. However, many of them may not have access to sign language and can therefore feel disconnected from those around them. To address this communication gap, a prototype assistive medium has been designed that allows individuals to communicate using hand gestures: the system recognizes different characters from the gestures and converts them to text in real time. It combines image processing techniques with deep learning models for gesture recognition. Hand gestures are an essential part of vision-based gesture recognition technology and have the potential to facilitate human-machine interaction. The design involves tracking, segmentation, gesture acquisition, feature extraction, gesture recognition, and text conversion, each of which is a critical step. Overall, this technology can help bridge the communication gap between deaf and mute individuals and those who can hear and speak.
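To make the described pipeline concrete, the sketch below shows one way such a real-time loop could be structured in Python with OpenCV: a skin-colour threshold segments the hand inside a fixed region of interest, a flattened binary mask serves as the feature vector, and a pre-trained classifier (here a scikit-learn SVM loaded from a hypothetical "gesture_svm.pkl") maps it to a character. The ROI, HSV thresholds, feature size, and model choice are assumptions for illustration and not the paper's exact configuration.

```python
# Minimal sketch of a gesture-to-text loop, assuming OpenCV, NumPy, and a
# pre-trained scikit-learn SVM saved with joblib. The file name
# "gesture_svm.pkl", the ROI, and the HSV skin-colour range are illustrative
# assumptions, not values taken from the paper.
import cv2
import numpy as np
import joblib

ROI = (100, 100, 300, 300)  # x, y, width, height of the hand-capture window


def segment_hand(frame):
    """Isolate the hand inside the ROI with a rough skin-colour HSV threshold."""
    x, y, w, h = ROI
    roi = frame[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))


def extract_features(mask):
    """Resize the binary mask and flatten it into a fixed-length feature vector."""
    small = cv2.resize(mask, (32, 32))
    return (small.flatten() / 255.0).reshape(1, -1)


def main():
    clf = joblib.load("gesture_svm.pkl")  # hypothetical pre-trained classifier
    cap = cv2.VideoCapture(0)
    text = ""
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = segment_hand(frame)
        pred = str(clf.predict(extract_features(mask))[0])
        x, y, w, h = ROI
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, f"pred: {pred}  text: {text[-20:]}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 0, 0), 2)
        cv2.imshow("gesture to text", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord(" "):  # press space to append the current character
            text += pred
        elif key == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```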
Article Details
Unique Paper ID: 160100
Publication Volume & Issue: Volume 9, Issue 12
Page(s): 1321 - 1329