Realtime Hand Gesture Recognition using LSTM model and Conversion into Speech
Sakshi Mankar, Kanishka Mohapatra, Ashwin Avate, Mansi Talavadekar, Surendra Sutar
Keywords: Gestures, LSTM neural network, ReLU activation function, sign language, TensorFlow.
Communication is essential to everyday life, yet specially abled people with speech or hearing impairments ("mute" and "deaf" people, respectively) depend on some form of visual communication. Sign language is well established among them and is their primary means of expression, but people without such impairments often cannot communicate with specially abled people because they have not learned sign language. Achieving two-way communication between specially abled people and the general public therefore requires a system that can interpret gestures into text and speech. Vision-based hand gesture recognition is an important part of human-computer interaction: hand gestures, which express a notion through distinct shapes and finger positions, offer clear scope for human-machine interaction. The major steps in designing the system are gesture acquisition, tracking, segmentation, feature extraction, gesture recognition, and conversion into speech.
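To make the recognition step concrete, the following is a minimal sketch of the kind of LSTM classifier the abstract describes, built with TensorFlow/Keras. The input shapes (30 frames per gesture, 63 keypoint coordinates per frame) and the number of gesture classes are illustrative assumptions, not values taken from the paper; the use of ReLU activations and stacked LSTM layers follows the keywords above.

```python
import numpy as np
import tensorflow as tf

def build_gesture_model(num_frames=30, num_features=63, num_classes=10):
    """Sketch of an LSTM gesture classifier.

    Assumed shapes (hypothetical, for illustration only):
    - num_frames: frames sampled per gesture sequence
    - num_features: extracted hand-keypoint coordinates per frame
    - num_classes: number of sign-language gestures to recognize
    """
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(num_frames, num_features)),
        # Stacked LSTM layers with ReLU activation, as named in the keywords
        tf.keras.layers.LSTM(64, return_sequences=True, activation="relu"),
        tf.keras.layers.LSTM(32, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        # Softmax over gesture classes
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_gesture_model()
# One batch of two random gesture sequences -> class probabilities
probs = model.predict(np.random.rand(2, 30, 63).astype("float32"), verbose=0)
```

The predicted class label could then be mapped to a phrase and passed to a text-to-speech engine (e.g. a library such as pyttsx3) for the final conversion-into-speech step.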
Article Details
Unique Paper ID: 154130

Publication Volume & Issue: Volume 8, Issue 10

Page(s): 120 - 124