Sign Language Identification Using Deep Learning
Author(s):
Nadeem Gulam, Neha Pandey, Medha R, Rakshith Vk, Vasudeva G
Keywords:
Convolutional Neural Networks, Sign Language, speech disability, speech impairment
Abstract
One of the hardest problems faced by people with disabilities is speech and hearing impairment. The proposed system is a platform that enables communication between people with speech impairments and the rest of the world. Sign language serves as a means of communication for those who are unable to speak, but it is difficult for others to understand; as a result, this minority of the community is unable to carry out even the most basic tasks. The proposed system therefore converts Sign Language (SL) into spoken and written output. Convolutional Neural Networks (CNNs) are used to extract effective hand features and to recognise hand gestures corresponding to Sign Language. With this model, persons with speech impairments can have their hand movements recognised and translated into text and voice, allowing them to communicate effectively.
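The recognition stage can be illustrated with a minimal sketch, assuming a small Keras CNN that classifies fixed-size grayscale hand-gesture images into sign classes. The class count, image size, layer sizes, and the build_sign_cnn helper below are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a CNN-based sign classifier (assumed architecture,
# not the paper's exact model).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26          # assumption: one class per static alphabet sign
IMG_SIZE = (64, 64)       # assumption: inputs resized to 64x64 grayscale


def build_sign_cnn(num_classes=NUM_CLASSES, input_shape=(*IMG_SIZE, 1)):
    """Stack of Conv/Pool blocks followed by dense layers for classification."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


if __name__ == "__main__":
    model = build_sign_cnn()
    model.summary()
```

In such a pipeline, the predicted class label would then be rendered as text and passed to a text-to-speech engine to produce the spoken output described in the abstract.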
Article Details
Unique Paper ID: 159772

Publication Volume & Issue: Volume 9, Issue 12

Page(s): 732 - 740