SIGN LANGUAGE DETECTION

  • Unique Paper ID: 159899
  • Volume: 9
  • Issue: 12
  • PageNo: 1174-1182
  • Abstract:
  • Deaf and mute individuals, who make up approximately 5% of the global population, often rely on sign language to communicate with others. However, many of them may not have access to sign language, causing them to feel disconnected from others. To address this communication gap, a prototype for an assistive medium has been designed that allows individuals to communicate using hand gestures to recognize different characters, which are then converted to text in real time. This system utilizes various image processing techniques and deep learning models for gesture recognition. Hand gestures have the potential to facilitate human-machine interaction and are an essential part of vision-based gesture recognition technology. The system involves tracking, segmentation, gesture acquisition, feature extraction, gesture recognition, and text conversion, all of which are critical steps in the design process. Overall, this technology has the potential to help bridge the communication gap between deaf and mute individuals and those who can hear and speak.
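
The pipeline stages named in the abstract (segmentation, feature extraction, gesture recognition, text conversion) can be illustrated with a minimal sketch. This is not the authors' implementation: the thresholding segmenter, the toy feature vector, the nearest-template classifier, and the character templates below are all hypothetical stand-ins, and a synthetic NumPy frame replaces a live camera feed.

```python
import numpy as np

def segment_hand(frame, threshold=128):
    """Naive segmentation: binary mask of bright pixels
    (a stand-in for skin-colour or learned segmentation)."""
    return (frame > threshold).astype(np.uint8)

def extract_features(mask):
    """Toy features: foreground fraction plus the normalised
    centroid of the segmented region."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return np.zeros(3)
    h, w = mask.shape
    return np.array([mask.mean(), ys.mean() / h, xs.mean() / w])

def recognize(features, templates):
    """Nearest-template classifier: map the feature vector to the
    character whose (hypothetical) template is closest."""
    return min(templates, key=lambda ch: np.linalg.norm(features - templates[ch]))

# Synthetic 64x64 "frame" with a bright blob in the upper-left quadrant
frame = np.zeros((64, 64))
frame[8:24, 8:24] = 255

mask = segment_hand(frame)
feats = extract_features(mask)
# Hypothetical feature templates for two characters
templates = {"A": np.array([0.0625, 0.25, 0.25]),
             "B": np.array([0.0625, 0.75, 0.75])}
print(recognize(feats, templates))  # blob centroid lies nearest template "A"
```

In the paper's setting the segmenter and classifier would be replaced by the image-processing and deep-learning components (e.g. an LSTM or SVM per the keywords), and the recognized characters would be accumulated into text.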

Copyright & License

Copyright © 2025. Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{159899,
        author = {Vishwas H S and Suhruth R and Sourav Nagesh and Sai Nagesh C H and Manasa Sandeep},
        title = {SIGN LANGUAGE DETECTION},
        journal = {International Journal of Innovative Research in Technology},
        year = {},
        volume = {9},
        number = {12},
        pages = {1174-1182},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=159899},
        abstract = {Deaf and mute individuals, who make up approximately 5% of the global population, often rely on sign language to communicate with others. However, many of them may not have access to sign language, causing them to feel disconnected from others. To address this communication gap, a prototype for an assistive medium has been designed that allows individuals to communicate using hand gestures to recognize different characters, which are then converted to text in real time. This system utilizes various image processing techniques and deep learning models for gesture recognition. Hand gestures have the potential to facilitate human-machine interaction and are an essential part of vision-based gesture recognition technology. The system involves tracking, segmentation, gesture acquisition, feature extraction, gesture recognition, and text conversion, all of which are critical steps in the design process. Overall, this technology has the potential to help bridge the communication gap between deaf and mute individuals and those who can hear and speak.},
        keywords = {OpenCV, Python, facial recognition, LSTM, SVM, RNN, ANN.},
        month = {},
        }
