Hand Gesture Recognition and Text Conversion Using Convolutional Neural Networks

  • Unique Paper ID: 165086
  • Volume: 11
  • Issue: 1
  • PageNo: 57-64
  • Abstract:
  • Sign language is an important means of communication for people with speech disabilities, but it poses significant challenges for non-signers owing to a widespread lack of interpreters and awareness. This paper describes the development of a hand-sign understanding and translation system that uses convolutional neural networks (CNNs) to bridge the communication gap between hearing and deaf communities. Our research follows a three-step methodology: data collection, model training, and extensive evaluation. Using a custom CNN architecture, our system detects hand gestures and converts them into text in real time, providing a complete communication solution. The methodology includes a dataset specially curated for this purpose, and the training phase uses the MNIST dataset to initially calibrate the model. Our system achieves 95.7% accuracy in recognizing the 26 letters of the American Sign Language (ASL) alphabet, demonstrating its potential to facilitate seamless communication between signers and non-signers. This advance highlights the promising application of deep learning methods to improving accessibility and inclusion in deaf and hearing communities.
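The custom CNN described in the abstract rests on 2D convolution as its core operation. As a minimal illustrative sketch in pure Python (not the authors' implementation; the edge filter and tiny test image below are invented for demonstration), a single valid-mode convolution looks like this:

```python
# Minimal 2D "valid" convolution, the core operation of a CNN layer.
# Illustrative only -- a real model stacks many learned filters with
# nonlinearities and pooling before a 26-way classification head.

def conv2d_valid(image, kernel):
    """Convolve a 2D image with a 2D kernel (no padding, stride 1).

    Uses the cross-correlation convention common in deep learning
    frameworks (no kernel flip).
    """
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
    return out

# A 3x3 vertical-edge (Sobel-style) filter responds strongly wherever a
# dark-to-bright step occurs -- the kind of low-level feature early CNN
# layers learn from hand images.
img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
sobel_x = [
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
]
print(conv2d_valid(img, sobel_x))  # strong uniform response at the edge
```

In a trained network, many such kernels are learned from data rather than hand-designed, and their stacked responses feed the final letter classifier.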

Copyright & License

Copyright © 2025 Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{165086,
        author = {Harsh Kshirsagar and Jyotsna Nanajkar and Nikita Shinde and Aniket Shelke and Manoj Thombre},
        title = {Hand Gesture Recognition and Text Conversion Using Convolutional Neural Networks},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {11},
        number = {1},
        pages = {57--64},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=165086},
        abstract = {Sign language is an important means of communication for people with speech disabilities, but it poses significant challenges for non-signers owing to a widespread lack of interpreters and awareness. This paper describes the development of a hand-sign understanding and translation system that uses convolutional neural networks (CNNs) to bridge the communication gap between hearing and deaf communities. Our research follows a three-step methodology: data collection, model training, and extensive evaluation. Using a custom CNN architecture, our system detects hand gestures and converts them into text in real time, providing a complete communication solution. The methodology includes a dataset specially curated for this purpose, and the training phase uses the MNIST dataset to initially calibrate the model. Our system achieves 95.7% accuracy in recognizing the 26 letters of the American Sign Language (ASL) alphabet, demonstrating its potential to facilitate seamless communication between signers and non-signers. This advance highlights the promising application of deep learning methods to improving accessibility and inclusion in deaf and hearing communities.},
        keywords = {Sign Language, ASL, Hearing Disability, Convolutional Neural Network (CNN), Computer Vision, Machine Learning, Gesture Recognition, Sign Language Recognition, Hue Saturation Value (HSV) Algorithm},
        }
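The entry's keywords mention a Hue Saturation Value algorithm, a common preprocessing step that isolates the hand from the background before the CNN sees the frame. A minimal stdlib-only sketch, assuming heuristic skin-tone thresholds (the values below are typical in the literature, not the paper's reported parameters):

```python
import colorsys

# Illustrative HSV-based skin segmentation. The thresholds are common
# heuristic values, NOT taken from the paper; real systems tune them
# per lighting conditions and skin tone.

HUE_MAX = 50 / 360          # skin hues cluster near red-orange
SAT_MIN, SAT_MAX = 0.23, 0.68

def is_skin(r, g, b):
    """Classify one RGB pixel (0-255 channels) as skin via HSV thresholds."""
    h, s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h <= HUE_MAX and SAT_MIN <= s <= SAT_MAX

def skin_mask(pixels):
    """Binary mask over a 2D grid of (r, g, b) tuples."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in pixels]

# Tiny 2x2 "frame": skin-toned pixels on the diagonal, background elsewhere.
frame = [
    [(224, 172, 105), (10, 10, 10)],    # skin tone, dark background
    [(30, 60, 200), (210, 160, 120)],   # blue background, skin tone
]
print(skin_mask(frame))  # 1s mark the skin-toned pixels
```

The resulting mask crops the hand region, which is then resized and fed to the CNN classifier.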
