Hand Sign Detection

  • Unique Paper ID: 174040
  • Volume: 11
  • Issue: 10
  • PageNo: 4529-4533
  • Abstract:
  • Hand sign detection is an emerging AI-driven field aimed at improving communication for individuals with hearing and voice impairments. This research introduces a deep learning-based system that recognizes hand gestures and converts them into text and voice output. The system utilizes OpenCV for image processing, MediaPipe for hand tracking, and TensorFlow/Keras to train a CNN for gesture classification. Recognized gestures are mapped to text and converted into speech using the pyttsx3 library. A dataset of labeled hand images was collected and preprocessed using augmentation techniques to enhance model accuracy. The trained CNN achieved approximately 90% classification accuracy. However, challenges such as lighting variations, hand size differences, and complex backgrounds affect recognition performance. The system currently supports a limited set of static gestures and does not handle dynamic sign language. Future improvements will focus on expanding gesture recognition, refining real-time processing, and integrating NLP for full sign language interpretation. Deployment on mobile and embedded devices is also planned for broader accessibility.
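The pipeline the abstract describes (MediaPipe hand tracking → CNN classification → text → pyttsx3 speech) can be sketched as below. This is a minimal illustration, not the authors' code: the gesture label set and the confidence threshold are assumptions, and the speech step simply wraps the pyttsx3 calls the paper names.

```python
import numpy as np

# Illustrative gesture vocabulary; the paper does not list its actual label set.
LABELS = ["hello", "yes", "no", "thanks", "stop"]

def decode_prediction(probs, labels=LABELS, threshold=0.6):
    """Map a CNN softmax output vector to a gesture label.

    Returns None when the top probability is below `threshold` --
    a simple guard against the unreliable predictions the paper
    attributes to lighting variation and complex backgrounds.
    """
    probs = np.asarray(probs, dtype=float)
    idx = int(np.argmax(probs))
    if probs[idx] < threshold:
        return None
    return labels[idx]

def speak(text):
    """Convert recognized text to speech with pyttsx3, as in the paper."""
    import pyttsx3  # offline text-to-speech engine
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
```

In the full system, the softmax vector passed to `decode_prediction` would come from running the trained Keras model on a hand region cropped via MediaPipe landmarks; a confident label is then passed to `speak`.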

Copyright & License

Copyright © 2025 Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{174040,
        author = {Aman Khan and Mohd Zaman and Waseem Khan and Khadeeja Haneef and Mohd Aqib},
        title = {Hand Sign Detection},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {11},
        number = {10},
        pages = {4529-4533},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=174040},
        abstract = {Hand sign detection is an emerging AI-driven field aimed at improving communication for individuals with hearing and voice impairments. This research introduces a deep learning-based system that recognizes hand gestures and converts them into text and voice output. The system utilizes OpenCV for image processing, MediaPipe for hand tracking, and TensorFlow/Keras to train a CNN for gesture classification. Recognized gestures are mapped to text and converted into speech using the pyttsx3 library.
A dataset of labeled hand images was collected and preprocessed using augmentation techniques to enhance model accuracy. The trained CNN achieved approximately 90% accuracy in classification. However, challenges such as lighting variations, hand size differences, and complex backgrounds affect recognition performance. The system currently supports a limited set of gestures and does not handle dynamic sign language. Future improvements will focus on expanding gesture recognition, refining real-time processing, and integrating NLP for full sign language interpretation. Deployment on mobile and embedded devices is also planned for broader accessibility.},
        keywords = {},
        month = {March},
        }
