Gesture Genius System using Deep Learning

  • Unique Paper ID: 172816
  • PageNo: 1249-1254
  • Abstract:
  • The Gesture Genius System is designed to bridge communication gaps between deaf-mute individuals and those with normal hearing and speech abilities by converting sign language gestures into text or audio. Utilizing advanced computer vision and deep learning techniques, the system captures and processes hand movements in real-time, employing convolutional neural networks (CNNs) to accurately interpret gestures and map them to corresponding text. Trained on a diverse dataset of sign language gestures, the system adapts to various environments, gesture speeds, and user differences, ensuring robust performance across different scenarios. Its real-time processing capabilities make it suitable for use in public services, education, and everyday interactions, allowing for seamless communication between sign language users and non-signers. The system’s intuitive interface ensures ease of use for both deaf-mute individuals and those unfamiliar with sign language. Future enhancements will include voice synthesis, enabling the conversion of gestures into spoken words, further expanding its potential to break communication barriers and foster inclusivity in various social settings.
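As a purely illustrative sketch of the gesture-to-text mapping the abstract describes, the snippet below shows the final classification step that turns a CNN's per-class scores into a text label. The label set, function names, and confidence threshold are assumptions for illustration, not details taken from the paper:

```python
import math

# Hypothetical gesture vocabulary; the paper's actual label set is not specified here.
GESTURE_LABELS = ["hello", "thank_you", "yes", "no", "help"]

def softmax(logits):
    """Convert raw CNN output scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def gesture_to_text(logits, threshold=0.5):
    """Map per-class CNN scores to a text label.

    Returns None when the most confident class falls below `threshold`,
    so ambiguous frames are skipped rather than mistranslated.
    """
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return GESTURE_LABELS[best] if probs[best] >= threshold else None

# Scores strongly favouring the first class map to its label:
print(gesture_to_text([4.0, 1.0, 0.5, 0.2, 0.1]))  # hello
# Near-uniform scores fall below the threshold and are rejected:
print(gesture_to_text([1.0, 1.0, 1.0, 1.0, 1.0]))  # None
```

A confidence threshold of this kind is one common way a real-time recognizer avoids emitting text for transitional hand positions between gestures.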

Copyright & License

Copyright © 2026 Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{172816,
        author = {Kowsalya C and Hemshika Harini Devi C S and Rajeswari R and Mohana M},
        title = {Gesture Genius System using Deep Learning},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {11},
        number = {9},
        pages = {1249--1254},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=172816},
        abstract = {The Gesture Genius System is designed to bridge communication gaps between deaf-mute individuals and those with normal hearing and speech abilities by converting sign language gestures into text or audio. Utilizing advanced computer vision and deep learning techniques, the system captures and processes hand movements in real-time, employing convolutional neural networks (CNNs) to accurately interpret gestures and map them to corresponding text. Trained on a diverse dataset of sign language gestures, the system adapts to various environments, gesture speeds, and user differences, ensuring robust performance across different scenarios. Its real-time processing capabilities make it suitable for use in public services, education, and everyday interactions, allowing for seamless communication between sign language users and non-signers. The system’s intuitive interface ensures ease of use for both deaf-mute individuals and those unfamiliar with sign language. Future enhancements will include voice synthesis, enabling the conversion of gestures into spoken words, further expanding its potential to break communication barriers and foster inclusivity in various social settings.},
        keywords = {Gesture Genius System, communication gaps, deaf-mute individuals, sign language translation, text conversion, computer vision, deep learning, convolutional neural networks (CNNs), real-time processing, hand gesture recognition},
        month = {February},
        }

Cite This Article

Kowsalya, C., Hemshika Harini Devi, C. S., Rajeswari, R., & Mohana, M. (2025). Gesture Genius System using Deep Learning. International Journal of Innovative Research in Technology (IJIRT), 11(9), 1249–1254.
