Deep Learning-Based Static Sign Language Interpretation Using VGG19 Architecture

  • Unique Paper ID: 177999
  • Volume: 11
  • Issue: 12
  • PageNo: 3726-3729
  • Abstract:
  • Sign language is a rich visual form of communication that relies on a mix of hand gestures and facial expressions, commonly used by people with hearing impairments. However, conventional methods of sign recognition often fall short when it comes to interpreting these complex gestures, leading to communication barriers. This project proposes a solution that translates hand gestures into both text and speech, allowing virtual assistants to use the output for improved interaction. Earlier research mostly concentrated on detecting simple, clear signs, often picking selected gestures from Indian Sign Language (ISL) for classification tasks. In this work, a deep learning technique is applied for recognizing static signs, utilizing a Convolutional Neural Network (CNN) based on the VGG19 architecture. The model is trained on ISL gesture datasets, making it capable of classifying hand signs with high precision. Once recognized, the gestures are turned into text and then converted into speech, which is saved as an audio file. This system promotes accessibility and better communication between those who use sign language and those who do not, while also pushing forward the development of assistive technologies.
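The recognition pipeline described in the abstract (a frozen VGG19 convolutional base with a new classification head for static signs) could be sketched roughly as below. This is a minimal illustration, assuming TensorFlow/Keras; `NUM_CLASSES` and the input size are placeholders, not values taken from the paper, and in practice the base would be loaded with `weights="imagenet"` rather than `weights=None`.

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG19
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # hypothetical: one class per static ISL sign

# VGG19 convolutional base without its original classifier head.
# weights=None here only to keep the sketch offline; use "imagenet" for transfer learning.
base = VGG19(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze pretrained features

# New head that maps VGG19 features to sign-class probabilities.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Once the model predicts a class label, the corresponding text could be passed to any text-to-speech library (the paper saves the result as an audio file; the specific TTS tool is not named here).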

Copyright & License

Copyright © 2025 Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{177999,
        author = {C. Jeyalakshmi and S. Sajithabanu and B. Aysha Banu and S. Hari Krishnan and N. Rajaguru and R. Sudharsan and R. Vigneshwaran},
        title = {Deep Learning-Based Static Sign Language Interpretation Using VGG19 Architecture},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {11},
        number = {12},
        pages = {3726--3729},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=177999},
        abstract = {Sign language is a rich visual form of communication that relies on a mix of hand gestures and facial expressions, commonly used by people with hearing impairments. However, conventional methods of sign recognition often fall short when it comes to interpreting these complex gestures, leading to communication barriers. This project proposes a solution that translates hand gestures into both text and speech, allowing virtual assistants to use the output for improved interaction. Earlier research mostly concentrated on detecting simple, clear signs, often picking selected gestures from Indian Sign Language (ISL) for classification tasks. In this work, a deep learning technique is applied for recognizing static signs, utilizing a Convolutional Neural Network (CNN) based on the VGG19 architecture. The model is trained on ISL gesture datasets, making it capable of classifying hand signs with high precision. Once recognized, the gestures are turned into text and then converted into speech, which is saved as an audio file. This system promotes accessibility and better communication between those who use sign language and those who do not, while also pushing forward the development of assistive technologies.},
        keywords = {Sign Language Recognition, Convolutional Neural Network, VGG19, Deep Learning, Indian Sign Language, Gesture Recognition, Translation Language},
        month = {May},
        }
