Deep Learning-Based Static Sign Language Interpretation Using VGG19 Architecture

  • Unique Paper ID: 177999
  • ISSN: 2349-6002
  • Volume: 11
  • Issue: 12
  • PageNo: 3726-3729
  • Abstract: Sign language is a rich visual form of communication that combines hand gestures and facial expressions, and is widely used by people with hearing impairments. Conventional sign recognition methods, however, often fail to interpret these complex gestures reliably, creating communication barriers. This project proposes a system that translates hand gestures into both text and speech, allowing virtual assistants to use the output for improved interaction. Earlier research mostly concentrated on detecting simple, clearly distinguishable signs, often selecting a subset of Indian Sign Language (ISL) gestures for classification tasks. In this work, a deep learning approach is applied to recognizing static signs, using a Convolutional Neural Network (CNN) based on the VGG19 architecture. The model is trained on ISL gesture datasets and classifies hand signs with high accuracy. Once recognized, a gesture is converted into text and then into speech, which is saved as an audio file. This system promotes accessibility and better communication between signers and non-signers, while also advancing the development of assistive technologies.
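A minimal sketch of the kind of pipeline the abstract describes, assuming a TensorFlow/Keras implementation: an ImageNet-pretrained VGG19 backbone is fine-tuned on ISL gesture images, and the predicted label is saved as a spoken audio file. The dataset path, sample image, class count, hyperparameters, and the gTTS text-to-speech backend are all illustrative assumptions, not details from the paper.

```python
# A hedged sketch, not the authors' exact pipeline: transfer learning on
# VGG19 for static ISL sign classification, then label -> speech audio.
import tensorflow as tf
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input
from tensorflow.keras import layers, models
from gtts import gTTS  # assumed TTS backend; the paper only says speech is saved as audio

NUM_CLASSES = 35          # assumption: e.g. ISL digits plus letters
IMG_SIZE = (224, 224)     # VGG19's expected input resolution

# Load ImageNet-pretrained VGG19 without its classifier head and freeze it,
# so only the new classification layers are trained.
base = VGG19(weights="imagenet", include_top=False, input_shape=(*IMG_SIZE, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory layout: isl_dataset/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "isl_dataset", image_size=IMG_SIZE, batch_size=32)
class_names = train_ds.class_names  # capture before mapping
train_ds = train_ds.map(lambda x, y: (preprocess_input(x), y))
model.fit(train_ds, epochs=10)

# Classify one static sign image and save the spoken label as an audio file.
img = tf.keras.utils.load_img("sample_sign.jpg", target_size=IMG_SIZE)
x = preprocess_input(tf.expand_dims(tf.keras.utils.img_to_array(img), 0))
pred = class_names[int(tf.argmax(model.predict(x), axis=-1)[0])]
gTTS(text=pred, lang="en").save("prediction.mp3")
```

Freezing the convolutional base is the usual choice when the ISL dataset is small relative to ImageNet; unfreezing the last VGG19 block for a second, low-learning-rate fine-tuning pass is a common refinement.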
