Sign Language Recognition using Neural Networks

  • Unique Paper ID: 182444
  • Volume: 12
  • Issue: 2
  • PageNo: 2388-2393
  • Abstract:
  • Sign Language Recognition (SLR) systems have become increasingly vital in facilitating communication between deaf and hearing communities. Traditional SLR approaches, which rely on handcrafted features and sensor-based inputs, face limitations in scalability and real-time performance, while recent breakthroughs in deep learning have significantly advanced the capabilities of the field. This survey paper comprehensively examines the evolution of SLR systems, from conventional methods to advanced learning models such as CNNs, RNNs, LSTMs, and Transformers. It analyzes how these models tackle important problems in both static and dynamic gesture recognition, with particular attention to their computational requirements and performance trade-offs. Building on this foundation, we highlight emerging hybrid approaches that combine CNNs for spatial feature extraction with RNNs/LSTMs for temporal modelling - a methodology we are implementing in our ongoing work on continuous sign language recognition. The survey explores key challenges in continuous SLR, including gesture segmentation, co-articulation (blending) between signs, and the integration of hand, facial, and body cues for multimodal recognition. We further examine recent advancements in real-time processing and accessibility features such as regional language translation. Through this comprehensive review, we identify current limitations in the field and propose future research directions toward more robust, efficient, and inclusive SLR systems that can operate effectively in diverse real-world conditions.
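The hybrid pipeline the abstract describes - a CNN extracting spatial features from each video frame, feeding an LSTM that models the temporal sequence - can be sketched in miniature with plain NumPy. This is an illustrative toy with random, untrained weights and made-up sizes (16x16 grayscale frames, 4 conv kernels, hidden size 8, 5 sign classes), not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(frame, kernels):
    """Valid 2-D convolution of a single-channel frame with K kernels, plus ReLU."""
    K, kh, kw = kernels.shape
    H, W = frame.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernels[k])
    return np.maximum(out, 0.0)

def global_avg_pool(fmaps):
    """Collapse each feature map to a single value -> per-frame feature vector."""
    return fmaps.mean(axis=(1, 2))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gate weights stacked as [input, forget, cell, output]."""
    z = W @ x + U @ h + b
    H = h.size
    i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])
    g, o = np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Hypothetical sizes and random (untrained) parameters for illustration only.
T, HID, CLASSES = 8, 8, 5                       # frames, hidden units, sign classes
kernels = rng.standard_normal((4, 3, 3)) * 0.1  # 4 spatial conv kernels
W = rng.standard_normal((4 * HID, 4)) * 0.1     # LSTM input weights
U = rng.standard_normal((4 * HID, HID)) * 0.1   # LSTM recurrent weights
b = np.zeros(4 * HID)
W_out = rng.standard_normal((CLASSES, HID)) * 0.1

video = rng.standard_normal((T, 16, 16))        # stand-in for a gesture clip
h, c = np.zeros(HID), np.zeros(HID)
for frame in video:
    feat = global_avg_pool(conv2d(frame, kernels))  # CNN: spatial features
    h, c = lstm_step(feat, h, c, W, U, b)           # LSTM: temporal modelling

logits = W_out @ h                              # classify from final hidden state
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)
```

In a real system the convolutional stage would be a deep pretrained backbone and all weights would be learned end to end; the sketch only shows how spatial features per frame flow into a recurrent temporal model before classification.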

Cite This Article

  • ISSN: 2349-6002
