Sign Language Recognition and Translation

  • Unique Paper ID: 180525
  • PageNo: 1437-1448
  • Abstract:
  • With an emphasis on deep learning, computer vision, and sensor technologies, this literature review examines developments in Sign Language Recognition (SLR) and Translation. Early systems used flex sensors, which could only detect static motions, and simple machine learning algorithms. However, real-time translation and dynamic gesture recognition have greatly improved thanks to deep learning models such as CNNs, RNNs, and Transformer-based architectures. Despite these developments, real-time SLR and translation still struggle with dynamic gestures, subtle finger movements, invisible signs, lighting conditions, and sensor calibration. The precision and generalizability of translation systems are also affected by problems including small datasets, dialect differences, and computational limitations. SLR and sign language translation are becoming more scalable and efficient thanks to ongoing advancements in multimodal sensor fusion and AI models, which increases their suitability for real-world applications. This literature review discusses the technology used for sign language translation and recognition, which is becoming increasingly common in the contemporary digital world. For future researchers, it highlights how these advancements enhance communication and accessibility, making such systems more useful to the deaf community.

Copyright & License

Copyright © 2026. Authors retain the copyright of this article. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{180525,
        author = {Gabale Akshata and Sanjana Telange and Shruti Patil and Ashvini Gavit},
        title = {Sign Language Recognition and Translation},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        number = {1},
        pages = {1437-1448},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=180525},
        abstract = {With an emphasis on deep learning, computer vision, and sensor technologies, this literature review examines developments in Sign Language Recognition (SLR) and Translation. Early systems used flex sensors, which could only detect static motions, and simple machine learning algorithms. However, real-time translation and dynamic gesture recognition have greatly improved thanks to deep learning models such as CNNs, RNNs, and Transformer-based architectures. Despite these developments, real-time SLR and translation still struggle with dynamic gestures, subtle finger movements, invisible signs, lighting conditions, and sensor calibration. The precision and generalizability of translation systems are also affected by problems including small datasets, dialect differences, and computational limitations. SLR and sign language translation are becoming more scalable and efficient thanks to ongoing advancements in multimodal sensor fusion and AI models, which increases their suitability for real-world applications. This literature review discusses the technology used for sign language translation and recognition, which is becoming increasingly common in the contemporary digital world. For future researchers, it highlights how these advancements enhance communication and accessibility, making such systems more useful to the deaf community.},
        keywords = {Sign Language, Sensor Glove, CNN, Text-to-Speech (TTS), Gesture Recognition},
        month = {June},
        }

Cite This Article

Akshata, G., Telange, S., Patil, S., & Gavit, A. (2025). Sign Language Recognition and Translation. International Journal of Innovative Research in Technology (IJIRT), 12(1), 1437–1448.
