BIDIRECTIONAL SIGN LANGUAGE TRANSLATOR USING CNN, LSTM, AND NLP-BASED SYSTEM

  • Unique Paper ID: 188602
  • PageNo: 2430-2434
  • Abstract:
  • The Bidirectional Sign Language Translator seeks to bridge the communication gap between the speech-impaired community and the hearing population by enabling real-time, two-way translation: sign language to speech/text and speech/text to sign language. The project uses computer vision, deep learning, and natural language processing to recognize hand gestures and render them as audible and textual output, and, conversely, to interpret spoken words as sign language representations. Hand detection and tracking are carried out with tools such as OpenCV and MediaPipe. Static gestures are classified by a Convolutional Neural Network (CNN), while dynamic gestures are handled by temporal models such as an LSTM or a 3D CNN. Speech-to-text and text-to-speech modules complete the bidirectional loop, and NLP-based grammar structuring makes the output natural and grammatically correct. A user-friendly interface built with Flask or Streamlit ensures ease of use and accessibility, and thorough testing with feedback loops supports system reliability and accuracy. The proposed solution has strong potential to foster inclusivity and barrier-free interaction, greatly benefiting the hearing- and speech-impaired community.
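The NLP grammar-structuring step mentioned in the abstract could, for example, be a rule-based pass that turns a recognized gloss sequence into a grammatical English sentence. The sketch below is a minimal illustration only: the gloss vocabulary, the copula-insertion rule, and the function name are assumptions, not the paper's actual implementation.

```python
# Toy rule-based gloss-to-English structuring (illustrative assumption only).
# Sign-language glosses often omit articles and the copula "to be"; this
# sketch reinserts a copula after a leading pronoun when no verb is present.

PRONOUN_COPULA = {"i": "am", "you": "are", "he": "is",
                  "she": "is", "we": "are", "they": "are"}
VERBS = {"go", "eat", "want", "like", "have", "am", "is", "are"}

def glosses_to_sentence(glosses):
    """Convert a list of uppercase glosses into a cased, punctuated sentence."""
    words = [g.lower() for g in glosses]
    # If the sentence starts with a pronoun and contains no verb, add a copula.
    if words and words[0] in PRONOUN_COPULA and not any(w in VERBS for w in words[1:]):
        words.insert(1, PRONOUN_COPULA[words[0]])
    sentence = " ".join(words)
    return sentence[0].upper() + sentence[1:] + "."

print(glosses_to_sentence(["I", "HAPPY"]))       # I am happy.
print(glosses_to_sentence(["THEY", "HUNGRY"]))   # They are hungry.
```

In a full system this stage would sit between the gesture recognizer's gloss output and the text-to-speech module; a statistical or transformer-based rewriter could replace these hand-written rules.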

Copyright & License

Copyright © 2026. The authors retain the copyright of this article. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{188602,
        author = {Bajrang Solunke and Varsha Karandikar and Riya Somani and Anjali Solanke and Pratyush Solanke and Akash Somsetwar and Poorva Sonawane},
        title = {BIDIRECTIONAL SIGN LANGUAGE TRANSLATOR USING CNN, LSTM, AND NLP-BASED SYSTEM},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        number = {7},
        pages = {2430--2434},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=188602},
        abstract = {The Bidirectional Sign Language Translator seeks to bridge the communication gap between the speech-impaired community and the hearing population by enabling real-time, two-way translation: sign language to speech/text and speech/text to sign language. The project uses computer vision, deep learning, and natural language processing to recognize hand gestures and render them as audible and textual output, and, conversely, to interpret spoken words as sign language representations. Hand detection and tracking are carried out with tools such as OpenCV and MediaPipe. Static gestures are classified by a Convolutional Neural Network (CNN), while dynamic gestures are handled by temporal models such as an LSTM or a 3D CNN. Speech-to-text and text-to-speech modules complete the bidirectional loop, and NLP-based grammar structuring makes the output natural and grammatically correct. A user-friendly interface built with Flask or Streamlit ensures ease of use and accessibility, and thorough testing with feedback loops supports system reliability and accuracy. The proposed solution has strong potential to foster inclusivity and barrier-free interaction, greatly benefiting the hearing- and speech-impaired community.},
        keywords = {Bidirectional Communication, Computer Vision, Deep Learning, Gesture Recognition, Sign Language Translation},
        month = {December},
        }

Cite This Article

Solunke, B., Karandikar, V., Somani, R., Solanke, A., Solanke, P., Somsetwar, A., & Sonawane, P. (2025). BIDIRECTIONAL SIGN LANGUAGE TRANSLATOR USING CNN, LSTM, AND NLP-BASED SYSTEM. International Journal of Innovative Research in Technology (IJIRT), 12(7), 2430–2434.
