Design of Spectacles for Sign Language Translation

  • Unique Paper ID: 188918
  • Volume: 12
  • Issue: 7
  • PageNo: 3969-3975
  • Abstract:
  • Sign language is the primary communication medium for individuals with hearing and speech impairments. However, its limited understanding among the general population results in significant communication barriers. This work proposes a spectacle-based real-time sign language translation system that integrates computer vision, machine learning, and embedded technologies. The system utilizes a Raspberry Pi operating on the 32-bit Legacy OS with Motion-based IP camera streaming for seamless and stable video acquisition. Hand landmark extraction is performed using Mediapipe Hands, which detects 21 key points per hand, followed by a custom keypoint classifier that converts the extracted coordinates into gesture classes. A TensorFlow-based CNN model, trained on the ASL Digits dataset, enables efficient gesture recognition. The spectacle-mounted camera streams video to a processing unit, which classifies gestures and maps them to corresponding English alphabets or phrases. Speech output is generated through passwordless SSH-triggered espeak on the Raspberry Pi, enabling hands-free audio communication. Experimental evaluations demonstrate robust real-time performance and high prediction accuracy. The proposed system presents a cost-effective, wearable, and efficient assistive technology solution aimed at reducing communication barriers between sign language users and non-signers.
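The keypoint-classifier stage described in the abstract converts the 21 Mediapipe hand landmarks into gesture classes. The paper does not give its preprocessing code; the sketch below shows one common way such a stage normalizes landmarks before classification (wrist-relative coordinates, scaled by the largest magnitude). The function name and normalization scheme are illustrative assumptions, not the authors' exact method:

```python
def normalize_landmarks(landmarks):
    """Turn 21 (x, y) hand landmarks into a 42-value feature vector.

    Assumed preprocessing: coordinates are made relative to the wrist
    (landmark 0 in Mediapipe Hands) and scaled so the largest absolute
    value is 1, giving translation- and scale-invariant features that a
    small classifier (e.g. a TensorFlow model) can consume.
    """
    base_x, base_y = landmarks[0]  # wrist landmark as the origin
    relative = [(x - base_x, y - base_y) for x, y in landmarks]
    flat = [coord for point in relative for coord in point]
    max_abs = max(abs(v) for v in flat) or 1.0  # avoid division by zero
    return [v / max_abs for v in flat]
```

A vector produced this way would then be fed to the trained classifier, whose predicted class index is mapped to an English letter or phrase.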

Copyright & License

Copyright © 2026. The authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{188918,
        author = {Shreya S and Dr. Shruthi M and Sneha H and Rohith M D and Vimarsha M},
        title = {Design of Spectacles for Sign Language Translation},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        number = {7},
        pages = {3969-3975},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=188918},
        abstract = {Sign language is the primary communication medium for individuals with hearing and speech impairments. However, its limited understanding among the general population results in significant communication barriers. This work proposes a spectacle-based real-time sign language translation system that integrates computer vision, machine learning, and embedded technologies. The system utilizes a Raspberry Pi operating on the 32-bit Legacy OS with Motion-based IP camera streaming for seamless and stable video acquisition. Hand landmark extraction is performed using Mediapipe Hands, which detects 21 key points per hand, followed by a custom keypoint classifier that converts the extracted coordinates into gesture classes. A TensorFlow-based CNN model, trained on the ASL Digits dataset, enables efficient gesture recognition. The spectacle-mounted camera streams video to a processing unit, which classifies gestures and maps them to corresponding English alphabets or phrases. Speech output is generated through passwordless SSH-triggered espeak on the Raspberry Pi, enabling hands-free audio communication. Experimental evaluations demonstrate robust real-time performance and high prediction accuracy. The proposed system presents a cost-effective, wearable, and efficient assistive technology solution aimed at reducing communication barriers between sign language users and non-signers.},
        keywords = {Hand Gesture Recognition, Sign Language Translation, Mediapipe, Computer Vision, Assistive Technology, Keypoint Classification.},
        month = {December},
        }

Cite This Article

S, S., M, D. S., H, S., D, R. M., & M, V. (2025). Design of Spectacles for Sign Language Translation. International Journal of Innovative Research in Technology (IJIRT), 12(7), 3969–3975.
