Next-Gen Communication: AI and IoT-Based Framework for Indian Sign Language Conversion

  • Unique Paper ID: 172553
  • PageNo: 295-300
  • Abstract:
  • The combination of computer vision, artificial intelligence, and machine learning has opened new avenues for accessible sign language communication. This article seeks to narrow the communication gap between the hearing-impaired community and non-signers by translating spoken or written language into Indian Sign Language (ISL). The system converts voice and text into the equivalent ISL gestures in real time by combining cutting-edge Artificial Intelligence and Machine Learning (AI/ML) algorithms with Internet of Things (IoT) devices. Spoken input is first processed through speech-to-text conversion and then mapped to the corresponding sign language gestures using machine learning methods such as K-Nearest Neighbors (KNN) or decision trees. In addition, a computer vision pipeline analyzes hand and body posture to identify ISL gestures: Convolutional Neural Network (CNN) and region-based CNN (R-CNN) models detect and classify gestures using features extracted from images or videos. Verbal and written input can be converted seamlessly into visual ISL output thanks to the system's straightforward, user-friendly architecture. Deep learning models for gesture recognition and predictive gesture generation improve the system's real-time performance, and IoT devices make gesture detection faster and more effective for live communication. Beyond providing real-time sign language conversion, this study aims to foster a more inclusive environment in which people with hearing impairments can interact freely in a variety of real-world scenarios.
With ongoing advances in AI and computer vision, the system can be extended to support a wider range of languages and sign language gestures, ultimately benefiting society by removing communication barriers.
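The text-to-gesture step described above (mapping recognized words to ISL gestures with a nearest-neighbour method) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the gesture lexicon, the clip filenames, and the character-trigram features are all hypothetical stand-ins, and a 1-nearest-neighbour lookup replaces whatever trained KNN or decision-tree model the paper actually uses.

```python
from collections import Counter

# Hypothetical lexicon mapping words to ISL gesture clips (illustrative only).
GESTURE_LEXICON = {
    "hello": "ISL_HELLO.mp4",
    "thank": "ISL_THANK.mp4",
    "help": "ISL_HELP.mp4",
    "water": "ISL_WATER.mp4",
}

def trigram_profile(word):
    """Character-trigram counts used as a crude feature vector for a word."""
    padded = f"##{word.lower()}##"
    return Counter(padded[i:i + 3] for i in range(len(padded) - 2))

def similarity(a, b):
    """Cosine-style overlap between two trigram profiles (0.0 .. 1.0)."""
    shared = sum((a & b).values())
    total = (sum(a.values()) * sum(b.values())) ** 0.5
    return shared / total if total else 0.0

def nearest_gesture(token):
    """1-nearest-neighbour lookup: map a (possibly misspelled) token from the
    speech-to-text stage to the closest word in the gesture lexicon."""
    profile = trigram_profile(token)
    best = max(GESTURE_LEXICON,
               key=lambda w: similarity(profile, trigram_profile(w)))
    return GESTURE_LEXICON[best]

print(nearest_gesture("helo"))  # tolerant of speech-to-text misspellings
```

In a full pipeline, each token emitted by the speech-to-text stage would pass through a lookup like this, and the returned clip (or a pose sequence) would be rendered as the visual ISL output.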

Copyright & License

Copyright © 2026. Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{172553,
        author = {Dr. D. Loganathan and Dr. S. Karthikeyan and A. Arockia Selvaraj and R. Malarvizhi and S. Subashini and B. Margaret Jannesthaya},
        title = {Next-Gen Communication: AI and IoT-Based Framework for Indian Sign Language Conversion},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {11},
        number = {9},
        pages = {295-300},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=172553},
        abstract = {The combination of computer vision, artificial intelligence, and machine learning has opened new avenues for accessible sign language communication. This article seeks to narrow the communication gap between the hearing-impaired community and non-signers by translating spoken or written language into Indian Sign Language (ISL). The system converts voice and text into the equivalent ISL gestures in real time by combining cutting-edge Artificial Intelligence and Machine Learning (AI/ML) algorithms with Internet of Things (IoT) devices. Spoken input is first processed through speech-to-text conversion and then mapped to the corresponding sign language gestures using machine learning methods such as K-Nearest Neighbors (KNN) or decision trees. In addition, a computer vision pipeline analyzes hand and body posture to identify ISL gestures: Convolutional Neural Network (CNN) and region-based CNN (R-CNN) models detect and classify gestures using features extracted from images or videos. Verbal and written input can be converted seamlessly into visual ISL output thanks to the system's straightforward, user-friendly architecture. Deep learning models for gesture recognition and predictive gesture generation improve the system's real-time performance, and IoT devices make gesture detection faster and more effective for live communication. Beyond providing real-time sign language conversion, this study aims to foster a more inclusive environment in which people with hearing impairments can interact freely in a variety of real-world scenarios. With ongoing advances in AI and computer vision, the system can be extended to support a wider range of languages and sign language gestures, ultimately benefiting society by removing communication barriers.},
        keywords = {},
        month = {January},
        }

Cite This Article

Loganathan, D., Karthikeyan, S., Selvaraj, A. A., Malarvizhi, R., Subashini, S., & Jannesthaya, B. M. (2025). Next-Gen Communication: AI and IoT-Based Framework for Indian Sign Language Conversion. International Journal of Innovative Research in Technology (IJIRT), 11(9), 295–300.
