Translation of American Sign Language to English using a 3-Model Architecture

  • Unique Paper ID: 166793
  • Volume: 11
  • Issue: 2
  • PageNo: 2125-2130
  • Abstract:
  • There are roughly 70 million people worldwide who are deaf or hard of hearing, and unlike most hearing people, their first language is a sign language. One of the most common is American Sign Language (ASL), which is used as a standard in many places and from which many other sign languages borrow. However, the longstanding challenge of communication accessibility faced by the deaf and hard-of-hearing community has not been fully addressed, and in an increasingly digital world, equitable communication remains a fundamental concern. Our project aims to bridge this communication gap by harnessing machine learning and computer vision. Its goal is to translate ASL to English using a 3-model architecture that breaks translation into three stages: an object-detection model first locates the hands, an image-classification model then identifies the signed letter, and finally a natural-language-processing model strings the letters together into sentences. The application has several user profiles: the ASL user being translated, the English speaker being translated for, and the admin who manages the application. We plan to build the models with PyTorch, specifically its object-detection, image-classification, and natural-language-processing suites.

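The three stages described in the abstract can be sketched as a simple pipeline. The stubs below are hypothetical placeholders (none of these function names come from the paper); in the actual system each stage would be a trained PyTorch model, with detection feeding classification and classification feeding the language model.

```python
# Minimal sketch of the 3-model pipeline, with each model stubbed out.
# In the real system these stubs would be replaced by PyTorch models.

def detect_hand(frame):
    # Stage 1 (object detection): locate and crop the hand region.
    # Stub: assume the whole frame is already the hand crop.
    return frame

def classify_letter(hand_crop):
    # Stage 2 (image classification): map the crop to an ASL letter.
    # Stub: each frame dict carries its letter directly.
    return hand_crop["letter"]

def assemble_text(letters):
    # Stage 3 (NLP): string recognized letters into sentences.
    # A real language model would also add spaces and fix misreads.
    return "".join(letters)

def translate(frames):
    # Run every frame through detection and classification,
    # then hand the letter sequence to the NLP stage.
    letters = [classify_letter(detect_hand(f)) for f in frames]
    return assemble_text(letters)

frames = [{"letter": c} for c in "HELLO"]
print(translate(frames))  # -> HELLO
```

The value of splitting translation this way is that each stage can be trained and evaluated independently before being chained together.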
Cite This Article

  • ISSN: 2349-6002

