Deep Learning for Sign Language: Gestures to Text

  • Unique Paper ID: 165068
  • ISSN: 2349-6002
  • Volume: 11
  • Issue: 1
  • PageNo: 47-51
Abstract

This project addresses communication barriers faced by the deaf and mute community by leveraging deep learning to recognize and interpret sign language gestures in real time. It uses the YOLO object detection model, known for its speed and accuracy in identifying objects within images and videos. Data collection involves compiling a comprehensive dataset of annotated videos capturing a wide range of sign language gestures, which is used to train the YOLO model. The OpenCV library handles preprocessing and postprocessing tasks, such as resizing frames and overlaying text annotations on recognized gestures. Performance is evaluated with precision, recall, and F1 score on a validation dataset to ensure accuracy and reliability in real-world scenarios.
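The pipeline the abstract describes (capture frames, resize with OpenCV, run YOLO detection, overlay the recognized gesture as text) can be sketched as follows. This is a minimal illustration, not the paper's actual code: it assumes the Ultralytics YOLO Python package and a hypothetical weights file `signs.pt` trained on the annotated gesture dataset.

```python
# Minimal sketch of the real-time recognition loop described in the abstract.
# Assumptions: the ultralytics package provides the YOLO model class, and
# "signs.pt" is a hypothetical weights file trained on the gesture dataset.
import cv2
from ultralytics import YOLO

model = YOLO("signs.pt")      # hypothetical gesture-detection weights
cap = cv2.VideoCapture(0)     # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 640))     # preprocessing: fixed input size
    results = model(frame, verbose=False)[0]  # YOLO inference on one frame
    for box in results.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        label = results.names[int(box.cls[0])]  # detected gesture class
        conf = float(box.conf[0])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        # postprocessing: overlay the recognized gesture as a text annotation
        cv2.putText(frame, f"{label} {conf:.2f}", (x1, y1 - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("Gestures to Text", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```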
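The evaluation step compares predicted gesture labels against ground truth on the validation set: precision P = TP / (TP + FP), recall R = TP / (TP + FN), and F1 = 2PR / (P + R). The sketch below computes these with scikit-learn, assuming per-frame label lists have already been collected; the labels shown are placeholders, not the paper's data.

```python
# Sketch of the evaluation step: precision, recall, and F1 score on a
# held-out validation set. The label lists below are placeholders, not
# results from the paper; in practice they would come from the dataset
# annotations (y_true) and the YOLO model's predictions (y_pred).
from sklearn.metrics import precision_recall_fscore_support

y_true = ["hello", "thanks", "hello", "yes", "no"]  # ground-truth gestures
y_pred = ["hello", "thanks", "yes", "yes", "no"]    # model predictions

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"precision={precision:.3f}  recall={recall:.3f}  f1={f1:.3f}")
```

Macro averaging treats every gesture class equally, which matters when some signs appear far more often than others in the dataset.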
