Beyond Sight: Generating Spoken Descriptions

  • Unique Paper ID: 192856
  • Volume: 12
  • Issue: 9
  • PageNo: 2811-2817
  • Abstract:
  • Image captioning is a computer vision task that generates natural-language descriptions of images, linking visual content with language so that machines can interpret and explain visual scenes. This study presents a system that uses a pre-trained convolutional neural network (CNN) to extract image features, which are combined with an attention mechanism and decoded into captions by a recurrent neural network (RNN). Pre-trained CNNs, in particular Inception V3, are used to build comprehensive image feature vectors. Decoding uses a Long Short-Term Memory (LSTM) model, chosen for its effectiveness at producing clear, descriptive sentences. To further improve performance, an attention mechanism is integrated with the Inception V3 encoder, allowing the model to focus on specific regions of the image while generating each word. Experiments on the Flickr8k dataset show performance comparable to current state-of-the-art methods. The main goal of the study is to help people with visual impairments access visual information through sound: text captions alone do not meet the specific needs of the visually impaired community, so this work presents a model that both analyzes images and converts the resulting descriptions into speech using the Google Text-To-Speech (gTTS) API. The approach connects visual content with auditory comprehension, supporting a more inclusive and accessible experience.
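
Implementation Sketch

The paper does not publish code, so the following is a minimal sketch of the pipeline the abstract describes, assuming a TensorFlow/Keras implementation: InceptionV3 as the feature encoder, a Bahdanau-style attention layer over its 8x8 convolutional feature grid, a single LSTM decoding step, and gTTS for speech output. Layer sizes, class names, and the one-step decoder interface are illustrative assumptions, not the authors' implementation.

import tensorflow as tf
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from gtts import gTTS  # Google Text-To-Speech

# Encoder: InceptionV3 without its classifier head; the final 8x8x2048
# convolutional map becomes a grid of 64 region vectors to attend over.
base = InceptionV3(include_top=False, weights="imagenet")

def encode_image(path):
    img = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    img = preprocess_input(tf.image.resize(img, (299, 299)))
    feats = base(tf.expand_dims(img, 0))                 # (1, 8, 8, 2048)
    return tf.reshape(feats, (1, -1, feats.shape[-1]))   # (1, 64, 2048)

# Bahdanau-style (additive) attention: scores each of the 64 image
# regions against the decoder's current hidden state.
class Attention(tf.keras.layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.W1 = tf.keras.layers.Dense(units)
        self.W2 = tf.keras.layers.Dense(units)
        self.V = tf.keras.layers.Dense(1)

    def call(self, features, hidden):
        score = self.V(tf.nn.tanh(self.W1(features) + self.W2(hidden[:, None, :])))
        alpha = tf.nn.softmax(score, axis=1)              # where to look
        context = tf.reduce_sum(alpha * features, axis=1)
        return context, alpha

# One LSTM decoding step: embed the previous word, fuse it with the
# attended image context, and predict logits over the next word.
class DecoderStep(tf.keras.Model):
    def __init__(self, vocab_size, embed_dim=256, units=512):
        super().__init__()
        self.embed = tf.keras.layers.Embedding(vocab_size, embed_dim)
        self.attention = Attention(units)
        self.cell = tf.keras.layers.LSTMCell(units)
        self.out = tf.keras.layers.Dense(vocab_size)

    def call(self, word_id, features, state):
        context, alpha = self.attention(features, state[0])
        x = tf.concat([self.embed(word_id), context], axis=-1)
        h, state = self.cell(x, state)
        return self.out(h), state, alpha

# Example single step (hypothetical vocabulary of 5000 words):
#   dec = DecoderStep(vocab_size=5000)
#   state = [tf.zeros((1, 512)), tf.zeros((1, 512))]
#   logits, state, alpha = dec(tf.constant([1]), encode_image("dog.jpg"), state)

# Once decoding yields a full caption string, speak it aloud:
def speak(caption, path="caption.mp3"):
    gTTS(text=caption, lang="en").save(path)

A full decoder would run this step in a loop from a start token until an end token or a length cap, greedily or with beam search, and then pass the resulting caption string to speak() to produce the audio description.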

Copyright & License

Copyright © 2026. The authors retain the copyright of this article. This article is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{192856,
        author = {Nidhi Yadav and Renuka Bhandari and Preeti Warrier},
        title = {Beyond Sight: Generating Spoken Descriptions},
        journal = {International Journal of Innovative Research in Technology},
        year = {2026},
        volume = {12},
        number = {9},
        pages = {2811-2817},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=192856},
        abstract = {Image captioning is a computer vision task that generates natural-language descriptions of images, linking visual content with language so that machines can interpret and explain visual scenes. This study presents a system that uses a pre-trained convolutional neural network (CNN) to extract image features, which are combined with an attention mechanism and decoded into captions by a recurrent neural network (RNN).
Pre-trained CNNs, in particular Inception V3, are used to build comprehensive image feature vectors. Decoding uses a Long Short-Term Memory (LSTM) model, chosen for its effectiveness at producing clear, descriptive sentences. To further improve performance, an attention mechanism is integrated with the Inception V3 encoder, allowing the model to focus on specific regions of the image while generating each word. Experiments on the Flickr8k dataset show performance comparable to current state-of-the-art methods.
The main goal of the study is to help people with visual impairments access visual information through sound: text captions alone do not meet the specific needs of the visually impaired community, so this work presents a model that both analyzes images and converts the resulting descriptions into speech using the Google Text-To-Speech (gTTS) API. The approach connects visual content with auditory comprehension, supporting a more inclusive and accessible experience.},
        keywords = {Machine Learning, Computer Vision, Image Captioning, Deep Learning, Feature Extraction, Long Short-Term Memory (LSTM), Google Text-To-Speech (gTTS)},
        month = {February},
        }

Cite This Article

Yadav, N., Bhandari, R., & Warrier, P. (2026). Beyond Sight: Generating Spoken Descriptions. International Journal of Innovative Research in Technology (IJIRT), 12(9), 2811–2817.
