From Pixels to Phrases: Enhancing Image Captioning with LSTM Model

  • Unique Paper ID: 159666
  • PageNo: 486-492
  • Abstract:
  • Generating natural language captions for images is an important task that requires understanding and identifying the objects within an image. However, the effectiveness of image caption generation has not been thoroughly established. To address this gap, we propose a novel approach that combines Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) models to generate image captions. Our approach comprises two sub-models: an Object Identification model and a Localization model, which extract information about objects and their spatial relationships from images. We then use LSTM models to process the extracted text data, encoding the text input sequence as a fixed-length output vector. Finally, we integrate the image vector outputs and the corresponding descriptions to train the image caption generator model. We compare the performance of our LSTM-based model with other dense models, including VGG-16 and Transformer-based models, using the Flickr8k dataset. Our experimental results demonstrate that our LSTM-based approach outperforms previous VGG and Transformer-based models, as well as state-of-the-art image captioning models. By integrating image and text data using LSTM models, our approach provides a new benchmark for image caption generation, advancing the state of the art in this critical area of research.
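The abstract describes encoding a caption's token sequence with an LSTM into a fixed-length vector and fusing it with a CNN-derived image feature vector before decoding. The following is a minimal sketch of that encode-and-fuse step, not the authors' implementation: the dimensions, random weights, and stand-in image features are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; gate order in the stacked weights: i, f, o, g."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c_new = f * c + i * g          # update the cell state
    h_new = o * np.tanh(c_new)     # expose the hidden state
    return h_new, c_new

def encode_sequence(embeddings, H=64, seed=0):
    """Run an LSTM over word embeddings; the final hidden state is the
    fixed-length text vector the abstract describes (weights are random
    here purely for illustration)."""
    rng = np.random.default_rng(seed)
    D = embeddings.shape[1]
    W = rng.normal(0.0, 0.1, (4 * H, D))   # input-to-gates weights
    U = rng.normal(0.0, 0.1, (4 * H, H))   # hidden-to-gates weights
    b = np.zeros(4 * H)
    h, c = np.zeros(H), np.zeros(H)
    for x in embeddings:
        h, c = lstm_step(x, h, c, W, U, b)
    return h

# Toy caption of 5 tokens with 32-dim embeddings, plus a stand-in
# 128-dim CNN image feature vector (random here, where a real model
# would use pooled CNN activations).
rng = np.random.default_rng(1)
text_vec = encode_sequence(rng.normal(size=(5, 32)))
image_vec = rng.normal(size=128)
fused = np.concatenate([image_vec, text_vec])  # joint vector for the decoder
print(fused.shape)  # (192,)
```

In a trained captioner the fused vector would condition a decoder that emits the caption word by word; here the sketch only shows the shape bookkeeping of the fusion.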

Copyright & License

Copyright © 2026. Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{159666,
        author = {Pulkit Dwivedi},
        title = {From Pixels to Phrases: Enhancing Image Captioning with LSTM Model},
        journal = {International Journal of Innovative Research in Technology},
        year = {},
        volume = {9},
        number = {12},
        pages = {486-492},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=159666},
        abstract = {Generating natural language captions for images is an important task that requires understanding and identifying the objects within an image. However, the effectiveness of image caption generation has not been thoroughly established. To address this gap, we propose a novel approach that combines Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) models to generate image captions. Our approach comprises two sub-models: an Object Identification model and a Localization model, which extract information about objects and their spatial relationships from images. We then use LSTM models to process the extracted text data, encoding the text input sequence as a fixed-length output vector. Finally, we integrate the image vector outputs and the corresponding descriptions to train the image caption generator model. We compare the performance of our LSTM-based model with other dense models, including VGG-16 and Transformer-based models, using the Flickr8k dataset. Our experimental results demonstrate that our LSTM-based approach outperforms previous VGG and Transformer-based models, as well as state-of-the-art image captioning models. By integrating image and text data using LSTM models, our approach provides a new benchmark for image caption generation, advancing the state of the art in this critical area of research.},
        keywords = {Image captioning, Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNNs), Natural language processing (NLP), Deep learning},
        month = {},
        }

Cite This Article

Dwivedi, P. (). From Pixels to Phrases: Enhancing Image Captioning with LSTM Model. International Journal of Innovative Research in Technology (IJIRT), 9(12), 486–492.
