IMAGE-TO-SPEECH CONVERSION USING OCR, TTS AND CNN

  • Unique Paper ID: 176766
  • Page No.: 6302–6306
  • Abstract: This paper presents a system that converts textual content from images into audible speech, leveraging Optical Character Recognition (OCR), Convolutional Neural Networks (CNNs), and Text-to-Speech (TTS) technologies. The goal is to aid visually impaired individuals by enabling them to understand visual text through audio output. The system first employs CNN-based models to enhance image preprocessing, ensuring noise reduction and accurate text localization. OCR is then used to extract textual information from the processed images. Finally, a TTS engine converts the recognized text into natural-sounding speech. The integration of these technologies results in a robust and efficient pipeline capable of handling a variety of image inputs including printed documents, signage, and handwritten notes. Experimental results demonstrate the system’s effectiveness in real-world scenarios, offering a practical tool for assistive technology and human-computer interaction.

Copyright & License

Copyright © 2025. The authors retain the copyright of this article. This is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{176766,
        author = {V. BHARATH KUMAR and YANAMALA DIVYA and V. UMA GAYATHRI and V. SIREESHA and B. GOPIKA CHANDANA},
        title = {IMAGE-TO-SPEECH CONVERSION USING OCR, TTS AND CNN},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {11},
        number = {11},
        pages = {6302--6306},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=176766},
        abstract = {This paper presents a system that converts textual content from images into audible speech, leveraging Optical Character Recognition (OCR), Convolutional Neural Networks (CNNs), and Text-to-Speech (TTS) technologies. The goal is to aid visually impaired individuals by enabling them to understand visual text through audio output. The system first employs CNN-based models to enhance image preprocessing, ensuring noise reduction and accurate text localization. OCR is then used to extract textual information from the processed images. Finally, a TTS engine converts the recognized text into natural-sounding speech. The integration of these technologies results in a robust and efficient pipeline capable of handling a variety of image inputs including printed documents, signage, and handwritten notes. Experimental results demonstrate the system’s effectiveness in real-world scenarios, offering a practical tool for assistive technology and human-computer interaction.},
        keywords = {Convolutional Neural Networks (CNN), Image-to-Speech Conversion, Optical Character Recognition (OCR), Text-to-Speech (TTS).},
        month = {April},
        }

Cite This Article

KUMAR, V. B., DIVYA, Y., GAYATHRI, V. U., SIREESHA, V., & CHANDANA, B. G. (2025). IMAGE-TO-SPEECH CONVERSION USING OCR, TTS AND CNN. International Journal of Innovative Research in Technology (IJIRT), 11(11), 6302–6306.
