Enhancing Banana Leaf Disease Diagnosis Using Explainable AI on a Simple Convolutional Neural Network

  • Unique Paper ID: 183091
  • Volume: 12
  • Issue: no
  • PageNo: 106-110
  • Abstract:
  • While high-performance deep learning models have been applied to banana leaf disease detection [1][3], their interpretability remains underexplored. In this study, we deliberately use a baseline Convolutional Neural Network (CNN) with moderate accuracy to demonstrate how Explainable AI (XAI) techniques—such as Grad-CAM and SoftMax confidence analysis—can validate and interpret model predictions. We train a Global Average Pooling (GAP)-based CNN on the Banana LSD dataset [4] and observe a test accuracy of 74.7%. While more advanced models have reported higher performance [5], [6], our focus remains on interpretability and practical relevance. By integrating explainability techniques, we demonstrate that even a basic model can provide reliable support for disease diagnosis, especially in agricultural environments where transparency and resource efficiency are essential.
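The abstract names two interpretability steps but the page carries no code. As a rough illustration only, here is a minimal NumPy sketch of both: softmax confidence (the top class probability) and the Grad-CAM weighting of last-conv-layer feature maps. The function names and the toy 4×8×8 shapes are our own assumptions, not taken from the paper.

```python
import numpy as np

def softmax(logits):
    """Turn raw class scores into probabilities; the largest
    probability is read as the model's confidence."""
    z = logits - logits.max()           # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from the last conv layer's activations
    (C, H, W) and the gradients of the class score w.r.t. those
    activations (same shape)."""
    # Channel weights: global-average-pool the gradients over space
    alpha = gradients.mean(axis=(1, 2))                      # shape (C,)
    # Weighted sum of feature maps, ReLU to keep positive evidence
    cam = np.maximum((alpha[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] for overlaying on the leaf image
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 4 channels on an 8x8 spatial grid
rng = np.random.default_rng(0)
A = rng.random((4, 8, 8))       # stand-in conv activations
G = rng.random((4, 8, 8))       # stand-in gradients
heatmap = grad_cam(A, G)
probs = softmax(np.array([2.0, 1.0, 0.1]))
print(heatmap.shape)            # (8, 8)
```

In a real pipeline the activations and gradients would come from the trained GAP-based CNN via the framework's autograd, and the normalized heatmap would be upsampled to the input resolution and overlaid on the leaf photograph.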

Copyright & License

Copyright © 2025. The authors retain the copyright of this article. This is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{183091,
        author = {Bharath A R and Hemalatha N and Sreekumar K M},
        title = {Enhancing Banana Leaf Disease Diagnosis Using Explainable AI on a Simple Convolutional Neural Network},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        pages = {106--110},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=183091},
        abstract = {While high-performance deep learning models have been applied to banana leaf disease detection [1][3], their interpretability remains underexplored. In this study, we deliberately use a baseline Convolutional Neural Network (CNN) with moderate accuracy to demonstrate how Explainable AI (XAI) techniques—such as Grad-CAM and SoftMax confidence analysis—can validate and interpret model predictions. We train a Global Average Pooling (GAP)-based CNN on the Banana LSD dataset [4] and observe a test accuracy of 74.7%. While more advanced models have reported higher performance [5], [6], our focus remains on interpretability and practical relevance. By integrating explainability techniques, we demonstrate that even a basic model can provide reliable support for disease diagnosis, especially in agricultural environments where transparency and resource efficiency are essential.},
}
