Explainable AI in Healthcare

  • Unique Paper ID: 185639
  • Volume: 12
  • Issue: 5
  • PageNo: 2243-2247
  • Abstract: Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision is fraught with risk. A doctor will carefully judge whether a patient is sick before forming a reasonable explanation based on the patient’s symptoms and/or an examination. Therefore, to be a viable and accepted tool, AI needs to mimic human judgment and interpretation skills. Specifically, explainable AI (XAI) aims to reveal how decisions are made inside deep learning’s black-box models. This paper provides a survey of the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types and highlight the algorithms used to increase interpretability in medical imaging. In addition, we focus on the challenging XAI problems in medical applications and provide guidelines for developing better interpretations of deep learning models using XAI concepts in medical image and text analysis. Furthermore, this survey provides directions to guide developers and researchers in prospective investigations on clinical topics, particularly applications involving medical imaging.
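To make the idea of "revealing how decisions are made" concrete, the following is a minimal sketch of occlusion sensitivity, one common post-hoc XAI technique for imaging models. It is an illustration only, not the paper's method: the toy linear model, the function name `occlusion_map`, and all parameters are assumptions chosen for a self-contained example.

```python
import numpy as np

def occlusion_map(model, image, patch=2, baseline=0.0):
    """Score each patch by how much masking it changes the model's output.

    A large drop in the score when a region is occluded suggests the
    region was important to the model's decision.
    """
    h, w = image.shape
    base_score = model(image)
    heatmap = np.zeros_like(image, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            # Replace one patch with the baseline value (here: zeros).
            occluded[i:i + patch, j:j + patch] = baseline
            heatmap[i:i + patch, j:j + patch] = base_score - model(occluded)
    return heatmap

# Toy "model": responds only to the top-left quadrant of a 4x4 image.
weights = np.zeros((4, 4))
weights[:2, :2] = 1.0
model = lambda img: float((img * weights).sum())

img = np.ones((4, 4))
heat = occlusion_map(model, img, patch=2)
print(heat)  # only the top-left 2x2 block has nonzero attribution
```

The same loop structure underlies occlusion analysis of real convolutional classifiers, with the toy model replaced by a network's class score.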

Copyright & License

Copyright © 2025. The authors retain the copyright of this article. This article is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{185639,
        author = {Gandhi, R. S. and Baravkar, B. Y. and Lembhe, Srushti Bharat and Gadadare, Jyoti Dnyaneshwar},
        title = {Explainable AI in Healthcare},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        number = {5},
        pages = {2243--2247},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=185639},
        abstract = {Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision is fraught with risk. A doctor will carefully judge whether a patient is sick before forming a reasonable explanation based on the patient’s symptoms and/or an examination. Therefore, to be a viable and accepted tool, AI needs to mimic human judgment and interpretation skills. Specifically, explainable AI (XAI) aims to reveal how decisions are made inside deep learning’s black-box models. This paper provides a survey of the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types and highlight the algorithms used to increase interpretability in medical imaging. In addition, we focus on the challenging XAI problems in medical applications and provide guidelines for developing better interpretations of deep learning models using XAI concepts in medical image and text analysis. Furthermore, this survey provides directions to guide developers and researchers in prospective investigations on clinical topics, particularly applications involving medical imaging.},
        keywords = {explainable AI; medical imaging; deep learning; radiomics},
        month = {October},
        }
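The entry above can be used directly from LaTeX. A minimal usage sketch, assuming the entry is saved in a file named refs.bib (the filename and surrounding document are illustrative):

```latex
% Cite the entry by its key (185639) from a LaTeX document.
\documentclass{article}
\begin{document}
A recent survey of XAI in healthcare~\cite{185639} categorizes
interpretability techniques for medical imaging.
\bibliographystyle{plain}
\bibliography{refs}
\end{document}
```

Purely numeric keys are legal in BibTeX, but a descriptive key (for example, gandhi2025xai) is easier to manage in larger bibliographies.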
