Use Cases of Explainable AI: A Review

  • Unique Paper ID: 176458
  • PageNo: 6081-6090
Abstract

Explainable Artificial Intelligence (XAI) is a critical advancement in AI that enhances transparency, interpretability, and trust in machine learning models. This paper explores the application of XAI across various domains, with a particular focus on healthcare, where model explainability is essential for diagnostic accuracy and clinical decision-making. We analyse different XAI techniques, including SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), demonstrating their effectiveness in explaining complex AI models. Additionally, we discuss the challenges associated with XAI implementation, such as balancing model performance with explainability, addressing ethical concerns, and ensuring robustness against adversarial attacks. Our findings highlight the transformative potential of XAI in fostering AI accountability and reliability, paving the way for its broader adoption in critical decision-making systems. Future research should focus on developing more intuitive and real-time XAI models to enhance user understanding and trust in AI-driven systems.
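The abstract names SHAP, whose attributions are grounded in Shapley values from cooperative game theory. As a rough illustration of that idea (not code from the paper), the sketch below computes exact Shapley values for a small hypothetical model by enumerating feature coalitions, replacing "absent" features with a baseline value; the `model` weights and baseline are assumptions chosen only for the example.

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical model for illustration: a simple weighted sum.
    w = [2.0, -1.0, 0.5]
    return sum(wi * xi for wi, xi in zip(w, x))

def shapley_values(f, x, baseline):
    """Exact Shapley attribution for each feature of input x.

    Features outside a coalition S are replaced by their baseline value,
    mirroring how SHAP marginalises out absent features.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Classic Shapley coalition weight |S|!(n-|S|-1)!/n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

x = [1.0, 2.0, 4.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
print(phi)  # attributions sum to f(x) - f(baseline) (the efficiency property)
```

For a linear model the attributions reduce to `w_i * (x_i - baseline_i)`, which is a convenient sanity check; in practice libraries such as `shap` approximate these values efficiently for complex models rather than enumerating all coalitions.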

Copyright & License

Copyright © 2025. Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{176458,
        author = {Pankaj Kumar and Suman},
        title = {Use Cases of Explainable AI: A Review},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {11},
        number = {11},
        pages = {6081--6090},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=176458},
        abstract = {Explainable Artificial Intelligence (XAI) is a critical advancement in AI that enhances transparency, interpretability, and trust in machine learning models. This paper explores the application of XAI across various domains, with a particular focus on healthcare, where model explainability is essential for diagnostic accuracy and clinical decision-making. We analyse different XAI techniques, including SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), demonstrating their effectiveness in explaining complex AI models. Additionally, we discuss the challenges associated with XAI implementation, such as balancing model performance with explainability, addressing ethical concerns, and ensuring robustness against adversarial attacks. Our findings highlight the transformative potential of XAI in fostering AI accountability and reliability, paving the way for its broader adoption in critical decision-making systems. Future research should focus on developing more intuitive and real-time XAI models to enhance user understanding and trust in AI-driven systems.},
        keywords = {XAI, LIME, SHAP, XGBoost},
        month = {April},
        }

Cite This Article

Kumar, P., & Suman. (2025). Use Cases of Explainable AI: A Review. International Journal of Innovative Research in Technology (IJIRT), 11(11), 6081–6090.
