A Comparative Survey of SHAP and LIME: Explaining Machine Learning Models for Transparent AI

  • Unique Paper ID: 169215
  • Pages: 827–835
  • Abstract: Artificial Intelligence (AI) and Machine Learning (ML) have increasingly become central to decision-making in critical domains such as healthcare, finance, and autonomous systems. However, their complexity has rendered many models opaque, often referred to as "black-box" models, making it difficult for users to understand or trust the decisions made. Explainable AI (XAI) seeks to address this by providing transparency in model decision-making processes. Two prominent XAI techniques, SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), are widely used to interpret complex models. This paper presents a comparative analysis of SHAP and LIME, examining their theoretical foundations, strengths, limitations, and applications. SHAP is rooted in cooperative game theory and offers global interpretability with consistent and reliable explanations, whereas LIME provides efficient, local explanations suited for real-time applications. The paper further discusses the challenges in applying these methods, particularly around scalability and real-time decision-making, and highlights potential future research directions, including hybrid models that combine the strengths of both SHAP and LIME.

Copyright & License

Copyright © 2024. The authors retain the copyright of this article. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{169215,
        author = {Sudipta Dey and Tathagata Roy Chowdhury},
        title = {A Comparative Survey of SHAP and LIME: Explaining Machine Learning Models for Transparent AI},
        journal = {International Journal of Innovative Research in Technology},
        year = {2024},
        volume = {11},
        number = {6},
        pages = {827-835},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=169215},
        abstract = {Artificial Intelligence (AI) and Machine Learning (ML) have increasingly become central to decision-making in critical domains such as healthcare, finance, and autonomous systems. However, their complexity has rendered many models opaque, often referred to as "black-box" models, making it difficult for users to understand or trust the decisions made. Explainable AI (XAI) seeks to address this by providing transparency in model decision-making processes. Two prominent XAI techniques, SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), are widely used to interpret complex models. This paper presents a comparative analysis of SHAP and LIME, examining their theoretical foundations, strengths, limitations, and applications. SHAP is rooted in cooperative game theory and offers global interpretability with consistent and reliable explanations, whereas LIME provides efficient, local explanations suited for real-time applications. The paper further discusses the challenges in applying these methods, particularly around scalability and real-time decision-making, and highlights potential future research directions, including hybrid models that combine the strengths of both SHAP and LIME.},
        keywords = {Explainable AI (XAI), Machine Learning Interpretability, SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-agnostic Explanations), Black-box Models, Model Transparency, Feature Attribution, Model-agnostic Explanations, Cooperative Game Theory, Local Explanations, Global Interpretability, Model Explainability, Bias Detection, Trust in AI, Ethical AI, Algorithm Transparency, AI Accountability, Model Evaluation, Hybrid Explanatory Models, Computational Complexity in XAI},
        month = {November},
}

Cite This Article

Dey, S., & Roy Chowdhury, T. (2024). A Comparative Survey of SHAP and LIME: Explaining Machine Learning Models for Transparent AI. International Journal of Innovative Research in Technology (IJIRT), 11(6), 827–835.
