Demystifying Interpretable AI in Finance: A Review of SHAP and LIME

  • Unique Paper ID: 185501
  • Volume: 12
  • Issue: 5
  • PageNo: 1633-1637
  • Abstract: SHAP and LIME have become essential tools for interpreting complex machine learning models, particularly in finance, where predictive systems influence critical decisions and economic stability. These methods provide detailed insights into how algorithms make predictions across a wide range of financial tasks, including credit scoring, fraud detection, and environmental, social, and governance (ESG) evaluation. This review compiles findings from recent studies that apply SHAP and LIME in financial contexts and compares their theoretical foundations, practical effectiveness, and current limitations. It also considers the direction of ongoing improvements aimed at achieving scalability, reliability, and domain adaptation. Explainable artificial intelligence is shown to be a key component of transparency and accountability in financial technology, though much progress is still needed before interpretability becomes standard practice across the finance sector.
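
For readers less familiar with the two methods, the sketch below shows how SHAP and LIME are typically invoked on a tabular classifier. It is a minimal illustration, assuming the open-source shap, lime, and scikit-learn packages; the synthetic data, feature names, and model are hypothetical stand-ins for a credit-scoring task rather than material from the reviewed studies.

# Minimal sketch: SHAP and LIME on a synthetic credit-scoring-style classifier.
# Data, feature names, and model choice are illustrative assumptions only.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["income", "debt_ratio", "age", "num_late_payments"]  # hypothetical
X, y = make_classification(n_samples=500, n_features=4, n_informative=4,
                           n_redundant=0, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# SHAP: additive Shapley-value attributions for every prediction;
# averaging their magnitudes gives a global view of feature importance.
shap_values = shap.TreeExplainer(model).shap_values(X)
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))

# LIME: fits a local surrogate model around one individual prediction.
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["reject", "approve"], mode="classification")
explanation = lime_explainer.explain_instance(
    X[0], model.predict_proba, num_features=4)
print(explanation.as_list())

In this toy setting the SHAP call is used for a dataset-wide importance summary and the LIME call for a single applicant's prediction, which is one common way of using the two methods side by side.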

Copyright & License

Copyright © 2025. Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{185501,
        author = {Sarthak Durgesh Marathe},
        title = {Demystifying Interpretable AI in Finance: A Review of SHAP and LIME},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        number = {5},
        pages = {1633-1637},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=185501},
        abstract = {SHAP and LIME have become essential tools for interpreting complex machine learning models, particularly in finance, where predictive systems influence critical decisions and economic stability. These methods provide detailed insights into how algorithms make predictions across a wide range of financial tasks, including credit scoring, fraud detection, and environmental, social, and governance (ESG) evaluation. This review compiles findings from recent studies that apply SHAP and LIME in financial contexts and compares their theoretical foundations, practical effectiveness, and current limitations. It also considers the direction of ongoing improvements aimed at achieving scalability, reliability, and domain adaptation. Explainable artificial intelligence is shown to be a key component of transparency and accountability in financial technology, though much progress is still needed before interpretability becomes standard practice across the finance sector.},
        keywords = {},
        month = {October},
        }

