Explainable AI Based CCTV Surveillance for Intelligent Threat Detection and Transparent Decision Making

  • Unique Paper ID: 195475
  • Volume: 12
  • Issue: 11
  • PageNo: 2646-2651
  • Abstract:
  • The integration of Artificial Intelligence (AI) in Closed-Circuit Television (CCTV) surveillance systems has revolutionized threat detection capabilities; however, the "black-box" nature of deep learning models poses significant challenges in transparency, accountability, and trust. This paper presents a comprehensive framework for Explainable AI (XAI)-based CCTV surveillance that combines intelligent threat detection with transparent decision-making mechanisms. We propose a multi-layered architecture incorporating state-of-the-art deep learning models enhanced with interpretability techniques including Gradient-weighted Class Activation Mapping (Grad-CAM), SHAP (SHapley Additive exPlanations), and attention mechanisms. Our system achieves 94.7% accuracy in real-time threat detection while providing human-interpretable explanations for each decision. Experimental results on benchmark datasets demonstrate that our XAI-enhanced surveillance system maintains high performance while significantly improving operator trust and reducing false alarm rates by 37% compared to traditional black-box approaches.
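The abstract mentions Grad-CAM as one of the interpretability techniques. For readers unfamiliar with it, the core computation can be sketched in a few lines: the gradients of the target class score are global-average-pooled into per-channel weights, which then form a ReLU-weighted combination of the convolutional feature maps. The sketch below is a generic illustration on synthetic arrays, not the paper's implementation; the function name and shapes are assumptions for demonstration.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heatmap from a conv layer's activations and the
    gradients of the target class score w.r.t. those activations.

    activations, gradients: arrays of shape (K, H, W) for K feature maps.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # alpha_k: global-average-pool the gradients over the spatial dims
    alphas = gradients.mean(axis=(1, 2))                      # shape (K,)
    # weighted combination of feature maps, then ReLU
    cam = np.maximum(np.tensordot(alphas, activations, axes=1), 0.0)
    # normalize for visualization (guard against an all-zero map)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# synthetic example: 4 feature maps of size 8x8
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 8))
grads = rng.standard_normal((4, 8, 8))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)
```

In a real pipeline the activations and gradients would come from a backward pass through the detection network; here they are random placeholders that only illustrate the arithmetic.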

Copyright & License

Copyright © 2026. The authors retain the copyright of this article. This article is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{195475,
        author = {S. Janani and V. Oviyaa and Abhishek Kumar and G. V. Shrichandran},
        title = {Explainable AI Based CCTV Surveillance for Intelligent Threat Detection and Transparent Decision Making},
        journal = {International Journal of Innovative Research in Technology},
        year = {2026},
        volume = {12},
        number = {11},
        pages = {2646--2651},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=195475},
        abstract = {The integration of Artificial Intelligence (AI) in Closed-Circuit Television (CCTV) surveillance systems has revolutionized threat detection capabilities; however, the "black-box" nature of deep learning models poses significant challenges in transparency, accountability, and trust. This paper presents a comprehensive framework for Explainable AI (XAI)-based CCTV surveillance that combines intelligent threat detection with transparent decision-making mechanisms. We propose a multi-layered architecture incorporating state-of-the-art deep learning models enhanced with interpretability techniques including Gradient-weighted Class Activation Mapping (Grad-CAM), SHAP (SHapley Additive exPlanations), and attention mechanisms. Our system achieves 94.7% accuracy in real-time threat detection while providing human-interpretable explanations for each decision. Experimental results on benchmark datasets demonstrate that our XAI-enhanced surveillance system maintains high performance while significantly improving operator trust and reducing false alarm rates by 37% compared to traditional black-box approaches.},
        keywords = {Explainable AI, CCTV Surveillance, Threat Detection, Deep Learning, Transparency, Interpretability, Computer Vision, Security Systems},
        month = {April},
        }

Cite This Article

Janani, S., Oviyaa, V., Kumar, A., & Shrichandran, G. V. (2026). Explainable AI Based CCTV Surveillance for Intelligent Threat Detection and Transparent Decision Making. International Journal of Innovative Research in Technology (IJIRT), 12(11), 2646–2651.