Explainable AI for Anomaly Detection in IoT Networks Using XGBoost

  • Unique Paper ID: 178511
  • Volume: 11
  • Issue: 12
  • PageNo: 8533-8536
  • Abstract: This paper introduces a holistic approach for identifying anomalies in Internet of Things (IoT) networks utilizing the robust XGBoost classification model and explainable artificial intelligence (XAI) methods. We leverage SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and a surrogate decision tree to supplement the model's interpretability. The performance of our method is tested on the IoT-23 dataset, which covers a variety of attack vectors as well as benign network traffic. The outcomes illustrate exceptional predictive accuracy as well as substantially improved model transparency, thereby enhancing understanding and confidence in automated systems for network security.
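The abstract pairs a gradient-boosted classifier with a shallow surrogate decision tree for interpretability. A minimal sketch of that surrogate-model step is shown below; it is illustrative only, using scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost and a synthetic dataset in place of IoT-23 flows (neither the paper's features nor its hyperparameters are known here).

```python
# Hypothetical sketch of the surrogate-tree explanation step.
# GradientBoostingClassifier stands in for XGBoost; make_classification
# stands in for IoT-23 network-flow features.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Fit the black-box booster on the labelled traffic.
booster = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# 2. Fit a shallow, human-readable tree to mimic the booster's
#    *predictions* (not the true labels) -- that is what makes it
#    a surrogate model.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, booster.predict(X_train))

# Fidelity: how often the interpretable surrogate agrees with the booster.
fidelity = accuracy_score(booster.predict(X_test), surrogate.predict(X_test))
print(f"surrogate fidelity: {fidelity:.2f}")
```

The surrogate is evaluated on agreement with the black-box model (fidelity), not on ground-truth accuracy: a high-fidelity shallow tree gives a readable approximation of the booster's decision logic.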

Copyright & License

Copyright © 2025. The authors retain the copyright of this article. This is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{178511,
        author = {R Raskhith Kumar and Sagar A and Thomas Sunil and V Saketh Nivesh and Manjunath P V},
        title = {Explainable AI for Anomaly Detection in IoT Networks Using XGBoost},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {11},
        number = {12},
        pages = {8533--8536},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=178511},
        abstract = {This paper introduces a holistic approach for identifying anomalies in Internet of Things (IoT) networks utilizing the robust XGBoost classification model and explainable artificial intelligence (XAI) methods. We leverage SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and a surrogate decision tree to supplement the model's interpretability. The performance of our method is tested on the IoT-23 dataset, which covers a variety of attack vectors as well as benign network traffic. The outcomes illustrate exceptional predictive accuracy as well as substantially improved model transparency, thereby enhancing understanding and confidence in automated systems for network security.},
        keywords = {Anomaly Detection, Explainable AI, SHAP, LIME, IoT Security, XGBoost, Surrogate Models, Cybersecurity, Network Intrusion Detection, Machine Learning},
        month = may,
}
