Explainable Neural Model for Robust Android Malware Detection via API Call Analysis

  • Unique Paper ID: 194693
  • Volume: 12
  • Issue: 10
  • PageNo: 4924-4931
  • Abstract:
  • This research uses API call analysis to build an explainable hybrid neural model for reliable Android malware detection. The proposed method examines the behavioural patterns of Android applications by extracting API call sequences, which are crucial markers of malicious activity. A neural network-based detection model classifies applications as benign or malicious, and explainable AI techniques make the decision-making process transparent by highlighting the API features that most strongly influence the classification outcome.
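The pipeline the abstract describes (API-call features → neural classifier → feature attribution) can be illustrated with a minimal sketch. This is not the paper's implementation: the API names, the synthetic data, and the logistic "neural unit" with a gradient-times-input attribution are all hypothetical stand-ins for the model and XAI method the authors actually use.

```python
import numpy as np

# Hypothetical API-call features; not drawn from the paper's dataset.
API_FEATURES = ["sendTextMessage", "getDeviceId", "openConnection", "readContacts"]

rng = np.random.default_rng(0)
n = 200

# Synthetic data: malicious apps (label 1) invoke the first two APIs more often.
X = rng.poisson(lam=[[2, 2, 3, 1]], size=(n, 4)).astype(float)
y = rng.integers(0, 2, size=n)
X[y == 1, :2] += rng.poisson(4, size=(int(y.sum()), 2))

# Single-unit logistic model trained with plain gradient descent
# (a minimal stand-in for the neural detection model).
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y) / n)
    b -= 0.1 * (p - y).mean()

acc = ((p > 0.5) == y).mean()

# Simple attribution for one app: weight * feature value ("gradient x input"),
# ranking which API features drove this prediction.
app = X[0]
attributions = w * app
ranking = [API_FEATURES[i] for i in np.argsort(-np.abs(attributions))]
print("training accuracy:", acc)
print("most influential API:", ranking[0])
```

On this synthetic data the classifier separates the two classes well, and the attribution step surfaces the inflated API counts as the dominant features, mirroring the role of the explainability component described above.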

Copyright & License

Copyright © 2026 Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{194693,
        author = {Indurthi Meghana and Nusum Yamuna and Neeli Prathibha and Dr. R. Madana Mohana},
        title = {Explainable Neural Model for Robust Android Malware Detection via API Call Analysis},
        journal = {International Journal of Innovative Research in Technology},
        year = {2026},
        volume = {12},
        number = {10},
        pages = {4924-4931},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=194693},
        abstract = {This research uses API call analysis to build an explainable hybrid neural model for reliable Android malware detection. The proposed method examines the behavioural patterns of Android applications by extracting API call sequences, which are crucial markers of malicious activity. A neural network-based detection model classifies applications as benign or malicious, and explainable AI techniques make the decision-making process transparent by highlighting the API features that most strongly influence the classification outcome.},
        keywords = {Android Malware Detection, Explainable Artificial Intelligence (XAI), API Call Analysis, Machine Learning, Malware Classification, Android Applications.},
        month = {March},
        }

Cite This Article

Meghana, I., Yamuna, N., Prathibha, N., & Mohana, R. M. (2026). Explainable Neural Model for Robust Android Malware Detection via API Call Analysis. International Journal of Innovative Research in Technology (IJIRT), 12(10), 4924–4931.
