Explainable AI-Driven Models for Early Academic Risk Prediction in Higher Education

  • Unique Paper ID: 193909
  • Volume: 12
  • Issue: 10
  • PageNo: 2295-2300
  • Abstract:
  • Early detection of academically vulnerable students remains a critical challenge in higher education, particularly in the context of increasing dropout rates and heterogeneous learning environments. This study proposes an explainable artificial intelligence (XAI)-driven framework for early academic risk prediction that balances predictive performance with interpretability. Using demographic, behavioral, and institutional attributes, while explicitly excluding intermediate assessment grades to prevent data leakage, academic risk is formulated as a binary classification problem. Two ensemble learning models, Random Forest and Extreme Gradient Boosting (XGBoost), are implemented and rigorously evaluated. To enhance transparency, permutation-based and model-intrinsic feature importance analyses are conducted to identify key predictors influencing classification outcomes. Experimental results demonstrate that XGBoost achieves superior recall and F1-score for the minority at-risk class, making it particularly suitable for early-warning systems. The findings confirm that explainable ensemble models can provide reliable and interpretable decision-support mechanisms for proactive academic intervention strategies.
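The pipeline the abstract describes (imbalanced binary classification with an ensemble model, evaluated on minority-class recall and F1, plus permutation-based feature importance) can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic data; the feature set, class ratio, and hyperparameters are placeholder assumptions, not the authors' actual dataset or configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import f1_score, recall_score
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data standing in for demographic, behavioral, and
# institutional attributes (intermediate grades excluded to avoid leakage).
X, y = make_classification(n_samples=1000, n_features=8,
                           weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Random Forest with class weighting to counter the minority at-risk class.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# Minority-class (at-risk) recall and F1, the metrics the paper emphasizes.
recall = recall_score(y_test, pred, pos_label=1)
f1 = f1_score(y_test, pred, pos_label=1)

# Permutation importance: mean score drop when each feature is shuffled,
# giving a model-agnostic ranking of predictors.
perm = permutation_importance(clf, X_test, y_test, n_repeats=10,
                              random_state=0)
top = np.argsort(perm.importances_mean)[::-1][:3]
print(f"recall={recall:.2f} f1={f1:.2f} top_features={top.tolist()}")
```

An XGBoost variant would swap in `xgboost.XGBClassifier` (with `scale_pos_weight` for the imbalance) while the evaluation and permutation-importance steps stay unchanged.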

Copyright & License

Copyright © 2026 Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{193909,
        author = {Mahesh R. Sananse and Maithili Arjunwadkar},
        title = {Explainable AI-Driven Models for Early Academic Risk Prediction in Higher Education},
        journal = {International Journal of Innovative Research in Technology},
        year = {2026},
        volume = {12},
        number = {10},
        pages = {2295-2300},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=193909},
        abstract = {Early detection of academically vulnerable students remains a critical challenge in higher education, particularly in the context of increasing dropout rates and heterogeneous learning environments. This study proposes an explainable artificial intelligence (XAI)-driven framework for early academic risk prediction that balances predictive performance with interpretability. Using demographic, behavioral, and institutional attributes, while explicitly excluding intermediate assessment grades to prevent data leakage, academic risk is formulated as a binary classification problem. Two ensemble learning models, Random Forest and Extreme Gradient Boosting (XGBoost), are implemented and rigorously evaluated. To enhance transparency, permutation-based and model-intrinsic feature importance analyses are conducted to identify key predictors influencing classification outcomes. Experimental results demonstrate that XGBoost achieves superior recall and F1-score for the minority at-risk class, making it particularly suitable for early-warning systems. The findings confirm that explainable ensemble models can provide reliable and interpretable decision-support mechanisms for proactive academic intervention strategies.},
        keywords = {Explainable AI, Ensemble Learning, Academic Risk Prediction, Educational Data Mining, Higher Education Analytics},
        month = {March},
        }

Cite This Article

Sananse, M. R., & Arjunwadkar, M. (2026). Explainable AI-Driven Models for Early Academic Risk Prediction in Higher Education. International Journal of Innovative Research in Technology (IJIRT), 12(10), 2295–2300.
