A Fine-Tuned Transformer Framework for the Automated Classification of Algebraic Misconceptions

  • Unique Paper ID: 180900
  • PageNo: 3648-3651
  • Abstract:
  • The manual identification of specific student errors in mathematics is essential for effective teaching but is prohibitively time-consuming at scale. While Natural Language Processing (NLP) has been suggested as a potential solution, most proposals have remained conceptual. This work moves from theory to application, presenting the architecture, implementation, and rigorous evaluation of a system designed to automatically classify common algebraic misconceptions from students’ written explanations. Our approach employs a Bidirectional Encoder Representations from Transformers (BERT) model that has been fine-tuned on a purpose-built, labeled dataset of student work. The system processes raw text and uses the fine-tuned model to categorize errors such as incorrect sign usage, flawed distribution, and conceptual mistakes with variables. Evaluated on a curated set of 2,500 student responses, our model achieves a classification accuracy of 92.5% and a weighted F1-score of 0.91. These results confirm that deep learning models can function as dependable and scalable diagnostic instruments for educators, facilitating data-informed, targeted interventions to resolve specific learning difficulties.
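
Implementation Sketch

The abstract describes the approach at a high level: fine-tune BERT on labeled student explanations, then classify new responses into misconception categories. The sketch below shows what such a pipeline typically looks like with the Hugging Face transformers library. The label names, toy responses, and hyperparameters are illustrative assumptions; the paper's dataset and training configuration are not reproduced here.

    # A minimal sketch, assuming the Hugging Face transformers library, of the
    # kind of fine-tuning pipeline the abstract describes. The label set, toy
    # responses, and hyperparameters below are illustrative assumptions.
    import torch
    from torch.utils.data import Dataset
    from transformers import (BertForSequenceClassification, BertTokenizerFast,
                              Trainer, TrainingArguments)

    # Hypothetical label set based on the error types named in the abstract.
    LABELS = ["sign_error", "distribution_error", "variable_misconception"]

    class ResponseDataset(Dataset):
        """Tokenized (written explanation, misconception label) pairs."""
        def __init__(self, texts, labels, tokenizer):
            self.enc = tokenizer(texts, truncation=True, padding=True,
                                 max_length=128)
            self.labels = labels
        def __len__(self):
            return len(self.labels)
        def __getitem__(self, i):
            item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
            item["labels"] = torch.tensor(self.labels[i])
            return item

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=len(LABELS))

    # Toy stand-ins for the curated set of 2,500 labeled student responses.
    texts = ["I moved the 3 across but kept it positive",
             "2(x + 4) = 2x + 4 because the 2 only multiplies the x",
             "x and 2x are the same so I combined them into x"]
    labels = [0, 1, 2]

    args = TrainingArguments(output_dir="bert-misconceptions",
                             num_train_epochs=3,
                             per_device_train_batch_size=8)
    Trainer(model=model, args=args,
            train_dataset=ResponseDataset(texts, labels, tokenizer)).train()

    # Inference: categorize a new written explanation.
    model.eval()
    enc = tokenizer("I distributed the 2 over the first term only",
                    return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        pred = model(**enc).logits.argmax(dim=-1).item()
    print(LABELS[pred])

The reported weighted F1-score of 0.91 corresponds to per-class F1 averaged with class-support weights, i.e. sklearn.metrics.f1_score(y_true, y_pred, average="weighted").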

Copyright & License

Copyright © 2026. The authors retain the copyright of this article. It is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{180900,
        author = {Harish Kirve and Prajakta Ghugare and Ayush Telrandhe and Prajwal Mahale},
        title = {A Fine-Tuned Transformer Framework for the Automated Classification of Algebraic Misconceptions},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        number = {1},
        pages = {3648-3651},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=180900},
        abstract = {The manual identification of specific student errors in mathematics is essential for effective teaching but is prohibitively time-consuming at scale. While Natural Language Processing (NLP) has been suggested as a potential solution, most proposals have remained conceptual. This work moves from theory to application, presenting the architecture, implementation, and rigorous evaluation of a system designed to automatically classify common algebraic misconceptions from students’ written explanations. Our approach employs a Bidirectional Encoder Representations from Transformers (BERT) model that has been fine-tuned on a purpose-built, labeled dataset of student work. The system processes raw text and uses the fine-tuned model to categorize errors such as incorrect sign usage, flawed distribution, and conceptual mistakes with variables. Evaluated on a curated set of 2,500 student responses, our model achieves a classification accuracy of 92.5% and a weighted F1-score of 0.91. These results confirm that deep learning models can function as dependable and scalable diagnostic instruments for educators, facilitating data-informed, targeted interventions to resolve specific learning difficulties.},
        keywords = {Natural Language Processing, Educational Technology, Mathematics Education, Misconception Analysis, BERT, Error Classification, Learning Analytics},
        month = {June},
        }

Cite This Article

Kirve, H., Ghugare, P., Telrandhe, A., & Mahale, P. (2025). A Fine-Tuned Transformer Framework for the Automated Classification of Algebraic Misconceptions. International Journal of Innovative Research in Technology (IJIRT), 12(1), 3648–3651.
