Automatic Subjective Answer Evaluation

  • Unique Paper ID: 176808
  • Pages: 6268–6271
  • Abstract: Subjective questions and responses provide an open-ended assessment of a student’s understanding, allowing them to express their knowledge in a personalized and conceptual manner. However, the manual evaluation of such answers is often time-consuming, inconsistent, and prone to bias. This project proposes an automated system for evaluating subjective answers using Machine Learning (ML) and Natural Language Processing (NLP) techniques. The system utilizes various NLP methods and models such as Word2Vec, WordNet, Word Mover’s Distance (WMD), Cosine Similarity, Term Frequency-Inverse Document Frequency (TF-IDF), and Multinomial Naive Bayes (MNB) to analyze and score answers. By comparing student responses to reference answers on the basis of semantic similarity and keyword relevance, the model predicts a score with high accuracy. The system aims to improve grading consistency, reduce evaluation time, and enhance the overall efficiency of academic assessments. Experimental results show that the WMD technique performs better than Cosine Similarity in maintaining semantic integrity, and with sufficient training, the machine learning model is capable of functioning autonomously.
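The keyword-relevance step the abstract describes, scoring a student response against a reference answer with TF-IDF weighting and Cosine Similarity, can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the smoothed IDF variant, the tokenizer, and the example sentences are all assumptions chosen for a self-contained demo.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Hypothetical tokenizer: lowercase alphabetic word runs only.
    return re.findall(r"[a-z]+", text.lower())

def tfidf_vectors(docs):
    """Build TF-IDF vectors over a shared vocabulary.

    Uses a common smoothed IDF, log((1+N)/(1+df)) + 1, as an
    assumption; the paper does not specify its exact weighting.
    """
    tokenized = [tokenize(d) for d in docs]
    vocab = sorted({w for doc in tokenized for w in doc})
    n = len(docs)
    df = {w: sum(1 for doc in tokenized if w in doc) for w in vocab}
    idf = {w: math.log((1 + n) / (1 + df[w])) + 1 for w in vocab}
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append([tf[w] / len(doc) * idf[w] for w in vocab])
    return vectors

def cosine(a, b):
    # Cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical reference answer and two student responses.
reference = "Photosynthesis converts light energy into chemical energy in plants."
good = "Plants use photosynthesis to turn light energy into chemical energy."
poor = "The mitochondria is the powerhouse of the cell."

ref_v, good_v, poor_v = tfidf_vectors([reference, good, poor])
print("paraphrase:", round(cosine(ref_v, good_v), 3))
print("off-topic:", round(cosine(ref_v, poor_v), 3))
```

The paraphrase scores far higher than the off-topic answer, which is the signal a grader threshold or a downstream classifier such as Multinomial Naive Bayes could consume. Note this lexical approach misses synonym-level matches; that is the gap the abstract's Word2Vec and WMD components address.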

Copyright & License

Copyright © 2026. The authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{176808,
        author = {Prasad Dattatray Jagdale and Chinmay Raje and Prathamesh Pandhare and Wrushabhkumar Bangar and Pritam Jadhav},
        title = {Automatic Subjective Answer Evaluation},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {11},
        number = {11},
        pages = {6268--6271},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=176808},
        abstract = {Subjective questions and responses provide an open-ended assessment of a student’s understanding, allowing them to express their knowledge in a personalized and conceptual manner. However, the manual evaluation of such answers is often time-consuming, inconsistent, and prone to bias. This project proposes an automated system for evaluating subjective answers using Machine Learning (ML) and Natural Language Processing (NLP) techniques. The system utilizes various NLP methods and models such as Word2Vec, WordNet, Word Mover’s Distance (WMD), Cosine Similarity, Term Frequency-Inverse Document Frequency (TF-IDF), and Multinomial Naive Bayes (MNB) to analyze and score answers. By comparing student responses to reference answers on the basis of semantic similarity and keyword relevance, the model predicts a score with high accuracy. The system aims to improve grading consistency, reduce evaluation time, and enhance the overall efficiency of academic assessments. Experimental results show that the WMD technique performs better than Cosine Similarity in maintaining semantic integrity, and with sufficient training, the machine learning model is capable of functioning autonomously.},
        keywords = {Subjective Answer Evaluation, Natural Language Processing (NLP), Machine Learning (ML), Word2Vec, Word Mover’s Distance (WMD), Cosine Similarity, TF-IDF, Multinomial Naive Bayes (MNB), Semantic Similarity, Automated Grading, Educational Technology, Text Preprocessing.},
        month = {April},
        }

Cite This Article

Jagdale, P. D., Raje, C., Pandhare, P., Bangar, W., & Jadhav, P. (2025). Automatic Subjective Answer Evaluation. International Journal of Innovative Research in Technology (IJIRT), 11(11), 6268–6271.
