All-in-One RAG Assessment Engine: Dynamic Creation, Automated Evaluation, and University-Centric Output

  • Unique Paper ID: 191950
  • Volume: 12
  • Issue: 8
  • PageNo: 8308-8314
  • Abstract:
  • The manual preparation of question papers and the evaluation of answers are laborious processes prone to personal bias. This study proposes an integrated Retrieval Augmented Generation (RAG) assessment module that automates question paper creation and answer evaluation in strict compliance with the examination patterns followed by the institution. Questions are generated with the Google Gemini model using specially designed queries, and the generated examination papers adhere to the institution's approved template. The evaluation module combines semantic similarity measurement with the open-source MiniVLM model. The system is implemented on a FastAPI and MongoDB stack with a Next.js frontend.
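The abstract's evaluation module rests on semantic similarity between a student answer and a model answer. The paper does not publish its code, so the sketch below only illustrates the general idea: the `embed()` function here is a toy bag-of-words vector (to keep the example self-contained), whereas a real system would substitute a sentence-encoder model, and the mapping from similarity to marks is an assumed linear scaling, not the authors' method.

```python
# Hedged sketch of semantic-similarity answer scoring. All names and the
# marks-mapping below are illustrative assumptions, not the paper's code.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words term counts.
    A production system would use a trained sentence encoder instead."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def score_answer(reference: str, answer: str, max_marks: float) -> float:
    """Assumed scoring rule: scale similarity to the question's marks."""
    sim = cosine_similarity(embed(reference), embed(answer))
    return round(sim * max_marks, 2)

reference = "photosynthesis converts light energy into chemical energy"
student = "plants use photosynthesis to turn light energy into chemical energy"
marks = score_answer(reference, student, max_marks=5.0)
```

Swapping `embed()` for a neural sentence encoder changes only one function; the cosine-and-scale pipeline stays the same, which is why this pattern is a common baseline for automated short-answer grading.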

Copyright & License

Copyright © 2026 Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{191950,
        author = {Edric Jeffrey Sam and L. Mary Louis},
        title = {All-in-One RAG Assessment Engine: Dynamic Creation, Automated Evaluation, and University-Centric Output},
        journal = {International Journal of Innovative Research in Technology},
        year = {2026},
        volume = {12},
        number = {8},
        pages = {8308-8314},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=191950},
        abstract = {The manual preparation of question papers and the evaluation of answers are laborious processes prone to personal bias. This study proposes an integrated Retrieval Augmented Generation (RAG) assessment module that automates question paper creation and answer evaluation in strict compliance with the examination patterns followed by the institution. Questions are generated with the Google Gemini model using specially designed queries, and the generated examination papers adhere to the institution's approved template. The evaluation module combines semantic similarity measurement with the open-source MiniVLM model. The system is implemented on a FastAPI and MongoDB stack with a Next.js frontend.},
        keywords = {Retrieval Augmented Generation, FastAPI, MongoDB, MiniVLM, Google Gemini, Automation},
        month = {January},
        }
