Authors :
S Zindove; S Chaputsira
Volume/Issue :
Volume 9 - 2024, Issue 7 - July
Google Scholar :
https://tinyurl.com/4sze97pp
Scribd :
https://tinyurl.com/wznfcv2k
DOI :
https://doi.org/10.38124/ijisrt/IJISRT24JUL1710
Abstract :
Automated grading of short-answer questions is a challenging task that involves understanding and evaluating free-text responses. This research presents an innovative model that combines the capabilities of the language model all-mpnet-base-v2 with a machine learning-based lenience adjustment mechanism to enhance the accuracy and fairness of automated grading systems. The proposed model uses all-mpnet-base-v2 for natural language understanding and feature extraction from student responses. To address the variability in acceptable answers and provide fair grading, a machine learning model is integrated to adjust the level of lenience dynamically. This dual approach ensures that the grading system can handle a wide range of responses while maintaining consistency and reliability. Experimental results demonstrate that combining all-mpnet-base-v2 with the lenience adjustment model significantly improves grading accuracy compared to traditional methods. The model represents a notable advance in educational technology, offering a robust solution for automated grading that can adapt to diverse educational contexts and requirements.
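The abstract does not give implementation details, but the pipeline it describes (sentence embeddings from all-mpnet-base-v2 plus a learned lenience adjustment) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' code: the sentence-transformers and scikit-learn calls are standard, while the lenience features, the Ridge regressor, the base threshold of 0.65, and the synthetic calibration data are all assumptions introduced here for illustration.

```python
# Hypothetical sketch: all-mpnet-base-v2 similarity scoring with a learned
# lenience offset. Feature set, regressor, threshold, and training data are
# illustrative assumptions, not the authors' published implementation.
import numpy as np
from sentence_transformers import SentenceTransformer, util
from sklearn.linear_model import Ridge

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

def semantic_similarity(reference: str, response: str) -> float:
    """Cosine similarity between the reference answer and the student response."""
    emb = model.encode([reference, response], convert_to_tensor=True)
    return float(util.cos_sim(emb[0], emb[1]))

def lenience_features(reference: str, response: str) -> np.ndarray:
    """Simple surface features for the lenience regressor (assumed, not from the paper)."""
    ref_words = set(reference.lower().split())
    resp_words = set(response.lower().split())
    return np.array([
        len(response.split()) / max(len(reference.split()), 1),  # length ratio
        len(ref_words & resp_words) / max(len(ref_words), 1),    # word overlap
    ])

# Synthetic stand-in for human-graded calibration data: each row is a feature
# vector, each target a lenience offset inferred from grader behaviour.
X_train = np.array([[1.0, 0.8], [0.4, 0.2], [1.3, 0.9], [0.8, 0.5]])
y_train = np.array([0.05, -0.05, 0.10, 0.00])
lenience_model = Ridge(alpha=1.0).fit(X_train, y_train)

def grade(reference: str, response: str, base_threshold: float = 0.65) -> bool:
    """Accept a response if its similarity clears a dynamically adjusted threshold."""
    sim = semantic_similarity(reference, response)
    offset = lenience_model.predict(
        lenience_features(reference, response).reshape(1, -1))[0]
    # A positive offset lowers the bar (more lenient); a negative one raises it.
    return sim >= base_threshold - offset

print(grade("Photosynthesis converts light energy into chemical energy.",
            "Plants turn sunlight into chemical energy via photosynthesis."))
```

In the paper's framing, the lenience offset would be learned from human-graded calibration examples rather than the toy values above, so that the threshold tightens or relaxes per question and response.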
Keywords :
all-mpnet-base-v2, Lenience, Convolutional Neural Networks, Pretrained Models.