Automatic Essay Scoring with Context-based Analysis with Cohesion and Coherence


Authors : Winarsih; Adang Suhendra; Ana Kurniawati

Volume/Issue : Volume 9 - 2024, Issue 5 - May


Google Scholar : https://tinyurl.com/3bp9mka8

Scribd : https://tinyurl.com/3bmjvbbp

DOI : https://doi.org/10.38124/ijisrt/IJISRT24MAY200



Abstract : Automatic Essay Scoring (AES) with context-based analysis of cohesion and coherence aims to develop a model that can assess essays automatically while accounting for the diversity of students' language and levels of understanding. Built on Natural Language Processing (NLP) methods within a machine learning framework, the AES system not only scores essay answers automatically but also gauges student understanding: a student's score is derived from how well the answer reflects their grasp of the question. By applying the concepts of cohesion and coherence in the assessment system, teachers can judge the quality of the answers received. The context-based essay assessment system was built to simplify and speed up the grading of essay exam answers and to achieve standardized, consistent scoring despite the diversity of answers and, where a subject has more than one grader, the diversity of assessors. An essay exam is a form of learning evaluation that poses open-ended questions whose answers vary far more than those of multiple-choice questions, and this variation makes grading difficult for lecturers and teaching staff.
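The abstract does not include the authors' scoring code, so as a rough illustration only: one common way to derive a coherence signal is to measure lexical overlap between adjacent sentences. The sketch below (all function names and the scoring heuristic are this editor's assumptions, not the paper's method) computes an essay-level coherence score as the average cosine similarity of term-frequency vectors for consecutive sentences, using only the Python standard library.

```python
import math
import re
from collections import Counter

def tokenize(sentence):
    """Lowercase word tokens; a crude stand-in for a real NLP pipeline."""
    return re.findall(r"[a-z']+", sentence.lower())

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def coherence_score(essay):
    """Average lexical overlap between adjacent sentences.

    Higher values suggest the essay stays on topic from one
    sentence to the next; 0.0 means no measurable continuity.
    """
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    if len(sentences) < 2:
        return 0.0
    vectors = [Counter(tokenize(s)) for s in sentences]
    sims = [cosine(vectors[i], vectors[i + 1])
            for i in range(len(vectors) - 1)]
    return sum(sims) / len(sims)

cohesive = "The cell stores energy. The cell releases energy when needed."
disjoint = "The cell stores energy. Paris is the capital of France."
print(coherence_score(cohesive) > coherence_score(disjoint))  # True
```

A production AES system would replace raw term frequencies with semantic representations (e.g., latent semantic analysis or neural sentence embeddings, as in the SkipFlow line of work) so that paraphrased but coherent answers are not penalized for using different words.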

Keywords : AES; Cohesion; Coherence; NLP; Machine Learning.


