Potential of Artificial Intelligence Tools for Text Evaluation and Feedback Provision
Format: Article
Language: English
Published: MGIMO University Press, 2025-03-01
Series: Дискурс профессиональной коммуникации (Professional Communication Discourse)
Online Access: https://www.pdc-journal.com/jour/article/view/442
Summary: The article explores the potential of generative artificial intelligence (AI) for assessing written work and providing feedback on it. The goal of this research is to determine the possibilities and limitations of generative AI when used to evaluate students' written production and to provide feedback. To accomplish this aim, a systematic review of twenty-two original studies was conducted. The selected studies were carried out in both Russian and international contexts, with results published between 2022 and 2025. It was found that the criteria-based assessments made by generative models align with those of instructors, and that generative AI surpasses human evaluators in assessing language and argumentation. However, the reliability of this evaluation is negatively affected by the instability of repeated assessments, the hallucinations of generative models, and their limited ability to account for contextual nuances. Although feedback from generative AI is detailed and constructive, it is often insufficiently specific and overly verbose, which can hinder student comprehension. Feedback from generative models primarily targets local deficiencies, whereas human evaluators attend to global issues, such as incomplete alignment of the content with the assigned topic. Unlike instructors, generative AI provides template-based feedback, avoiding the indirect phrasing and leading questions that foster the development of self-regulation skills. Nevertheless, these shortcomings can be mitigated through follow-up queries to the generative model. It was also found that students are open to receiving feedback from generative AI; however, they prefer to receive it from instructors and peers. The results are discussed in the context of foreign language instructors using generative models to evaluate written work and formulate feedback.
The conclusion emphasises the necessity of a critical approach to using generative models in the assessment of written work and the importance of training instructors to interact effectively with these technologies.
ISSN: 2687-0126