AI vs Teacher Feedback
Today, I would like to share a recent AI SoTL article, “Generative AI offers more, but students revise less: Comparing the effects of teacher and AI feedback on student essay revisions,” by Farrokhnia et al. (2026) (https://doi.org/10.1186/s41239-026-00579-9).
The authors examined the effects of AI feedback on student essay revision, compared to traditional teacher feedback, in the context of graduate argumentative writing. The study randomly assigned 70 graduate students to one of three conditions: human teacher feedback, GenAI feedback generated with a standard zero-shot prompt, and GenAI feedback generated with a chain-of-thought (CoT) prompt designed to elicit step-by-step reasoning. All essays were evaluated with a rubric based on Toulmin’s model of argumentation, giving the study a learning-sciences lens on argument structure development, cognitive engagement, and the scaffolding of higher-order thinking.
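The difference between the zero-shot and CoT conditions is essentially a prompt-design choice. The study’s actual prompts are not reproduced in this post; the sketch below is only an illustration of how a plain instruction might differ from a CoT prompt that asks the model to work through Toulmin’s elements (claim, data, warrant, backing, qualifier, rebuttal) before giving feedback. The template wording and the build_prompt helper are my own assumptions, not the authors’ materials.

```python
# Illustrative only: hypothetical prompt templates contrasting the two AI
# feedback conditions described in the study. These are NOT the authors'
# actual prompts; the wording below is an assumption for this sketch.

ZERO_SHOT_PROMPT = (
    "You are a writing tutor. Give feedback on the argumentative essay "
    "below, focusing on the quality of its argumentation.\n\nEssay:\n{essay}"
)

# A chain-of-thought variant asks the model to reason step by step through
# Toulmin's elements before producing its feedback.
COT_PROMPT = (
    "You are a writing tutor. Before writing any feedback, reason step by step:\n"
    "1. Identify the essay's main claim.\n"
    "2. List the data (evidence) offered for that claim.\n"
    "3. Identify the warrant linking the data to the claim, and any backing.\n"
    "4. Note qualifiers and rebuttals the writer addresses or misses.\n"
    "5. Only then, write feedback explaining what to revise and why.\n"
    "\nEssay:\n{essay}"
)

def build_prompt(essay: str, use_cot: bool = True) -> str:
    """Fill the chosen template with a student's essay text."""
    template = COT_PROMPT if use_cot else ZERO_SHOT_PROMPT
    return template.format(essay=essay)

if __name__ == "__main__":
    sample_essay = "Social media improves civic engagement because ..."
    print(build_prompt(sample_essay, use_cot=True))
```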
Findings
Results showed that CoT prompting produced higher quality feedback than both zero-shot AI and human teacher feedback, indicating that prompts aligned with complex reasoning tasks can lead AI to generate outputs more closely connected to the cognitive demands of argumentative writing. However, higher feedback quality did not translate into significantly better revision outcomes compared to the teacher feedback condition. Teacher feedback, though rated lower in quality, supported revisions just as effectively as AI feedback, suggesting that the mere quality of feedback does not guarantee learning transfer; learners’ engagement with and uptake of feedback are equally critical. Additionally, AI feedback quality was correlated with the quality of students’ initial drafts, whereas teacher feedback quality was not, highlighting differences in how feedback sources relate to learners’ baseline performance. These findings point to the potential of hybrid intelligent feedback systems where teachers guide students in interpreting and applying AI outputs as part of scaffolded learning experiences.
From the perspective of the learning sciences, this research underscores the central roles of metacognitive engagement, feedback interpretation processes, and cognitive scaffolding in writing development. It suggests that AI systems should be designed to support not only content generation but also self-regulated revision strategies, such as prompting students to reflect on the alignment between feedback and learning objectives. Explicitly integrating AI feedback within curriculum frameworks, for example through instructional support that helps learners translate AI suggestions into domain-specific improvements, may amplify cognitive gains and help sustain deeper learning outcomes.
Reference
Farrokhnia, M., Latifi, S., Papadopoulos, P. M., Hogenkamp, L., Gijlers, H., Khosravi, H., & Noroozi, O. (2026). Generative AI offers more, but students revise less: Comparing the effects of teacher and AI feedback on student essay revisions. International Journal of Educational Technology in Higher Education, 23, Article 6. https://doi.org/10.1186/s41239-026-00579-9

