Academics risk ‘losing craft of feedback’ if outsourced to AI

Artificial intelligence tools risk undoing “20 years’ worth of progress” in developing effective marking and feedback practices within universities, with academics themselves at risk of losing core skills because of heavy reliance on technology, a paper has argued.

The study, published in the journal Assessment & Evaluation in Higher Education, argues that AI tools that merely produce written comments reduce feedback from a “process” to a “product”.

“Designing AI systems that generate comments, no matter how copious or instantaneous, represents the replication of outdated representations of feedback within new modes of technology,” it says.

It comes as universities increasingly look to AI tools to reduce workloads amid dwindling resources and job cuts. In the UK, a trial by the non-profit organisation Jisc is currently assessing how AI feedback tools could be used in assessment processes to relieve workload pressures on academics.

Naomi Winstone, professor of educational psychology at the University of Surrey and lead author of the report, told Times Higher Education that there was a risk that “the craft of feedback” and the pedagogical research behind it may begin to “erode” if it is outsourced to AI.

“There’s about 20 years of research which has been really pushing this message that feedback is not a product, it’s a process. Then AI comes along, which is able to produce that product very, very quickly and promises to solve all these challenges with large class sizes and resource constraints, but what happens to that scholarship we must respect?”

She said while it may be “attractive” for universities to turn to AI assessment tools for productivity purposes, she believed it was a “false economy”, as “we have to think about what’s lost, not only about what’s gained”.

“If we think about all of the bits that go into feedback – conversations in the classroom, when a student chats to us in the corridor, how we think about what we’re writing and think about who we’re writing for – all those parts of the process AI can’t at the moment replicate, and I don’t think it ever could, because there’s a real psychological connection that’s very difficult to replicate.”

Instead, the paper calls for a “care-full” approach to feedback, which has a “respect for scholarship”, by centring a multilayered approach to marking.

It says AI can still play a role in assessment processes. For example, it may be effective in providing longitudinal analysis of the feedback a student is given across their university experience, which is currently limited as students often have different lecturers for different modules and assessments.

AI may also be able to “synthesise multiple bits of feedback and pull out key messages” from across their assessments, which may provide greater insight into their progression, said Winstone.

The paper also argues that outsourcing feedback to AI could lead to risks of “stagnation in educators’ skill in the craft of feedback”.

Winstone noted that the impact of outsourcing core university functions on academics themselves is “very rarely discussed”, and the effect on their ability to interact with students needs to be more carefully considered. 

“What I very rarely hear people talk about in this conversation is what’s lost for teachers if they don’t engage in assessment and feedback? One of the most important purposes of assessment for me as a teacher is getting a sense of where my students are, or where they might be struggling. If we outsource assessment and feedback too much, we’re losing one of the most important sources of feedback for us as teachers on our teaching, which is what we get through assessment.”