AI in Research Ethics

AI can help research have more real-world impact, argues paper

Researchers can harness the power of artificial intelligence to give their work a bigger real-world impact, but they must follow ethical steps so that trust in academia is not “eroded”, a new paper has warned.

Translational research – which looks to bridge the gap between scientific discovery and real-world application – could be “accelerated” by AI and used to improve outcomes, according to the Higher Education Policy Institute (Hepi) and Taylor & Francis paper.

AI could make research “more discoverable”, it argues, by supporting search functions that work with concepts and ideas rather than just keywords, by enabling faster analysis of large and complex datasets, and by improving links between disciplines.
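The paper does not describe the underlying technology, but concept-level search of this kind is typically built on text embeddings rather than keyword matching. The sketch below is a minimal illustration in Python, assuming the sentence-transformers library and the open all-MiniLM-L6-v2 model (neither is named in the paper); it ranks abstracts by semantic similarity to a query even when the wording does not overlap.

```python
# Illustrative only: concept-level search over abstracts using a
# general-purpose sentence-embedding model, not the paper's own tooling.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open embedding model

abstracts = [
    "A machine learning approach to predicting protein folding pathways.",
    "Community policing strategies and public trust in urban areas.",
    "Transformer architectures for cross-lingual document retrieval.",
]

# A conceptual query with little keyword overlap with the relevant abstract.
query = "using AI to understand how proteins take shape"

abstract_vecs = model.encode(abstracts, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

# Rank abstracts by cosine similarity to the query's meaning, not its words.
scores = util.cos_sim(query_vec, abstract_vecs)[0]
for score, text in sorted(zip(scores.tolist(), abstracts), reverse=True):
    print(f"{score:.2f}  {text}")
```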

For example, the paper highlights work done at the University of Warwick where researchers have developed an AI tool to assist police investigations into gender-based violence and stalking.

It can analyse thousands of text messages to highlight conversations relevant to the investigation, including threats of harm, saving officers hundreds of hours because they no longer have to read through all the evidence manually.
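The article does not explain how the Warwick tool works. As a hypothetical sketch of one standard approach to this kind of triage, the Python example below trains a simple TF-IDF text classifier with scikit-learn and scores incoming messages so the highest-risk ones can be surfaced for human review; the training examples are purely illustrative.

```python
# Hypothetical sketch, not the University of Warwick tool: flagging messages
# that may contain threats, using TF-IDF features and logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; a real system would need a large, carefully labelled corpus.
train_texts = [
    "I know where you live and I will hurt you",
    "stop ignoring me or you will regret it",
    "you can't hide from me, I'm watching you",
    "are we still meeting for lunch tomorrow?",
    "thanks for sending the documents over",
    "happy birthday, hope you have a great day",
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = potentially threatening, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Score new messages and print a risk estimate for each, for human review.
messages = [
    "see you at the gym later",
    "if you talk to him again you'll be sorry",
]
for msg, prob in zip(messages, model.predict_proba(messages)[:, 1]):
    print(f"risk={prob:.2f}  {msg}")
```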

Using the analogy of a river, the authors write: “Using AI in translational research holds great promise, but also risks polluting the water: compromised data quality, murky algorithms, ethical quandaries and biases can seep in through unnoticed side channels. Unchecked, these pollutants can cloud judgements, distort results and erode trust in the research process.”

It warns that the “over-reliance on AI tools may reduce critical thinking and practical skills” and could result in early-career academics “failing” to develop the critical skills needed to evaluate the effectiveness of AI results. 

Rose Luckin, professor of learner-centred design at UCL, told the report authors that AI risks creating an “expertise paradox” whereby “AI tools are most safely used by experts who can spot errors, but are most attractive to novices who lack the skills to do tasks manually”.

Consequently, Luckin argues “metacognition is the antidote to deskilling”, and that “the solution is not to avoid using AI, but to pair this use with explicit metacognition training”.

The paper claims “metacognitive training turns AI from a deskilling risk into an upskilling opportunity”, and that “the AI revolution represents a pivotal moment where humans need to become more intelligent, not less”.

Rose Stephenson, director of policy and strategy at Hepi and co-author of the report, said while the UK has “extraordinary” research qualities, “too many ideas struggle to make the journey from discovery to real-world use”. 

“AI has the potential to support this process by speeding up analysis, connecting disciplines and improving access to research. However, these benefits will only be realised if AI is used transparently, ethically and in ways that strengthen, rather than replace, human expertise.”

The paper notes that “some previously relied-on frameworks are not seamlessly transitioning into the world of AI-enabled research”, including intellectual property laws.

“In translational research outdated intellectual property agreements and under-examined accountability norms can act like boulders lodged in the current. They do not contaminate the water itself, but they can force abrupt detours and create bottlenecks. As AI tools become more widely used, removing or redesigning these obstructions will be critical to keeping the river navigable.”

The paper recommends that funders consider requiring adherence to UK Research Integrity Office guidance as a condition of funding for research involving AI, and that the Research Excellence Framework provide incentives for the use of open-source datasets. It further recommends that institutions make training on how to use AI with integrity mandatory for research staff.

“By investing in interdisciplinary expertise, ethical governance and the infrastructure needed to support and share AI-enabled tools, the UK can strengthen the entire research ecosystem,” it says.
