Looming EU AI act could force universities to ‘change everything’
Many European universities may have to “change everything” about how they use artificial intelligence once landmark European Union regulations come into force, an expert has warned.
Thomas Jørgensen, director of policy coordination and foresight at the European University Association, singled out the growing practice of academics using tools such as ChatGPT to assess student work as a particular concern. Such informal uses of AI could potentially fall foul of the EU Artificial Intelligence Act, he said.
“If you do assessment with AI, according to the AI act, there’s a whole range of requirements that you need to meet, both as a user and as a provider,” Jørgensen told Times Higher Education. “And I think it’s fair to say that the big large language models [LLMs] do not fulfil the criteria, because it has to do with transparency and the data that they’re trained on.”
Academics sometimes use AI tools to grade students without any institutional oversight, but the compliance challenges would also affect universities that have set up their own formal AI tools, he added.
“For teachers using ChatGPT for assessment, there is a risk that it is illegal; but we do not know yet as we do not have the guidelines from the European Commission’s AI Office,” he said, referring to the body that oversees the implementation of the act.
The EU AI act, a sweeping piece of legislation covering a wide range of sectors, came into force in August 2024, with its remaining provisions taking effect in stages over the following six to 36 months.
In March, the European Parliament voted to delay key provisions of the legislation to give companies and regulators more time to prepare for the changes. These provisions relate to high-risk artificial intelligence systems and were originally set to come into force in August this year. Under the act, AI used for student assessment is classified as high-risk – it falls into the same category as AI used for hiring decisions and credit scoring.
Jørgensen pointed out that universities had responded to the arrival of generative AI tools in 2022 and 2023 with task forces and guidelines and had developed reasonably clear strategies for responsible use in learning and teaching.
But a regulatory reckoning was now approaching. “The next step will be when the AI act really comes into force and the guidelines from the AI Office land. That’s going to be a big challenge.”
Once the provisions fully come into force, he said, the biggest risk would be that universities have to “change everything” and revisit many of the practices they have put in place around AI in order to comply with the act.
Jørgensen said AI use in research was already substantial, particularly in STEM fields, where researchers often worked with large and complex datasets. He was cautiously optimistic about what this could mean for the speed of scientific progress, which he said had been slow in recent years, but there were questions around data privacy.
“Typically, in the health space, we need to have policies on this because otherwise there is a risk that we don’t comply. But that is a conversation that’s just starting,” he said.
In January, the EUA published a report, spearheaded by Jørgensen, examining the use of AI across European universities. Noting that the data used to train many models comes predominantly from the English-speaking world, and overwhelmingly from the US, it stressed the need for European AI models.
Jørgensen warned that universities across Europe using the same US-trained commercial models risked eroding the intellectual diversity that made collaboration worthwhile. “Things become bland and uniform,” he said. “The idea of cooperation, because you come up with something different, goes away, because you ask the same model, trained on American data.”
He pointed to the emergence of European large language models, including EuroLLM, as a potentially more suitable alternative for universities. “The big US models don’t really cut it for us. They’re very US-centric,” he said. “If you speak a [different] language, they may not be as good for you.”