AI in Higher Education

Dismissing the academic merits of ChatGPT is intellectually dishonest

The warnings are coming thick and fast: beware the “dangerous delusions” of passive artificial intelligence (AI) users, says Microsoft’s AI chief Mustafa Suleyman; Massachusetts Institute of Technology (MIT) researchers claim cognitive decline; and academics cry plagiarism.

That recent MIT study, for example, warns that regular users of large language models (LLMs) may face a 47 per cent decline in brain activity (a contested but widely quoted claim), as though conversing with a machine were equivalent to handing over your mind in a zip file. Elsewhere, some scholars now claim that using AI in any academic work, including research, amounts to a new form of plagiarism. The message is blunt: if you use AI, you are cheating, and your thinking will suffer.

This framing is far too narrow. It assumes that AI can only be used passively, as a shortcut for lazy “copy-and-paste” writing. In truth, many of us are exploring far more creative and dialogic uses of AI in higher education. LLMs, when used critically, can function not as ghostwriters but as collaborators: provocateurs that sharpen our thinking and challenge our assumptions.

I see this tension every day. As a peer reviewer, I’ve recently flagged papers riddled with fabricated references and superficial phrasing, unmistakable signs of careless AI use. But I also use AI daily in my own research, because it can support rigour and creativity rather than erode them. It all depends on the approach: are we outsourcing thought, or using these tools to think better?

This distinction matters more than ever. When universities respond to AI with blanket suspicion, we risk driving students into secrecy – the very behaviour we claim to abhor. Prohibition creates shame, and shame makes cheats.

But if we create spaces for transparency and critique, AI can be integrated as a tool of intellectual curiosity rather than submission. Clearly, students must understand how crucial it is to do their own research, their own reading and writing, and that the more they know, the more intellectual fun they can have with AI.

This is not just theoretical for me. It is possible that my life as a film-maker has prepared me for working with machines in a creative way: you cannot make any film without technology. I have also written both academically and creatively all my life, so I am not anxious about collaborating with AI, using its suggestions or throwing them away.

It is also a fun way to think, and to write. My forthcoming book AI Intimacy and Psychoanalysis (Routledge, later this year) explores the cultural and psychological dimensions of our evolving relationship with language models. What disturbs us, I argue, is not just the fear of plagiarism but the uncanny feeling that AI now occupies the symbolic space of dialogue itself. It is no longer just a tool. It speaks. And in doing so, it unsettles the traditional structures of teaching and authority.

This discomfort is understandable but it’s no excuse for intellectual timidity. Consider a few possibilities: a doctoral student in postcolonial theory might ask an LLM for counter-readings of a text, not to plagiarise them, but to refine their own position. An undergraduate in environmental policy might use AI to simulate stakeholder debates, gaining insight into opposing arguments. In my own practice, I often run my drafts past a number of LLMs for critical feedback. They do offer ideas to consider and ask challenging questions, if they are set up and encouraged to do so. Also, when an LLM misinterprets something, it often means a human reader might too. These are not shortcuts. They are tools for precision and creativity.

Every new technology has sparked anxiety: the calculator, the internet, the word processor. But history has shown that what matters is not the technology itself, but how we use it. For me this is also a question of educational freedom. When I last wrote for Times Higher Education, it was about the state of academic freedom in Poland. Now, the stakes are different but no less urgent. We are being asked to think with machines.

This shift is already happening. A recent New Yorker piece profiles Luke Chang, an academic at Dartmouth, who spent a long drive home from the lab discussing his research problem with ChatGPT. Years ago, he might have mused on the problem alone. This time, he brainstormed with the machine, explaining, probing, listening as it suggested refinements. It wasn’t just “using” a tool. It was co-thinking. “It is a delight. I feel like I’m accelerating with less time,” he told the writer. “I’m accelerating my learning and improving my creativity...enjoying my work in a way I haven’t in a while.”

The language is telling: delight, acceleration, joy. Not words typically associated with moral panic. This is the tension we face in higher education: AI is here, for us to collaborate with. The question isn’t whether this will happen. It is already happening. The question is whether we meet it with suspicion or pedagogy.

Historically, moral panics have always accompanied new technologies of thought. Plato warned that writing would erode memory and produce only the illusion of wisdom. The printing press was accused of corrupting the soul. The real anxiety, then and now, is symbolic: a fear that the locus of knowledge might shift, that dialogue might leave the room.

AI reopens the very question Plato feared: can wisdom survive its separation from the speaker? We are no longer just reading. We are talking to the text.

Today, that dialogue might sometimes take place between a researcher and a machine on a drive through New Hampshire. If we’re honest, we know this already. AI is not just a search engine. It is becoming a partner in thought itself.

Our job now is to ensure that this partnership is conscious, ethical and transformative, not left to develop in silence or, worse, fear.