AI: Rethink Education

The wrong battle is being fought. Across lecture halls and headlines, artificial intelligence is being framed as the enemy of education – blamed for eroding critical thinking, enabling plagiarism, and displacing human teachers.

But beneath the backlash lies a deeper confusion about what’s really changing. “I want my tuition back.” That was the protest voiced by a Northeastern University student, as reported in The New York Times, after he learned that his professor had used ChatGPT to generate parts of the syllabus and course content.

It wasn’t just about cost – it was about meaning. When machines participate in teaching, what, exactly, are students paying for? This outrage reflects more than a grievance; it reveals a moment of epistemic disorientation. The crisis is not about AI replacing learning – it’s about not knowing how to define learning anymore.

The Northeastern student’s protest struck a nerve not because it was unique but because it was familiar. Across classrooms and campuses, students, faculty and administrators are confronting a disorienting question: If machines can teach, what does it mean to learn?

The backlash against AI in education – especially generative AI – has surged in response to this uncertainty. Critics warn that it will erode critical thinking, accelerate plagiarism, reduce student motivation, and displace the human teacher.

But these concerns, while often valid, point to something deeper than policy missteps or ethical risks. They point to a crisis in how we understand the very architecture of education.

As headlines multiply and institutional reports issue grave warnings, one thing becomes clear: we are having the wrong conversation. The problem is not that AI is entering the classroom. The problem is that we have not yet reimagined the classroom for a world in which intelligence is no longer the sole domain of humans.

The critiques are not wrong because they lack urgency. They are wrong because they misdiagnose the condition. They treat AI as an external threat rather than recognising it as a mirror – one that reflects the fragmentation, inertia and epistemic drift already at work within our educational systems.

This article responds to the backlash not by dismissing the concerns but by challenging their underlying assumptions. It argues that many of today’s reactions to AI reflect a longing for a model of education rooted in control, hierarchy and a singular source of truth – models that are increasingly out of step with the complexity of the world our students inhabit.

Rather than asking whether AI belongs in education, we must ask a more urgent question: What kind of education belongs in a world already shaped by intelligent systems – and what kind of human agency must we cultivate within it?

The backlash: What the critics got wrong

The backlash against AI in education has taken on the intensity of a moral panic. Commentators, teachers and institutions alike have cast artificial intelligence as an existential threat to learning – as though a machine in the classroom signals the end of human thought itself.

In The New York Times, Jessica Grose declared that AI could “destroy” students’ ability to think critically, warning that tools like ChatGPT undermine the development of genuine intellectual engagement.

But such pronouncements often reveal more about cultural anxieties than about pedagogy. They frame AI not as a tool requiring thoughtful integration but as a corrosive force destabilising the moral and cognitive foundation of education.

A similarly charged account appeared in The Economic Times, where a teacher went viral after resigning with the warning that AI was “destroying our kids”. The teacher’s concerns – about screen fatigue, cognitive dulling and declining literacy – echo a widespread fear that the classroom is being colonised by machines.

But these stories often romanticise a past that never quite existed: one where attention was undivided, books were sacred, and classrooms were immune to distraction.

In reality, students now inhabit a fractured media ecosystem, navigating attention economies and digital overload well before they set foot in a lecture hall. To blame AI alone is to overlook the broader, more structural disruptions already transforming how learning happens.

Institutional critiques, too, have amplified the alarm. Reports from the Center for American Progress and from the Pew Research Center warn of AI’s risks – cheating, inequity, disengagement – often urging regulation or restriction.

These concerns are not unfounded. AI tools can be misused. When deployed without ethical foresight or pedagogical guidance, they can reinforce existing gaps and flatten educational experience into gamified metrics.

But framing AI primarily as a threat misses a deeper question: What are we really trying to preserve? And why do we assume that the values we cherish – curiosity, rigour, creativity – cannot coexist with intelligent systems?

The Harvard Independent’s counterpoint article takes a sharper stance, targeting AI’s “confident inaccuracies” and its potential to encourage superficial engagement. If students can produce polished but shallow responses with minimal effort, it asks, what becomes of academic rigour?

Yet this line of critique assumes a passive student body and an inert pedagogical framework. It treats technology as fate, not context. In doing so, it neglects the educator’s role in shaping how tools are used – how students are taught to verify, critique and interpret machine-generated content with discernment.

Taken together, these critiques converge around a common concern: the loss of control. They signal not just discontent with technology but a deeper discomfort with change – particularly changes that unsettle the symbolic authority of the teacher and the institutional rhythms of traditional learning.

They long for a classroom where meaning flows from teacher to student in a straight line. But that line has already been broken. Today’s knowledge landscape is non-linear, porous, and increasingly shaped by interactions with systems that think, respond and evolve.

The real crisis is not that AI is replacing thought – it’s that we have not yet redefined what thinking means in an age of augmentation. The backlash does not mark the failure of AI in education. It marks the failure of our educational discourse to evolve.

In clinging to outdated binaries – human vs machine, deep vs shallow, authority vs automation – we avoid confronting the real task: to build a pedagogical model capable of integrating artificial intelligence without surrendering the human stakes of learning.

The critics are not wrong to worry. But they are, quite simply, fighting the wrong battle.

The real crisis: Imagination, not technology

Calls to halt AI integration in education often masquerade as calls for caution. But beneath the surface lies a deeper impulse: a retreat from imagination. The fear is not just of machines – it is of what machines expose. When critics demand a pause, what they are really pausing is the capacity to rethink, to redesign, and to reimagine learning beyond inherited paradigms.

The crisis, in other words, is not technological. It is philosophical. It is the failure of our educational systems to develop relational, ethical and cognitive frameworks capable of engaging with a world already transformed by intelligent tools.

We remain tethered to false binaries – human versus machine, analogue versus digital, knowledge versus automation – while the actual terrain of learning has shifted beneath our feet.

We are now entering a third space: where cognition is not displaced but extended; where teachers evolve from content transmitters to meaning-makers and guides; and where students encounter intelligence that is distributed, iterative and co-produced.

To fear AI is to fear a mirror. It reflects the disjunctions already embedded in our institutions – the bureaucratisation of learning, the ritualism of assessment, and the widening gap between pedagogy and lived reality.

AI did not create these problems. It reveals them. It accelerates our need to confront what we have too long ignored: that the architecture of learning must evolve – not just to accommodate machines, but to accommodate the kind of human beings we are becoming.

This is not an invasion. It is an invitation. AI forces us to ask enduring questions with new urgency: What is knowledge? What is teaching? What is thinking when thinking is no longer confined to the human brain? The challenge is not to protect the old model from collapse but to design a new one worthy of this moment. That requires more than critique. It requires imagination.

The rest of the world is not waiting

While debates in the West remain mired in moral panic and binary thinking, other parts of the world are moving decisively toward a future where AI is not an adversary but an integrated part of educational transformation.

The question elsewhere is not whether AI belongs in the classroom – it is how to design systems that prepare students for a world shaped by artificial intelligence without surrendering ethical grounding or pedagogical depth.

In South Korea, AI education begins in elementary school. The government has launched dedicated AI high schools and is investing heavily in training thousands of teachers in both technical fluency and digital ethics. AI is not seen as a novelty – it is treated as a new literacy (Kim, 2023; Park, 2024).

In China, AI was embedded into the national curriculum as early as 2018. Students from primary grades onwards participate in robotics programmes, national AI competitions, and hands-on labs. The goal is not just fluency – it is leadership in shaping AI’s development and governance by 2025.

Estonia and Finland have taken a holistic approach. In Estonia, AI and computational thinking are taught alongside civic education and digital citizenship.

In Finland, the now-global Elements of AI course – translated into more than 20 languages – has been adapted for secondary schools, blending technical literacy with ethical reflection.

Across the European Union, the Digital Education Action Plan (2021-2027) supports cross-border curriculum design, educator training and ethical standards for AI use in classrooms.

Rather than seeing AI as a threat to be managed, these efforts treat it as a structural feature of education’s future – one that demands careful, values-based integration.

What unites these systems is not technological enthusiasm but strategic imagination. They are not outsourcing their educational futures to corporations, nor are they paralysed by nostalgia for a pre-digital past. They are building frameworks that treat AI as a catalyst for pedagogical renewal, not decline.

Meanwhile, in education systems still clinging to yesterday’s hierarchies, we ask questions that no longer matter: Should AI be allowed in class? Will it erode traditional learning? These are not visionary questions. They are rear-guard defences. The rest of the world is not waiting for us to resolve our anxieties. It is already designing the next paradigm.

Žižek and the symbolic role of the professor

Slavoj Žižek, the Slovenian philosopher and cultural critic, rarely appears in conversations about education technology. But he should – because few thinkers are better equipped to diagnose what is really at stake in the backlash against AI.

While most commentators focus on outcomes, tools or teaching strategies, Žižek attends to something deeper: the symbolic scaffolding that holds the system together – the unconscious structures that determine not just what we think, but how we know (McMillan, 2015).

This is precisely what makes Žižek more relevant than edtech theorists or policy wonks. He reminds us that ideology is not simply a set of beliefs – it is the invisible frame that tells us what counts as valid, credible or real. In the classroom, this means that AI doesn’t just disrupt content delivery. It disrupts the symbolic architecture of education itself. It destabilises the very performance of authority.

For Žižek, the professor is not merely a conveyor of information. The professor occupies a symbolic position – the one who bestows meaning, who decides what matters, and who legitimises the act of knowing. The lecture, the grade, the seminar – their deeper function is not instructional but ritualistic. They organise the flow of knowledge around a stable, human centre.

But what happens when an AI language model – the illusion of intelligence – can answer a question faster, more fluently and sometimes more persuasively than the professor?

The authority of the professor begins to fray – not because they become obsolete, but because their monopoly on the performance of knowing is broken. Students are no longer bound to a singular, human source. They now traverse a knowledge terrain populated by non-human intelligences.

This symbolic rupture produces anxiety – not just among professors, but throughout the institution. The backlash against AI is not merely a defence of pedagogy. It is a defence of the ritual of pedagogy – the idea that education requires a designated authority to sanctify its process. We are not mourning the loss of learning. We are mourning the loss of a symbolic order where learning felt anchored, linear and legible.

But in that rupture lies opportunity. If we follow Žižek’s provocation to its conclusion, we are invited to reimagine the professor not as the unassailable centre of knowledge but as a guide through epistemic uncertainty.

The professor becomes an interpreter of an unstable symbolic field – someone who helps students navigate a world where truth is no longer delivered from the top down but encountered through dialogue, contradiction and discernment.

In this reconfigured role, the professor is not diminished by AI – they are elevated. No longer forced to perform mastery, they can instead perform meaning. They become the human who contextualises the non-human, who reveals the logic and limits of intelligent systems, and who models the kind of critical imagination machines cannot replicate.

Žižek offers us not a solution but a lens: a way to see the AI backlash for what it is – not just a debate over technology but a confrontation with the fragility of our symbolic institutions. In the age of simulated intelligence, we don’t just need educators who know how to teach. We need educators who know how to mean – who understand the power of interpretation in a world awash with answers.

Let universities lead – not corporations

As artificial intelligence weaves its way into classrooms, a pivotal question looms: Who decides what counts as learning in the age of intelligent systems? So far, that power has tilted toward those least equipped to answer it – corporations and policymakers.

Edtech companies build the platforms, ministries chase innovation metrics, and pedagogy becomes an afterthought. When efficiency becomes the goal and market share the metric, education risks being hollowed out from within.

This is not merely a matter of commercialisation. It is a struggle over epistemic authority. When corporations design the architecture of AI-enhanced learning – the platforms, the algorithms, the content delivery systems – they do more than digitise instruction. They encode assumptions about what knowledge is, how it should be accessed, and which forms of intelligence deserve recognition.

In such systems, learning becomes something to be tracked, nudged and optimised. Metrics like click-through rates, completion percentages and engagement scores stand in for understanding, reflection and discernment. This is not education. It is data-driven instruction masquerading as innovation.

Universities must reclaim the future – not to preserve tradition, but to protect the moral and intellectual purpose of education. Unlike corporations, universities are not (in theory) beholden to shareholders or quarterly earnings.

They are institutions designed to serve the public good, to cultivate critical thinking, and to question the very assumptions upon which technological systems are built. That makes them uniquely suited to shape AI’s integration – not as passive adopters, but as epistemic stewards.

To lead meaningfully, universities must stop treating AI as a distant disruption and start treating it as a central research frontier. Schools of education must work alongside departments of computer science, cognitive science, philosophy and design to create interdisciplinary programmes that train educators in both AI fluency and ethical pedagogy.

Research centres must examine how intelligent systems reshape cognition, assessment and access – asking not just what is possible, but what is just. Ethics institutes must move beyond abstract guidelines and into applied frameworks that grapple with real classrooms, real disparities, and real risks.

We are already seeing signs of this shift. The University of Helsinki’s Elements of AI initiative – developed in collaboration with Reaktor – offers a free, multilingual, ethics-aware course on AI fundamentals that has been adopted by over one million learners worldwide.

Stanford’s Institute for Human-Centered AI (HAI) is pioneering cross-disciplinary research that merges machine learning with philosophy, education and law, prioritising societal impact over technological hype.

These are not side projects. They are blueprints for how universities can lead AI integration on their own terms – balancing innovation with moral clarity and relevance with reflection.

Universities must also invest in building open-source, human-centred alternatives to proprietary AI systems – tools that prioritise transparency, interpretability and collective authorship.

Projects like OpenMined, which focuses on privacy-preserving machine learning, or Hugging Face’s open models and datasets show how academic institutions can collaborate with the open-source community to build alternatives grounded in civic values.

These systems should be co-designed with educators and students, not imposed from above. They must be culturally responsive, pedagogically flexible, and designed not to extract data but to cultivate meaning.

Finally, the university must remember its civic role. It should not wait for governments to regulate AI responsibly. It should model what responsible use looks like. By doing so, it does more than ensure AI enhances rather than distorts education. It reasserts what kind of institutions we need to shape the human-machine future – with wisdom, care, and democratic imagination.

Because if we let corporations lead, we will build systems that teach students to serve machines. But if we let universities lead, we have a chance to build systems that teach students how to understand, critique and co-evolve with machines. The stakes are not just technological – they are profoundly human.

Conclusion: Let go of the binary, build the future

The debate over whether AI is ‘good’ or ‘bad’ for education is no longer meaningful. It is a distraction – a binary framework imposed on a reality that has already outgrown it.

Artificial intelligence is not arriving. It is here. The question that matters now is not if we integrate AI into education but how we do so – with intentionality and vision or with fear and denial.

To reject AI wholesale is not a principled stand for learning. It is a refusal to evolve. It is the abdication of responsibility at a moment when moral clarity, intellectual courage and pedagogical imagination are most needed.

The presence of intelligent systems in our classrooms forces us to revisit our most fundamental assumptions – not only about tools and teaching but about what it means to think, to know, and to be human in a co-intelligent world.

This is not a moment for reactive regulation or nostalgic retreat. It is a moment for reimagining pedagogy from the ground up. We need frameworks that move beyond the human-versus-machine binary.

We must begin to understand intelligence as co-authored – emerging through dynamic interplay between human consciousness and computational systems. Learning is no longer a linear transaction between student and teacher. It is relational, networked, and increasingly mediated by non-human agents.

In this new terrain, education must prioritise more than outcomes. It must prioritise the quality of attention, the depth of discernment, and the integrity of meaning-making – across both human and artificial forms of cognition.

This is not a turn away from the humanities. It is their rediscovery, refracted through the challenges of the algorithmic age. Ethics, imagination and critical theory are not adjuncts to AI education – they are its compass.

This responsibility cannot be outsourced to platforms or postponed by policymakers. It must be led by the institutions still entrusted with the cultivation of knowledge: universities.

If they rise to the challenge – not to defend tradition but to design new models of thought – they can become laboratories for a future in which learning remains a fundamentally human act, precisely because it adapts to new forms of intelligence without surrendering its purpose.

We must stop asking whether AI belongs in education. That question is obsolete. The real question is this: What kind of education belongs in a world where intelligence is no longer exclusively human?

Our answers will shape more than curricula. They will shape consciousness. And ultimately, they will shape the kind of society we are becoming.

James Yoonil Auh is the chair of computing and communications engineering at KyungHee Cyber University in South Korea, specialising in AI-driven education and ethical technology in global learning contexts. He has worked across the United States, Asia and Latin America on projects linking ethics, technology and education policy.