Beyond tools: An AI governance roadmap for universities
The novelty of generative AI is behind us. In 2026, Indian higher education will no longer be asking whether AI will disrupt campuses but how to embed it responsibly into everyday academic life.
With India’s tech industry expected to cross US$280 billion in annual revenue and AI projected to add about US$1.7 trillion to the economy by 2035, universities are emerging as key sites where India’s sovereign AI ambitions will either take shape or stall.
The IndiaAI Mission embodies this ambition. Backed by more than INR103 billion (US$1.1 billion) and a national compute backbone of approximately 38,000 GPUs, it aims to build an open, affordable AI ecosystem with strong domestic capabilities.
For vice-chancellors and directors, this changes the conversation. It is no longer sufficient to license Copilot, Gemini, or the next foundation model and hope for the best. The central question is governance: how to harness these tools while managing the legal, ethical, social and psychological risks that accompany them.
1. From ‘ban it’ to ‘disclose it’
By early 2026, close to six in 10 Indian higher-education institutions had adopted some form of AI policy, driven in part by the reality that a large majority of students already used AI for assignments, coding and exam preparation. The blanket ban era is effectively over. Campuses are moving towards a disclosure-based regime built on radical transparency.
IIT Delhi is a bellwether. It was among the first IITs (Indian Institutes of Technology) to issue formal generative AI (GenAI) usage guidelines requiring the mandatory disclosure of AI assistance.
Students must clearly specify how AI was used – whether for proofreading, ideation, data visualisation, debugging or drafting – rather than merely citing final sources.
This approach shifts the burden of verification back to the human author and reinforces a simple but critical rule: AI may generate content, but humans must own and validate it.
Some Indian universities anticipated this shift well before GenAI became a formal regulatory concern.
Woxsen University, for example, constituted an institutional AI Policy Task Force in 2022, introducing an Artificial Intelligence Assessment Policy that operationalised graded AI use, compulsory disclosure, and explicit human accountability in student assessments – nearly three years before national governance frameworks began to converge.
Law schools are evolving in parallel with these developments. National Law University Delhi, for instance, has signalled that the use of AI to fabricate case law or ‘hallucinate’ authorities is unacceptable in legal training, echoing global incidents in which lawyers faced sanctions for filing briefs built on fictitious citations.
This stance is less about demonising AI and more about aligning student behaviour with the professional ethics expected in high-stakes disciplines.
2. Governing by sutras: The national ethic
Institutional choices are now being reframed through the lens of the AI Governance Guidelines 2025 from India’s Ministry of Electronics and Information Technology (MeitY), which articulate seven guiding ‘sutras’ as India’s normative AI compass.
These principles – trust, people first, innovation over restraint, fairness and equity, accountability, understandability, and safety and resilience – are meant to be operational, not ornamental.
On campuses, three of these sutras are already proving foundational:
• Trust and accountability: Universities are being nudged towards creating auditable trails of AI use, especially in research. If a doctoral student uses a sovereign AI model, such as BharatGen, to generate literature summaries or code, there should be a record of prompts and versions so that another scholar can reasonably reproduce or interrogate the work.
This blends traditional academic reproducibility with a new layer of machine accountability (a minimal logging sketch appears after this list).
• People first: The guidelines insist that AI should augment human capabilities rather than hollow them out. Faculty committees, including those at the Indian Institute of Science and the IITs, have stressed the importance of preserving the ‘productive struggle’ at the heart of learning.
If GenAI removes all cognitive friction from problem-solving, universities risk graduating students who can prompt but cannot think critically.
• Fairness and equity: In a country where caste, gender, language and regional hierarchies are woven into historical data, unexamined models can easily automate discrimination. The sutras explicitly call for bias detection and mitigation in AI systems.
For higher education institutions, this translates into bias audits for tools used in admissions, scholarships, recruitment and even automated grading, to ensure that digitisation does not silently legitimise old inequities (see the selection-rate sketch after this list).
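What such an auditable trail might look like in practice is sketched below: one JSON record is appended per AI interaction, capturing the prompt, model version and a hash of the output so another scholar can later interrogate the work. The log_ai_use helper, its field names and the JSONL format are illustrative assumptions, not a schema mandated by the guidelines.

```python
# Minimal, illustrative audit record for AI-assisted academic work.
# Field names and the JSONL log format are assumptions, not a mandated schema.
import hashlib
import json
from datetime import datetime, timezone

def log_ai_use(log_path: str, model: str, model_version: str,
               prompt: str, output: str, purpose: str) -> None:
    """Append one auditable record of an AI interaction to a JSONL file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,                 # e.g. a sovereign model such as BharatGen
        "model_version": model_version,
        "purpose": purpose,             # e.g. "literature summary"
        "prompt": prompt,
        # Hash the output so the trail can verify what was produced
        # without necessarily storing long generated texts verbatim.
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Hypothetical usage:
log_ai_use("ai_audit.jsonl", model="BharatGen", model_version="v1",
           prompt="Summarise these three papers on soil salinity.",
           output="(generated summary text)", purpose="literature summary")
```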
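One concrete check a bias audit could run is a selection-rate comparison across groups, in the spirit of the widely used ‘four-fifths’ heuristic. The toy decisions, group labels and 0.8 threshold below are assumptions for illustration, not a prescribed audit standard.

```python
# Illustrative bias check: compare selection rates across groups for an
# automated admissions or grading tool. Data and the 0.8 threshold are
# assumptions for this sketch, not a regulatory requirement.
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += ok
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(rates: dict[str, float], threshold: float = 0.8):
    """Flag groups whose selection rate falls below threshold x the best rate."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

rates = selection_rates([("A", True), ("A", True), ("A", False),
                         ("B", True), ("B", False), ("B", False)])
print(rates)                          # {'A': 0.666..., 'B': 0.333...}
print(disparate_impact_flags(rates))  # {'A': False, 'B': True}
```

A group is flagged when its selection rate falls below 80% of the best-performing group’s rate – a starting point for human review, not a verdict of discrimination.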
3. The compliance trap: Universities as data fiduciaries
The Digital Personal Data Protection Act, 2023, and the draft DPDP Rules, 2025, have quietly reshaped universities’ legal posture. Educational institutions that collect and process student and staff information now clearly fall within the definition of data fiduciaries under the Act.
This has deep implications for the way GenAI is deployed on campus.
Whenever a university pipes student essays, grades, behavioural data or counselling records into an external large language model for ‘personalisation’, it is processing personal data in a high-risk context. Under the DPDP framework, education institutions must do the following:
• Obtain clear, specific consent for each purpose and issue standalone, simple notices rather than burying data use in generic terms of service.
• Honour purpose limitation – data collected for admissions or examinations cannot be quietly repurposed to train an internal prediction engine unless students have been explicitly informed and have agreed (see the consent sketch after this list).
• Implement security safeguards and regular audits, particularly if the institution qualifies as a Significant Data Fiduciary, which attracts additional duties such as annual Data Protection Impact Assessments.
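A purpose-limitation guard can live in code as well as in policy: processing proceeds only for purposes the data principal has explicitly consented to. The ConsentRecord shape and the purpose strings below are hypothetical, not the DPDP Act’s own vocabulary.

```python
# Illustrative purpose-limitation guard: processing is allowed only for
# purposes the student explicitly consented to. The record shape and
# purpose strings are assumptions, not the DPDP Act's wording.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    student_id: str
    purposes: set[str] = field(default_factory=set)  # e.g. {"admissions"}

def assert_purpose(consent: ConsentRecord, purpose: str) -> None:
    """Raise before any processing that exceeds the consented purposes."""
    if purpose not in consent.purposes:
        raise PermissionError(
            f"No consent from {consent.student_id} for purpose '{purpose}'"
        )

consent = ConsentRecord("S-1024", {"admissions", "examinations"})
assert_purpose(consent, "examinations")        # allowed
try:
    assert_purpose(consent, "model_training")  # exceeds consent
except PermissionError as err:
    print(err)
```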
Leading institutions are responding with ‘private AI’ architectures: hosting open-weight models inside Virtual Private Clouds or on-premise clusters so that sensitive research and student data do not transit international servers unnecessarily.
This approach aligns AI deployment with data sovereignty expectations while reducing regulatory risk.
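In practice, such a private deployment often means an open-weight model served behind an OpenAI-compatible API inside the campus network, so prompts and student data never leave institutional infrastructure. A minimal sketch follows; the endpoint URL and model name are placeholders, and while many self-hosted servers (vLLM, llama.cpp and others) expose a similar interface, the exact route and payload depend on the deployment.

```python
# Illustrative 'private AI' call: an open-weight model served inside the
# campus network. The endpoint URL and model name are placeholders; check
# your own deployment's documentation for the exact API shape.
import requests

def ask_private_model(prompt: str) -> str:
    resp = requests.post(
        "http://ai.internal.university.example:8000/v1/chat/completions",
        json={
            "model": "local-open-weight-model",
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```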
4. Sovereignty in syntax: The BharatGen moment
If the IndiaAI Mission is about compute and infrastructure, BharatGen is about language and identity.
As part of the broader sovereign AI push, the government has supported the development of public, Indian-trained language models capable of working across 22 scheduled languages and multiple modalities.
Early pilots were anchored in institutions such as IIT Bombay and allied research centres, with a clear mandate: Indian languages must not remain second-class citizens in the AI era.
This is transformative for campuses on three fronts.
First, it levels the playing field for non-English medium students, who can now access AI support in their strongest languages.
Second, it opens the door for domain-specific tools, such as Adi Vaani and similar frameworks, that preserve and process tribal languages such as Gondi and Santali.
Third, it brings governance questions regarding Indigenous data sovereignty squarely into the academic domain.
Projects that digitise oral histories, folk knowledge or community practices for training AI systems are now expected to follow CARE principles – Collective benefit, Authority to control, Responsibility and Ethics – so that Indigenous communities retain agency over their own data and share materially in any downstream benefits.
Indian universities can no longer treat such data as free raw materials; they are co-owned assets that demand negotiated consent, benefit sharing, and culturally sensitive stewardship.
5. Building a psychological firewall
The invisible frontier of AI governance is mental health. Emerging literature on AI and labour markets points to the psychological toll of automation anxiety; workers and students who anticipate displacement often report heightened stress, identity loss and feelings of reduced control over their futures. Academia is not immune.
When grading, evaluation, career guidance or even research supervision are heavily mediated by opaque AI systems, faculty can experience a loss of professional autonomy, while students may begin to feel that an algorithm, not their effort, will determine their worth.
The result is a mix of cognitive overload – “there’s always a better AI-optimised answer I am missing” – and anticipatory rumination about employability.
Therefore, a credible governance roadmap in 2026 must go beyond policies and procurement to include:
• Campus-wide AI literacy programmes that explain the capabilities, limits and failure modes of GenAI tools in plain language.
• Integrated mental health support, where counsellors and mentors are equipped to discuss AI-related anxiety as a legitimate concern, not a trivial complaint.
• Clear institutional messaging that positions AI as a co-pilot – a tool that extends human judgement, not an infallible arbiter of merit.
This cultural framing is as important as the underlying technology. It determines whether AI will be perceived as an instrument of empowerment or a silent assessor that constantly watches and ranks students.
The way forward: Character, not just capacity
As India heads into the India-AI Impact Summit in February 2026 – the IndiaAI Mission’s flagship event – the infrastructure story is impressive: tens of thousands of GPUs have been deployed, sovereign models are in development, national guidelines are in place, and sector-specific consultations are well underway.
However, the true differentiator for Indian higher education will not be access to hardware or model weights but the character of the governance frameworks that sit on top of them.
The campuses that will lead share a few traits. They will replace bans and blanket enthusiasm with structured disclosures and human accountability. They will embed MeitY’s seven sutras into concrete practices in curricula, research and administration. They will treat data protection as a strategic responsibility rather than a compliance afterthought.
Such campuses will embrace BharatGen and other Indian models to widen participation while respecting the rights of communities whose knowledge underpins these systems. They will invest as seriously in psychological firewalls as in technical ones.
In short, the winners of 2026 will not be the universities that buy the most tools but those that govern them with trust, equity, and an unwavering commitment to human dignity.
Dr Hemachandran Kannan is the director of the AI Research Centre and vice-dean of the School of Business at Woxsen University in Hyderabad, India. Dr Raul Villamarin Rodriguez is vice-president of Woxsen University and a prominent cognitive technologist. Both Kannan and Rodriguez hold advisory positions on numerous national and international boards.
This article is a commentary. Commentary articles are the opinion of the authors and do not necessarily reflect the views of University World News.