As AI rapidly reshapes the education landscape, institutions across India are grappling with a critical shift: from simply adopting technology to ensuring it is used responsibly and effectively. While policy frameworks like the National Education Policy (NEP) 2020 are laying a strong foundation, the real challenge lies in balancing innovation with governance, integrity, and equitable access. In this interaction with Tech Achieve Media (TAM), Chaitali Moitra, Regional Director – South Asia at Turnitin, shares her perspective on India’s readiness for AI in education, the urgent need to redefine academic integrity, and how institutions can move from detection-led approaches to building a culture of transparency, trust, and meaningful learning outcomes in an AI-driven world.
TAM: Is India’s education system ready for AI, or is adoption outpacing governance and policy frameworks?
Chaitali Moitra: India is decisively ready for the AI era, underpinned by a forward-looking policy foundation in the National Education Policy (NEP) 2020. While readiness varies across institutions, the country is rapidly operationalising its vision to be an inclusive, globally relevant model for AI in education. The success of this rapid adoption, however, depends on governance that prioritises transparency and integrity. The government has signalled clear intent through the NEP 2020 and the integration of AI into CBSE/NCERT curricula from Grade 9. We are also seeing strong on-ground momentum, with regions like Chhattisgarh and Nagpur advancing teacher training and AI-enabled classrooms.
TAM: Should institutions move from policing AI use to redefining what academic integrity means in the AI era?
Chaitali Moitra: Yes, the conversation must evolve from detection to definition. Academic integrity in the AI era is no longer just about originality; it’s about transparency, clear attribution, and ethical collaboration with AI tools. Institutions need to clearly articulate what responsible AI use looks like and embed a culture of transparency into their pedagogy. This shift is necessary because we are seeing significant changes in student work, such as an increase in submissions with high percentages of AI-generated content. Policing alone creates fear; redefining integrity builds trust and accountability.
TAM: How can educators ensure AI enhances critical thinking rather than becoming a shortcut for students?
Chaitali Moitra: The key is to design for process, not just outcomes. When educators focus on the learning journey, not just the final papers, AI becomes an enhancement, not a shortcut. This is achieved by incorporating transparency measures like drafts, reflections, and discussions to make student thinking visible. It ensures students actively engage, think critically, and maintain ownership of their learning, leveraging AI only as a co-pilot rather than passively relying on it. Real-world, application-based assignments that demand analysis and problem-solving make generic AI-generated content insufficient.
TAM: What does “responsible AI adoption at scale” realistically look like in a country as large and diverse as India?
Chaitali Moitra: Responsible AI adoption at scale in India is taking shape through a federated approach. While the National Education Policy (NEP) 2020 provides a national policy backbone, initiatives like SOAR and the IndiaAI Mission enable infrastructure and access at scale.
At the same time, states and institutions are driving execution through teacher skilling programs, localised pilots, and AI-led classroom initiatives, adapting broad principles to regional needs. This reflects India’s approach of setting direction at the centre while enabling execution at the local level.
Going forward, scaling responsibly will depend on expanding educator training, embedding AI literacy into curricula, ensuring equitable access across urban and rural institutions, and establishing clear ethical guardrails. In a system as diverse as India’s, success will come from combining national direction with local adaptability.
TAM: Are current assessment models already outdated in the age of generative AI? What needs to change first?
Chaitali Moitra: Yes, traditional models built around a one-time, final output are fundamentally outdated in the age of generative AI. The first priority is shifting to process-driven, verifiable learning, where the journey of thinking is as visible as the final output. This shift requires building strong transparency and integrity frameworks. Assessments also need to be AI-ready. Moving toward real-world and interdisciplinary assignments encourages analysis and problem-solving, while collaborative and experiential formats make learning more authentic. Clear guidelines on AI use, encouraging disclosure, and equipping educators with the right tools and training will help ensure assessments remain fair, relevant, and focused on genuine learning.
TAM: How should universities draw the line between acceptable AI assistance and academic misconduct?
Chaitali Moitra: The line is drawn using three core principles that create a foundation of transparency and trust:
- Clear Policies Around Responsible AI Use: Universities need to move away from ambiguity and clearly define what constitutes acceptable AI assistance for each course and assignment.
- Clear Authorship and Disclosure: Transparency requires students to be open about how they have used AI, allowing educators to assess the balance between human effort and AI assistance. We’re seeing this shift play out, as highlighted in Turnitin’s Learning Integrity Insights Report 2026, where institutions are moving toward guided AI use with disclosure rather than outright bans.
- Verification and Oversight: Maintaining human oversight is important as AI can produce very fluent but sometimes inaccurate or even biased content. The responsibility still lies with the student to fact-check, validate sources, and ensure academic rigor.
TAM: How is Turnitin evolving its solutions to go beyond plagiarism detection and support institutions in building transparency and trust in AI-assisted learning?
Chaitali Moitra: Turnitin’s evolution has really been about moving from a detection-first to a transparency-first approach. In an AI-enabled world, simply flagging content isn’t enough; educators need context, visibility, and tools that support learning.
Today, Turnitin’s solutions are designed to give educators a deeper understanding of the writing process. For instance, AI writing detection and reports act as indicators for review rather than final verdicts, ensuring that human judgment remains central to academic decisions. Solutions like Turnitin Clarity go a step further by showing how a student’s work evolves over time, from drafts to final submission. This “proof of process” enables more informed, process-based evaluation. Our evolution ensures that the future of learning integrity is defined by context, not control, enabling meaningful conversations and supporting a culture of trust and transparency in every AI-assisted environment.