India Charts Balanced Path for AI Regulation with New Governance Guidelines
The Ministry of Electronics and Information Technology (MeitY) has released comprehensive governance guidelines for Artificial Intelligence that could shape how India regulates this transformative technology. The framework aims to strike a careful balance between fostering innovation and ensuring accountability, promoting growth without losing sight of safety.
These guidelines, unveiled in New Delhi on November 11, 2025, propose managing AI risks within India's existing legal framework under the guiding principle of 'Do No Harm'. The government had previously indicated it would avoid stringent AI regulation, believing the technology could drive an innovation economy in the country.
Six Pillars of India's AI Governance Framework
The report organizes its key recommendations around six fundamental pillars that form the backbone of India's approach to AI governance:
Infrastructure: The guidelines call for expanding access to critical data and computing resources, including subsidized graphics processing units (GPUs) and India-specific datasets through platforms like AIKosh. They emphasize integration with Digital Public Infrastructure such as Aadhaar and Unified Payments Interface (UPI), while urging tax rebates and AI-linked loans to incentivize private investment and MSME adoption.
Regulation: India will adopt an agile, sector-specific approach that applies existing laws like the IT Act and Digital Personal Data Protection Act while addressing gaps through targeted amendments. The framework rules out an immediate need for standalone AI legislation but calls for updates on classification, liability, and copyright matters.
Risk Mitigation: The guidelines propose an India-specific risk assessment framework reflecting local realities, along with voluntary frameworks and techno-legal measures embedding privacy and fairness rules directly into system design.
Accountability: A graded liability regime is recommended, with responsibility tied to function and risk level. Organizations would be expected to implement grievance redressal systems, transparency reporting, and self-certification mechanisms.
Institutions: The framework envisions a whole-of-government approach led by an AI Governance Group (AIGG), supported by a Technology & Policy Expert Committee (TPEC), and technically backed by the AI Safety Institute (AISI).
Capacity Building: The guidelines emphasize AI literacy and training for citizens, public servants, and law enforcement, recommending scaling up existing skilling programs to bridge gaps in smaller cities.
Addressing Deepfakes and Synthetic Content
The guidelines highlight the urgent need for effective content authentication as synthetically generated images, videos, and audio flood the internet. The government has already proposed legal amendments requiring platforms like YouTube and Instagram to add visible labels to AI-generated content.
According to the draft IT Rules amendments, social media platforms would need to ensure users declare whether uploaded content is synthetically generated, deploy technical measures to verify such declarations, and prominently display appropriate labels when content is confirmed as AI-generated. Non-compliance could result in platforms losing their legal immunity for third-party content.
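To make that compliance flow concrete, the sketch below shows one hypothetical way a platform might combine a user's upload-time declaration with its own verification step before displaying a label. The names (`Upload`, `apply_label`) and the 0.8 detector threshold are illustrative assumptions, not requirements drawn from the draft rules.

```python
# Illustrative sketch only: a hypothetical upload pipeline that records a user's
# declaration, consults a placeholder detector score, and attaches a visible label
# when content is judged to be synthetically generated. All names and thresholds
# here are assumptions, not part of the draft IT Rules amendments.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Upload:
    content_id: str
    declared_synthetic: bool   # user's declaration at upload time
    detector_score: float      # platform-side verification signal, 0.0 to 1.0
    label: Optional[str] = None


def apply_label(upload: Upload, threshold: float = 0.8) -> Upload:
    """Attach a visible label if the user declares the content synthetic
    or the platform's own verification suggests it is AI-generated."""
    if upload.declared_synthetic or upload.detector_score >= threshold:
        upload.label = "AI-generated content"
    return upload


if __name__ == "__main__":
    item = Upload(content_id="vid-001", declared_synthetic=False, detector_score=0.93)
    print(apply_label(item).label)  # -> "AI-generated content"
```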
Government Concerns and Internal Debates
Even as the government encourages AI adoption with minimal regulatory burden, internal red flags have been raised about data privacy and inference risks, particularly when systems are used by government officials. Key concerns include scenarios where government officers upload internal notes to AI chatbots for summarization, police departments use AI assistants to optimize city CCTV networks, or policymakers employ conversational models to draft inter-ministerial briefs.
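One safeguard commonly discussed for scenarios like these is to redact obvious identifiers before a document leaves an official system for an external AI service. The minimal sketch below illustrates that idea; the patterns and function names are assumptions for illustration and are not measures prescribed by the guidelines.

```python
# Illustrative sketch only: strip obvious identifiers (12-digit IDs, emails,
# Indian mobile numbers) from text before it is sent to an external chatbot.
# The regex patterns are simplistic placeholders, not a vetted redaction policy.

import re

REDACTIONS = {
    r"\b\d{4}\s?\d{4}\s?\d{4}\b": "[ID-REDACTED]",        # 12-digit identifiers
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EMAIL-REDACTED]",   # email addresses
    r"(\+91[-\s]?)?[6-9]\d{9}\b": "[PHONE-REDACTED]",     # Indian mobile numbers
}


def redact(text: str) -> str:
    """Replace obvious identifiers before text leaves an official system."""
    for pattern, replacement in REDACTIONS.items():
        text = re.sub(pattern, replacement, text)
    return text


if __name__ == "__main__":
    note = "Reach the officer at desk.officer@example.gov.in or +91-9876543210; ID 1234 5678 9012."
    print(redact(note))
```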
Two broad areas are under discussion within government circles: whether queries by top functionaries could be mapped to identify priorities, timelines, or weaknesses, and whether anonymized mass usage data from millions of Indian users could benefit global firms. This has sparked debates about protecting official systems from foreign AI services.
Expert Perspectives and Implementation Strategy
Professor Ajay Kumar Sood, Principal Scientific Advisor to the Government of India, stated during the launch: "The guiding principle that defines the spirit of the framework is 'Do No Harm'. We focus on creating sandboxes for innovation and on ensuring risk mitigation within a flexible, adaptive system."
S Krishnan, Secretary of MeitY, emphasized: "Our focus remains on using existing legislation wherever possible. At the heart of it all is human centricity, ensuring AI serves humanity and benefits people's lives while addressing potential harms."
The guidelines were prepared by a high-level committee chaired by Professor Balaraman Ravindran of IIT Madras. According to Abhishek Singh, Additional Secretary at MeitY and CEO of IndiaAI, the committee conducted extensive deliberations and public consultations before refining the final guidelines.
The launch precedes the India–AI Impact Summit 2026, which will mark the first-ever global AI summit hosted in the Global South, positioning India as a key player in shaping the future of artificial intelligence governance.