India Enforces Strict New Rules for AI-Generated Content and Digital Platforms
In a decisive move to combat the proliferation of deepfakes and synthetic media, the Government of India has significantly tightened its digital regulatory framework. The Centre has mandated compulsory labeling, enhanced traceability, and user declarations for all AI-generated content through amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules.
Accelerated Content Removal and Platform Accountability
The amended regulations dramatically reduce response timelines for removing unlawful content. Certain lawful takedown orders must now be complied with within three hours, a substantial reduction from the previous 36-hour window. Other deadlines have been tightened across the board: a 15-day window has been cut to seven days, and a 24-hour requirement has been compressed to 12 hours.
Social media platforms and their senior officers now bear direct compliance responsibility. These intermediaries must acknowledge user grievances within two hours and resolve them within seven days, creating a more responsive digital ecosystem.
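Taken together, the new service-level windows can be expressed as simple deadline arithmetic. The sketch below is purely illustrative (the event names and the mapping are assumptions for this example, not terms from the rules):

```python
from datetime import datetime, timedelta

# Compliance windows under the amended rules (illustrative mapping;
# event names are this sketch's own, not statutory terms).
DEADLINES = {
    "lawful_takedown_order": timedelta(hours=3),     # was 36 hours
    "grievance_acknowledgement": timedelta(hours=2),
    "grievance_resolution": timedelta(days=7),       # was 15 days
}

def deadline(event: str, received_at: datetime) -> datetime:
    """Return the latest time by which the platform must act."""
    return received_at + DEADLINES[event]

received = datetime(2026, 2, 20, 9, 0)
print(deadline("lawful_takedown_order", received))  # 2026-02-20 12:00:00
```

A real compliance system would of course also track the order's legal basis, business hours, and escalation paths; this only illustrates how sharply the windows have shrunk.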
Comprehensive Framework for Synthetic Content
For the first time in India's regulatory history, AI-generated material—including deepfake videos, synthetic audio, and manipulated visuals—has been formally brought under government oversight. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 will officially come into force on February 20, 2026.
The amended rules introduce a statutory definition of "synthetically generated information" (SGI) and impose mandatory obligations on intermediaries to:
- Identify AI-generated or AI-altered material
- Ensure such content carries clear and prominent disclosures visible to users
- Embed persistent metadata and unique identifiers to enable traceability of content origin and creation tools
Once applied, these mandatory disclosures cannot be altered, hidden, or removed by any party. The rules specifically exclude routine technical edits such as color correction, noise reduction, compression, or translation—provided these modifications do not change the fundamental meaning of the content. Clearly hypothetical or illustrative drafts are also exempt from these requirements.
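As an illustration of what persistent, tamper-resistant disclosure metadata could look like, the Python sketch below attaches a provenance record to a piece of content. The `SGILabel` structure, its field names, and the tool identifier are all hypothetical assumptions for this example; the rules themselves do not prescribe a format:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)  # frozen: the label cannot be altered after creation
class SGILabel:
    """Illustrative persistent disclosure record for synthetic content."""
    content_id: str       # unique identifier derived from the content bytes
    generator_tool: str   # tool or model that produced the content
    is_synthetic: bool    # True for AI-generated or AI-altered material

def make_label(content: bytes, generator_tool: str) -> SGILabel:
    """Derive a reproducible unique identifier and attach provenance fields."""
    content_id = hashlib.sha256(content).hexdigest()
    return SGILabel(content_id=content_id,
                    generator_tool=generator_tool,
                    is_synthetic=True)

def serialize_label(label: SGILabel) -> str:
    """Persist the label as JSON metadata that travels with the content."""
    return json.dumps(asdict(label), sort_keys=True)

label = make_label(b"example synthetic video bytes", "hypothetical-gen-model-v1")
print(serialize_label(label))
```

The frozen dataclass mirrors the requirement that disclosures, once applied, may not be modified; a production system would rely on a cryptographically tamper-evident provenance standard rather than plain JSON.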
Enhanced Compliance for Large Platforms
Large social media platforms face particularly stringent compliance requirements under the new framework. Before any content goes live, platforms must ensure users declare whether it is AI-generated. Intermediaries must deploy automated tools to verify these declarations by analyzing the content's format, source, and technical characteristics.
When content is identified as synthetic, visible labeling becomes mandatory. Platforms that knowingly allow unlabeled AI-generated content to remain online will be treated as having failed their due-diligence obligations, potentially facing significant penalties.
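The declare-verify-label workflow described above can be sketched as a simple decision function. This is a minimal illustration under assumed field names (`declared_synthetic`, `detected_synthetic`, `label_visible`); real platform pipelines would involve far richer signals:

```python
from dataclasses import dataclass

@dataclass
class Upload:
    """Illustrative user submission; field names are assumptions for this sketch."""
    declared_synthetic: bool   # user's pre-publication declaration
    detected_synthetic: bool   # result of the platform's automated analysis
    label_visible: bool        # whether a prominent SGI disclosure is attached

def review(upload: Upload) -> str:
    """Apply the declare-verify-label workflow described above."""
    synthetic = upload.declared_synthetic or upload.detected_synthetic
    if not synthetic:
        return "publish"                  # ordinary content, no label needed
    if upload.label_visible:
        return "publish-with-label"       # synthetic and properly disclosed
    return "due-diligence-failure"        # unlabeled synthetic content

print(review(Upload(declared_synthetic=False,
                    detected_synthetic=True,
                    label_visible=False)))  # due-diligence-failure
```

Note that the check treats automated detection and the user's own declaration symmetrically: content flagged by either path must carry a visible label before publication.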
Enforcement Mechanisms and User Protection
Oversight and enforcement authority rests with the Ministry of Electronics and Information Technology, while users retain the right to appeal platform decisions to the Grievance Appellate Committee. The rules establish clear consequences for misuse of synthetically generated information, particularly when linked to:
- Child sexual abuse material
- Obscene content
- False electronic records
- Impersonation using a real person's identity or voice
- Explosives-related material
Such violations will attract action under multiple criminal laws. Additionally, platforms must warn users at least once every three months about penalties for misuse of AI-generated content, creating ongoing awareness about responsible digital behavior.
This comprehensive regulatory overhaul represents India's most significant step yet toward creating a safer digital environment while balancing innovation with accountability in the rapidly evolving artificial intelligence landscape.
