India Enacts Landmark Regulations for AI-Generated Content
The Central Government of India has officially notified significant amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, bringing artificial intelligence-generated content under a structured regulatory framework for the first time. The new rules, set to take effect on February 20, 2026, specifically target deepfake videos, synthetic audio, and algorithmically altered visuals that have proliferated across digital platforms.
Mandatory Labeling and Traceability Requirements
Under the updated regulatory framework, social media platforms and digital intermediaries must ensure that all synthetically generated information (SGI) is clearly and prominently labeled so users can immediately distinguish it from authentic content. Platforms are required to embed persistent metadata and unique identifiers to make such content traceable back to its original source. Crucially, intermediaries cannot allow the removal or suppression of these labels or metadata once they have been applied, creating a permanent digital trail for synthetic media.
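The rules do not prescribe a particular technical scheme for labels or identifiers. Purely as an illustration of the underlying idea — a visible label paired with a content-derived identifier that stops matching if the media is altered or the label is stripped — a minimal sketch might look like the following (all function and field names here are hypothetical, not drawn from the regulations):

```python
import hashlib
import json

def make_sgi_record(media_bytes: bytes, declared_synthetic: bool) -> dict:
    """Build a hypothetical provenance record for uploaded media.

    The content ID is a SHA-256 digest of the media bytes, so any
    alteration of the file yields a different identifier.
    """
    return {
        "sgi": declared_synthetic,  # the uploader's declaration
        "label": "AI-generated" if declared_synthetic else None,
        "content_id": hashlib.sha256(media_bytes).hexdigest(),
    }

def label_intact(record: dict, media_bytes: bytes) -> bool:
    """Check that the record still matches the media it was issued for."""
    return (
        record.get("content_id") == hashlib.sha256(media_bytes).hexdigest()
        and (not record.get("sgi") or record.get("label") == "AI-generated")
    )

media = b"...synthetic frame data..."
record = make_sgi_record(media, declared_synthetic=True)
print(json.dumps(record, indent=2))
print(label_intact(record, media))        # True: label and media agree
print(label_intact(record, b"tampered"))  # False: media changed, ID mismatch
```

Real deployments would more plausibly rely on embedded metadata standards (for example, C2PA-style content credentials) rather than an external record, but the verification principle — recompute the identifier and compare — is the same.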
Enhanced Responsibilities for Major Platforms
Significant social media intermediaries—including platforms like Instagram, YouTube, and Facebook—face stricter obligations under the new regulations. Before any content upload, these platforms must obtain a user declaration stating whether the material is synthetically generated, and must deploy automated tools to verify those claims. Content flagged as AI-generated must carry a visible disclosure before going live. The government notably dropped an earlier proposal from the October 2025 draft that would have required visible watermarks covering at least 10% of screen space on AI-generated visuals, after industry pushback called the rule too rigid and technically impractical across different formats.
Sharply Compressed Takedown Timelines
The amendments dramatically reduce response times for content removal. In specific cases, platforms now have just three hours to act on lawful takedown orders, down from the previous 36-hour window. Other response timelines have been compressed from 15 days to seven days and from 24 hours to just 12 hours, significantly accelerating the enforcement process against problematic content.
Regular User Warnings and Compliance Measures
The rules also mandate that platforms warn users at least once every three months about penalties for violating the new provisions, including the misuse of AI-generated content. This regular notification requirement aims to increase user awareness of the legal consequences of creating or sharing synthetic media without proper disclosure. Taken together, the amendments represent India's most significant step yet toward addressing the challenges posed by advanced AI technologies in the digital information ecosystem.
