India's New IT Rules Mandate Clear Labeling for AI-Generated Content
India's new Information Technology rules targeting AI-generated content came into effect today. The regulations require social media platforms to label deepfakes, synthetic audio, and altered visuals with visible markers that users can identify at a glance.
Amendments to the IT Rules and Key Definitions
The amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, were notified earlier this month on February 10 via Gazette Notification G.S.R. 120(E). This notification was signed by Ajit Kumar, Joint Secretary of the Ministry of Electronics and Information Technology.
Under the new framework, platforms must embed metadata and unique identifiers in all synthetically generated content to ensure traceability back to its source. Once applied, these markers cannot be altered, hidden, or deleted. This marks the first time AI-generated content has been brought under a formal regulatory framework in India.
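The traceability requirement described above could, in principle, be met by deriving a unique identifier from the content itself and recording it in a provenance manifest, so that any alteration of the file breaks the match. The sketch below is a minimal illustration of that idea in Python; the manifest field names and the `generator` parameter are hypothetical, not taken from the notification.

```python
import hashlib
from datetime import datetime, timezone

def make_provenance_manifest(content: bytes, generator: str) -> dict:
    """Build a hypothetical provenance record for synthetically
    generated content: a SHA-256 hash of the bytes serves as the
    unique identifier required for traceability."""
    return {
        "label": "synthetically-generated",              # visible-label flag
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,                          # e.g. the AI tool used
        "issued_at": datetime.now(timezone.utc).isoformat(),
    }

def marker_intact(content: bytes, manifest: dict) -> bool:
    """Re-derive the identifier and compare: a stripped or edited
    file no longer matches the recorded hash."""
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]

original = b"synthetic-video-bytes"
manifest = make_provenance_manifest(original, generator="example-gen-model")
print(marker_intact(original, manifest))           # True
print(marker_intact(b"tampered-bytes", manifest))  # False
```

Real deployments would embed such a record in the file's metadata or a cryptographically signed manifest rather than a plain dictionary, but the hash-based identifier captures why the rules can prohibit deletion or alteration of markers: any change to the content is detectable.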
The government has formally defined "synthetically generated information" for the first time. It refers to any audio, visual, or audio-visual content created or altered using a computer in a way that appears real and depicts people or events such that it could be mistaken for genuine. This includes:
- Deepfake videos
- AI-generated voices
- Face-swapped images
- AI-generated images of fictional scenarios involving real people
However, not all digital editing work falls under this category. Exempted categories cover routine edits that do not alter the substance of the original content, such as:
- Colour correction
- Noise reduction
- File compression
- Text translation
- Accessibility improvements
Additionally, conceptual or illustrative content in documents, research papers, PDFs, and presentations is exempted. The notification specifically excludes content created for "hypothetical, draft, template-based or conceptual" purposes. For example, an office PowerPoint deck with a stock AI illustration is exempt, while a deepfake of a politician delivering a speech they never gave is not.
Impact on Social Media Users and Platforms
For users of platforms like Instagram, YouTube, or any major social media service, the most visible change will be the addition of labels. Any AI-generated post, reel, video, or audio clip will now carry a clear tag indicating it was machine-generated, visible before users like, share, or forward it.
When uploading content, users may be asked by platforms whether it was developed or modified by AI. Providing a false statement is no longer just a violation of terms of service; it may also invite legal consequences under the Bharatiya Nyaya Sanhita (BNS) or POCSO Act, depending on the content. Platforms must remind users of this requirement at least once every three months.
Services that host or distribute AI-generated content must clearly mark it as such, not in small print or metadata, but directly on the content itself. They must also imprint permanent markers and unique identifiers to ensure traceability and prohibit any attempt to delete or alter these markers. This closes a loophole that could have allowed markers to be erased upon re-upload.
Large services such as Instagram, YouTube, and Facebook have additional obligations. Before any file is posted, they must require users to declare whether the content is AI-generated and deploy automated tools to verify that declaration. If a service knowingly hosts unmarked AI-generated content, it forfeits its legal safe harbour protections.
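The declare-and-verify obligation above amounts to a gate at upload time: collect the user's declaration, run an automated check, and block a mismatch. The sketch below illustrates that flow in Python. The detector here is a trivial stand-in (it just looks for a known watermark string), since the rules do not prescribe a specific detection method; both function names are hypothetical.

```python
def looks_ai_generated(content: bytes) -> bool:
    """Placeholder for a real detection model. This trivial stand-in
    flags content carrying a known generator watermark string."""
    return b"ai-watermark" in content

def accept_upload(content: bytes, user_declared_ai: bool) -> str:
    """Hypothetical upload gate: cross-check the user's declaration
    against the automated check before publishing."""
    detected = looks_ai_generated(content)
    if detected and not user_declared_ai:
        # A false declaration can carry legal consequences, so the
        # platform refuses rather than silently relabeling.
        return "rejected: undeclared AI-generated content"
    if user_declared_ai or detected:
        return "published with AI label"
    return "published"

print(accept_upload(b"ai-watermark ...", user_declared_ai=False))
# rejected: undeclared AI-generated content
```

In practice the automated check would be a trained classifier or a provenance-metadata lookup, but the control flow — declaration, verification, label or refusal — is what the rules require of large platforms.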
Initially, the rules specified that visual markers must cover at least 10% of the screen and that audio markers must play during the first 10% of a clip. This requirement was removed after pressure from industry groups; labeling remains mandatory, but without a prescribed size or duration.
Tightened Response Timelines and Legal Updates
Response timelines have been significantly tightened. For certain government orders, platforms now have three hours to act, down from 36 hours. Other deadlines have been reduced from 15 days to seven and from 24 hours to 12.
Platforms must actively use automated tools to block AI-generated content that violates the law. This includes:
- Child sexual abuse material
- Obscene content
- Fake electronic records
- Content related to weapons or explosives
- Deepfakes designed to misrepresent real people or events
The rules also update legal references, replacing mentions of the Indian Penal Code with the Bharatiya Nyaya Sanhita, 2023. The draft rules were first published in October 2025, and platforms had until February 20 to comply with the final notification.
