India's New IT Rules Regulate AI-Generated Content, Mandate Clear Labeling


In a significant move to combat digital misinformation, the central government officially notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, on February 10. These changes, filed as G.S.R. 120(E) and signed by MeitY Joint Secretary Ajit Kumar, bring synthetically generated content under formal regulation for the first time in India, with the rules set to take effect from February 20.

Defining Synthetically Generated Information

The gazette notification provides a clear definition of synthetically generated information (SGI) as any audio, visual, or audio-visual content that is artificially or algorithmically created, modified, or altered using a computer resource. Crucially, this content must appear real or authentic and depict people or events in a manner that could be mistaken for genuine. This broad definition encompasses deepfake videos, AI-generated voiceovers, and face-swapped images—essentially any machine-generated media designed to mimic reality.

However, the government has included specific exemptions to avoid overregulation. Routine editing tasks such as color correction, noise reduction, compression, transcription, translation, and accessibility adjustments are excluded, provided they do not distort the original meaning. Additionally, content created for illustrative or conceptual purposes in documents, research papers, PDFs, presentations, or training materials is not classified as SGI. The notification also explicitly excludes content intended for hypothetical, draft, template-based, or conceptual uses.

Platform Obligations and Labeling Requirements

Under the new rules, any intermediary that enables or facilitates the creation or dissemination of SGI must label it clearly, prominently, and unambiguously. The label must be visible directly on the content itself, not hidden in fine print or metadata. Platforms are also required to embed persistent metadata and unique identifiers into SGI, to the extent technically feasible, ensuring traceability back to their systems. Once applied, these markers cannot be removed or tampered with, closing a previous loophole where labels could disappear upon re-upload.
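The notification does not prescribe a technical standard for these embedded markers (industry proposals such as C2PA Content Credentials are one candidate). As a rough illustration only, a platform might pair a visible label with a content-derived unique identifier in a provenance record; everything in the sketch below, including the helper function and field names, is hypothetical and not anything mandated by the rules:

```python
import hashlib
import json

def make_sgi_record(content: bytes, platform: str, tool: str) -> dict:
    """Build an illustrative provenance record for synthetically
    generated content: a visible label string plus a unique
    identifier derived from the content bytes."""
    content_id = hashlib.sha256(content).hexdigest()
    return {
        "label": "AI-generated content",  # label shown on the content itself
        "content_id": content_id,         # unique identifier for traceability
        "origin_platform": platform,      # system that produced the content
        "generation_tool": tool,
    }

# Hypothetical usage: attach the record alongside the media file.
record = make_sgi_record(b"<video bytes>", "example-app", "example-model")
print(json.dumps(record, indent=2))
```

Because the identifier is a hash of the content itself, any edit to the media yields a different identifier, which is one simple way a marker can remain bound to the content it describes.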

Significant social media intermediaries, such as Instagram, YouTube, and Facebook, face additional stringent obligations. Before any upload goes live, they must require users to declare whether the content is synthetically generated and deploy automated tools to verify those declarations. If content is confirmed as AI-generated, the platform must display a visible label or notice on it. Failure to comply could result in liability: knowingly permitting or promoting unlabeled synthetic content is deemed a failure of due diligence, potentially jeopardizing safe harbor protections.

Notably, the final version of the rules has rolled back a controversial proposal from the October 2025 draft, which mandated that visual labels cover at least 10% of the display area and audio markers play during the first 10% of clips. Industry feedback, including from bodies like IAMAI, criticized this as rigid and unworkable, leading to its removal in the final notification.

Stricter Timelines and User Warnings

The amendments introduce compressed timelines for platform compliance with government orders. In certain cases, platforms now have just three hours to act, down from 36 hours previously. Other deadlines have been reduced from 15 days to seven and from 24 hours to 12. Additionally, platforms must use automated tools to actively block SGI that violates the law, including categories such as child sexual abuse material, obscene or pornographic content, false electronic records, content related to explosives or weapons, and deepfakes intended to deceive by misrepresenting real people or events.

On the user front, intermediaries are now obligated to issue warnings at least once every three months—through terms of service, privacy policies, or other means, in English or any Eighth Schedule language—about the penalties for misusing AI content. Consequences range from account termination to mandatory reporting to law enforcement under the Bharatiya Nyaya Sanhita, 2023 (BNS) or the Protection of Children from Sexual Offences Act (POCSO Act). The gazette also updates legal references, replacing the Indian Penal Code with the BNS to align with India's new criminal law framework.

Impact on Social Media Users

For everyday social media users, the most noticeable change will be the introduction of clear labels on AI-generated posts, reels, videos, and audio clips across major platforms. This disclosure aims to inform users about the nature of content before they engage with it through likes, shares, or forwards. Users uploading content may also face a declaration step, requiring them to confirm whether AI tools were used in creation or alteration. Misrepresenting this declaration could lead to penalties under the BNS or POCSO Act, depending on the content's nature.

Platforms are further mandated to send periodic reminders—at least quarterly—about the rules governing AI content and the repercussions of violations. These updates are expected to appear in revised terms of service, privacy policies, or in-app notifications, ensuring users are consistently informed.

The draft rules were initially published in October 2025, with public feedback invited until November 13 after an extension. With the final notification now live, platforms have until February 20 to implement these comprehensive changes, marking a pivotal step in India's efforts to regulate the rapidly evolving landscape of artificial intelligence and digital media.