India's Revised IT Rules Take Aim at AI Deepfakes with Strict Deadlines
India is intensifying its regulatory approach to combat the misuse of artificial intelligence, particularly deepfakes, through amended Information Technology Rules due to take effect in 2026. The new regulations mandate that online platforms remove non-consensual sexual imagery, including AI-generated deepfakes, within a stringent two-hour window of receiving a complaint. This marks a significant shift from the government's earlier strategy of calibrated restraint, which relied on the AI Governance Guidelines and the Digital Personal Data Protection Act to foster innovation while testing corporate compliance.
Key Provisions of the 2026 IT Rules
Under the revised framework, platforms are required to take swift action against various forms of unlawful content. Specifically:
- Non-consensual sexual imagery and deepfakes must be removed within two hours of a complaint.
- Other unlawful content must be taken down within three hours following a government or court order.
- AI-generated content must be clearly labeled to ensure transparency.
- Platforms offering AI tools must implement measures to prevent the creation or dissemination of child sexual abuse material (CSAM), explosives-related content, and fraudulent deepfakes.
- User complaints must be resolved within seven days, adding a layer of accountability for tech companies.
Global Context and India's Stringent Approach
India is not alone in tightening oversight of digital content. Germany's NetzDG law, for example, gives platforms 24 hours to remove "manifestly illegal" content; the European Union's Digital Services Act requires expeditious action without setting specific timeframes; and Australia's eSafety regime permits 24-hour takedown notices in serious cases. However, India's two-hour deadline stands out as particularly rigorous, reflecting a growing urgency to address AI-driven harms such as deepfakes that clone faces and voices for fraud or harassment.
Challenges in Implementation and Enforcement
Despite the laudable intent, these rules face several hurdles to effective enforcement:
- Execution Difficulties: India's linguistic diversity, cultural complexity, and sheer content volume make contextual judgments about fraudulent posts difficult at scale. The automated systems large platforms rely on may struggle to detect synthetic content accurately, and tight deadlines invite false positives: legitimate content removed hastily to meet the clock.
- Labeling and Traceability Issues: While labeling AI-generated content seems straightforward, much of it is edited and reposted across platforms, complicating enforcement. Traceability is also problematic: metadata can be stripped or altered, watermarks degraded by recompression, and open-source models trained to avoid detectable markers (the sketch after this list shows how easily embedded metadata is lost).
- Risk to Free Expression: There is a concern that traceability tools meant to catch fraudsters could be misused for surveillance, potentially exposing whistleblowers or citizens sharing lawful but sensitive content. In a diverse society, satire might be misinterpreted as blasphemy, raising questions about the clarity of definitions in the rules.
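To make the metadata point concrete, here is a minimal Python sketch, assuming the Pillow imaging library (pip install Pillow); the file names are hypothetical. An ordinary re-save writes a new JPEG without copying EXIF data, silently discarding any provenance label embedded there:

```python
# Minimal sketch of metadata fragility, assuming the Pillow library.
# The file names are hypothetical placeholders.
from PIL import Image

original = Image.open("labeled_image.jpg")      # a JPEG carrying an EXIF provenance tag
print("exif" in original.info)                  # True for a typically tagged JPEG

# Pillow does not copy EXIF data on save unless it is passed explicitly,
# so a plain re-save produces a clean file with the label gone.
original.save("reposted_copy.jpg", quality=95)

reposted = Image.open("reposted_copy.jpg")
print("exif" in reposted.info)                  # False: the provenance label is lost
```

Screenshots, transcodes, and messaging-app compression have the same effect, which is why labels that travel as metadata are so hard to enforce once content starts circulating.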
Balancing Regulation with Innovation and Rights
The rules emphasize cracking down on clearly unlawful content like CSAM, non-consensual imagery, incitement to violence, and fraud. However, they must not stifle freedom of expression or innovation. The success of these measures will depend less on strict deadlines and more on:
- Clear definitions of unlawful content.
- Transparent enforcement mechanisms.
- Independent oversight bodies.
- Credible redressal systems for false alarms.
As AI technology continues to evolve, cryptographic provenance chains (sketched below) and platform monitoring may help raise the cost of deception, but they may not suffice against advancing AI capabilities. Ultimately, India's approach highlights a global trend toward stricter AI governance, yet its effectiveness remains to be seen in practice.
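The rules do not prescribe any particular provenance mechanism, but the idea behind a cryptographic provenance chain can be illustrated in a few lines. The Python sketch below is conceptual only: it follows no specific standard such as C2PA, and the record fields are invented for the example. Each record hashes both the content and the previous record, so rewriting any part of the history breaks verification:

```python
# Conceptual sketch of a content provenance chain; the record structure is
# invented for illustration and follows no specific standard.
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding of a record."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_record(chain: list, content: bytes, action: str) -> None:
    """Link a new provenance record to the tail of the chain."""
    record = {
        "action": action,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "prev_hash": chain[-1]["hash"] if chain else None,
    }
    record["hash"] = record_hash(record)  # seals the three fields above
    chain.append(record)

def verify_chain(chain: list) -> bool:
    """Recompute every link; any altered record invalidates the chain."""
    prev = None
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev_hash"] != prev or record_hash(body) != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain: list = []
append_record(chain, b"original clip bytes", "created")
append_record(chain, b"edited clip bytes", "ai_edit")
print(verify_chain(chain))        # True: the recorded history is intact
chain[0]["action"] = "forged"     # tamper with an earlier record
print(verify_chain(chain))        # False: the tampering is detected
```

Even so, such a chain only proves that a recorded history was not rewritten after the fact; it cannot establish that the original content was honest, and in practice the records travel as metadata that can itself be stripped, as the earlier sketch shows.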
