In a significant regulatory move, the Ministry of Electronics and Information Technology (Meity) has issued a stern warning to major social media companies operating in India. The government has threatened legal action against platforms like Facebook, Instagram, and YouTube for their 'below par' reporting and handling of sexually explicit and pornographic material.
Government's Stern Advisory and Legal Threats
Late on Monday, Meity sent an advisory to significant social media intermediaries, expressing deep concern over the circulation of content that violates India's laws on decency and obscenity. The ministry stated that persistent reports from the public and stakeholders, along with judicial observations, have highlighted this systemic failure.
This marks the third such regulatory action by New Delhi in 2024. In February, the Ministry of Information & Broadcasting directed OTT and social media platforms to strictly block explicit content. That was followed by a major crackdown in July, when the government banned over 20 streaming platforms for knowingly hosting such material.
The advisory emphasized growing public anxiety about the responsible use of digital spaces. It stressed that the constitutional right to free speech is subject to reasonable restrictions, and that platforms must ensure greater consistency in identifying and removing unlawful content.
Consequences of Non-Compliance: Loss of Legal Shield
The ministry's notice carries substantial legal weight. It explicitly states that failure to adhere to stricter due diligence obligations under the IT Rules, 2021 will result in platforms losing their crucial 'safe harbour' protection under Section 79 of the IT Act, 2000.
This legal immunity protects intermediaries from liability for user-posted content, provided they follow government-prescribed takedown protocols. Without it, companies could face direct criminal prosecution. The advisory specifically mentions potential charges under the new Bharatiya Nyaya Sanhita, 2023.
Furthermore, the government is in parallel discussions with Big Tech about mandating labels for AI-generated and modified sexually explicit content, indicating a broader push for content accountability.
How Platforms Are Responding: A Look at the Numbers
When approached for comment, social media firms directed attention to their transparency reports. The data reveals a mixed picture of content moderation efforts.
Google's YouTube reported removing 12.1 million videos globally between July and September 2024, 98% of which were taken down by automated systems. Notably, over 62% of all removals were for child abuse and pornographic content.
Meta's report for the same quarter showed a nuanced trend. While the company flagged 40.4 million pieces of content for sexual violations across Facebook and Instagram (a 15% drop from the previous quarter), it acknowledged an increase in the 'prevalence' of adult nudity and sexual activity. Meta attributed this rise to improved reviewer training and workflows, which changed how such content is measured and labeled.
Emails sent to Meta and Google seeking India-specific data and direct responses to Meity's latest directive did not receive an immediate reply.
The government's latest advisory signals a hardening stance, moving beyond warnings to outlining clear legal repercussions. It places the onus squarely on social media giants to significantly bolster their content moderation systems to align with Indian laws and societal expectations.