In a significant move, the social media platform X, owned by Elon Musk, has admitted to lapses and taken stringent action following government pressure over its artificial intelligence tool, Grok. The company has blocked 600 user accounts and removed over 3,500 pieces of content related to the generation of objectionable imagery.
Government Pressure Leads to Action
This crackdown comes after the Indian government expressed strong dissatisfaction with X's initial response to a formal notice issued on January 2, 2026. The Ministry of Electronics and Information Technology (MeitY) had flagged a "serious failure" in the platform's safeguards, stating that X was not adhering to its statutory due diligence obligations under the Information Technology Act, 2000, and the associated IT Rules of 2021.
The core issue involved users exploiting the Grok AI chatbot to create and share sexualized and objectionable images of women without their consent. In many instances, users would reply to public photos of women with prompts asking Grok to alter their appearance by adding revealing clothing or creating suggestive visuals, and the AI tool would comply.
From Global Scrutiny to Formal Admission
The problem attracted global attention, with regulators in the European Union, the United Kingdom, and Malaysia also scrutinizing X's practices. According to government sources, X's formal response, sent on January 11, 2026, marked a turning point. The company finally acknowledged the severity of the issue, admitting its mistake and committing to stricter compliance.
In its communication, X assured the Indian government that it would permanently disable accounts engaged in such activities and take action against those creating inappropriate prompts for Grok. The company also pledged that obscene imagery would not be permitted on the platform going forward.
A Call for Comprehensive AI Review
The government's notice had gone beyond demanding immediate content removal. It directed X to undertake a comprehensive technical, procedural, and governance-level review of Grok AI. The review was to cover the AI's prompt-processing mechanisms, output generation, and the safety guardrails around image handling, so as to prevent the generation of nude, sexualized, or otherwise unlawful content.
Initially, the government had been unconvinced by X's stance, citing the absence of both a technical explanation for the AI's failure and concrete steps to prevent such misuse at the source. The recent admission and large-scale account blocking appear to be a direct result of sustained governmental pressure.
The incident highlights the growing challenges of regulating generative AI on social media platforms and underscores the Indian government's increasing assertiveness in holding global tech giants accountable under local laws designed to protect citizens from digital harm.