Elon Musk's artificial intelligence chatbot, Grok, is embroiled in a major controversy over a feature that lets users generate sexualised images of individuals without their consent. The tool, which has drawn widespread condemnation, can digitally alter photos sourced from the social media platform X, stripping clothing from pictures of real people or depicting them in revealing attire.
How the Grok AI Feature Sparked a Global Outcry
The situation intensified following the introduction of an "edit image" button in late December. This tool permitted users to modify any image on X. Reports soon emerged that it was being misused to partially or fully remove clothing from photographs of women and children. Since its launch on Christmas Day, Grok's official X account has been inundated with sexually explicit requests.
In a move that added fuel to the fire, Elon Musk appeared to trivialise the serious issue. He responded to AI-edited images of famous personalities, including a deepfake of himself in a bikini, by posting laugh-cry emojis, drawing further criticism for not addressing the core problem.
India's Stern Response and Global Investigations
In a significant development, India's Ministry of Electronics and Information Technology (MeitY) sent a formal letter to X on Friday. The ministry accused the platform of a "failure to observe statutory due diligence obligations" under the Information Technology Act, 2000. MeitY demanded an Action Taken Report detailing steps to prevent the hosting, generation, and uploading of obscene and sexually explicit content through AI services like Grok.
The letter explicitly instructed X Corp's India Operations to immediately cease all activities related to such prohibited content. It warned that non-compliance could lead to the loss of legal liability exemptions under Section 79 of the IT Act and invite other legal consequences.
In parallel, the scandal has triggered international legal action. In Paris, the public prosecutor's office has broadened an existing investigation into X to include fresh allegations that Grok is being used to create and distribute child pornography.
Human Cost and Grok's Acknowledgment
The controversy has a deeply personal impact on victims. One woman, Samantha Smith, told the BBC she felt "dehumanised and reduced into a sexual stereotype" after Grok was used to digitally remove her clothing from an image. She described the violation as feeling as real as if an actual nude photo had been circulated.
Facing mounting pressure, Grok's official account on X acknowledged the lapses. The chatbot stated, "We've identified lapses in safeguards and are urgently fixing them." It also emphasised a zero-tolerance policy towards illegal material, asserting that "CSAM (Child Sexual Abuse Material) is illegal and prohibited."
The incident highlights the urgent need for robust ethical safeguards in rapidly evolving AI technology, especially concerning image manipulation and user consent. It also underscores the growing global regulatory scrutiny over how tech platforms manage AI-generated content.