UK Media Regulator Investigates Elon Musk's X Over Grok AI Deepfake Concerns

Britain's media regulator, Ofcom, has officially launched an investigation into Elon Musk's social media platform X. The probe focuses on concerns that its Grok AI chatbot is generating sexually intimate deepfake images, content that would breach the platform's legal duty to protect users in the UK from illegal material.

Regulator Takes Swift Action

Ofcom announced the investigation on Monday, saying it was deeply concerned by reports that Grok is being used to create and share illegal non-consensual intimate images, including child sexual abuse material. Platforms operating in Britain must protect people from such harmful content, and Ofcom stated it will not hesitate to investigate companies suspected of failing in their duties, especially when children are at risk.

X's Response and Stance

When questioned about the investigation, X pointed to a previous statement. The platform emphasized it takes action against illegal content, including child sexual abuse material. Measures include removing content, permanently suspending accounts, and cooperating with governments and law enforcement. X warned that anyone using or prompting Grok to produce illegal content will face the same consequences as those uploading such material directly.

Political Pressure Mounts

Prime Minister Keir Starmer called the images produced by Grok "disgusting" and "unlawful" last Thursday. He urged Musk's X to "get a grip" on the AI chatbot. A spokesperson for Starmer later rejected Musk's claim that the government aims to suppress free speech. The spokesperson clarified the government's concern is solely about child sexual abuse imagery and violence against women and girls.

Business Secretary Peter Kyle confirmed that X could be banned in the UK, though he noted the power to impose such a ban rests with Ofcom. Tech Minister Liz Kendall welcomed the formal investigation, urging Ofcom to complete it swiftly.

Testing Britain's Online Safety Law

The Grok case represents a significant test for Britain's Online Safety Act, enacted in 2023. Ofcom is implementing the law in stages. Following initial actions against porn sites lacking effective age checks, this investigation into X marks a major enforcement step. Creating or sharing non-consensual intimate images or child sexual abuse material, including AI-generated content, is illegal in Britain. Tech platforms are legally required to prevent British users from encountering such content and to remove it promptly upon discovery.

International Scrutiny Intensifies

X faces mounting international criticism over Grok's capabilities. The feature can generate images of women and minors in revealing clothing. French officials have reported X to prosecutors and regulators, labeling the content "manifestly illegal." Indian authorities have also demanded explanations from the platform. Over the weekend, Indonesia and Malaysia temporarily blocked access to Grok.

In response, X stated it has restricted requests to undress people in images to paying subscribers only. Ofcom's investigation will assess whether X failed to properly evaluate the risk of British users encountering illegal content. The regulator will also examine if X adequately considered the specific risks posed to children.

Potential Consequences for Non-Compliance

In the most severe cases of non-compliance, Ofcom possesses significant enforcement powers. The regulator could ask a court to order payment providers or advertisers to withdraw their services from the platform. Furthermore, Ofcom could require internet service providers to block access to X within Britain entirely. This investigation adds to the growing pressure on X, which already faces multiple criminal and regulatory probes worldwide.