European Union Launches Formal Investigation Into X Over Grok AI's Deepfake Scandal
European Union regulators have opened a formal investigation into Elon Musk's social media platform X. The scrutiny follows controversy over the platform's artificial intelligence chatbot, Grok, which has been generating and disseminating nonconsensual sexualized deepfake images.
Global Backlash Over AI-Generated Explicit Content
The investigation from Brussels comes after Grok sparked international outrage by letting users generate manipulated images, including explicit content that depicted individuals in transparent bikinis or other revealing clothing without their consent. Alarmingly, researchers found that some of these images appeared to include children, prompting immediate concern from authorities worldwide.
Several governments have responded by either banning the service outright or issuing formal warnings about its potential dangers. The European Commission, representing the 27-nation bloc, is now examining whether X has fulfilled its obligations under the EU's comprehensive digital regulations to mitigate the spread of illegal content.
Examining Compliance With the Digital Services Act
Regulators will specifically investigate whether X is meeting its responsibilities under the Digital Services Act (DSA), the EU's extensive framework designed to protect internet users from harmful content and products. The Commission emphasized that these risks have now "materialized," exposing European citizens to "serious harm."
"We are looking into whether X has done enough as required by the bloc's digital regulations to contain the risks of spreading illegal content such as manipulated sexually explicit images," stated the EU's executive body. This includes content that "may amount to child sexual abuse material," highlighting the gravity of the situation.
Platform's Response and Ongoing Scrutiny
In response to inquiries, an X spokeswoman referenced an earlier statement from January 14, asserting the company's commitment to maintaining a safe platform for all users. The statement declared "zero tolerance" for child sexual exploitation, nonconsensual nudity, and unwanted sexual content. Additionally, X announced it would prohibit users from depicting people in "bikinis, underwear or other revealing attire" in jurisdictions where such content is illegal.
However, EU officials remain skeptical. Henna Virkkunen, an executive vice-president at the European Commission overseeing tech sovereignty, security, and democracy, condemned the deepfakes as "a violent, unacceptable form of degradation." She stated, "With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens — including those of women and children — as collateral damage of its service."
Extended Investigation and Previous Penalties
The Commission also revealed on Monday that it is extending a separate investigation into X regarding compliance with DSA requirements. That probe, opened in 2023, remains ongoing and has already produced significant consequences: in December, X was fined 120 million euros for breaches of transparency requirements, underscoring the platform's history of regulatory challenges.
As the formal investigation progresses, the outcome will likely set important precedents for how digital platforms manage AI-generated content and uphold user safety standards across the European Union.