Apple Reportedly Threatened to Ban Grok AI App Over Sexualized Deepfake Scandal
Earlier this year, Apple reportedly threatened to remove the Grok artificial intelligence application, owned by Elon Musk's companies X (formerly Twitter) and xAI, from its App Store. According to an exclusive report from NBC News, Apple sent a letter to United States senators detailing its behind-the-scenes efforts to address viral incidents involving sexualized deepfakes generated by the Grok AI platform.
Backlash Over AI-Generated Sexualized Content
Elon Musk's Grok AI chatbot and the X platform faced substantial public backlash after the AI system generated sexually explicit images of people without their consent. The controversy intensified because many of these images targeted women and children, raising serious ethical and safety concerns among users and regulators alike.
The controversy placed Apple under mounting pressure to remove both the Grok and X applications from its App Store. While Apple remained publicly silent during the initial phase of the scandal, the NBC News investigation reveals that the company had privately determined both X and Grok were violating its content guidelines and had communicated its intention to potentially remove the Grok application from its storefront entirely.
Apple's Intervention and Content Moderation Demands
According to the detailed report, Apple initiated contact with the development teams behind both X and Grok after receiving numerous complaints and observing extensive news coverage of the deepfake scandal. The technology company required the application developers to create and implement a comprehensive plan to significantly improve their content moderation systems and practices.
The X platform submitted an updated version of the Grok application for review, but Apple rejected it, determining that the changes did not adequately address the content moderation concerns and failed to meet the standards required for compliance with App Store policies.
Apple's Official Statement and Resolution
In its official correspondence to Elon Musk's X and xAI companies, Apple provided specific details about its evaluation process. The company stated: "Apple reviewed the next submissions made by the developers and determined that X had substantially resolved its violations, but the Grok app remained out of compliance. As a result, we rejected the Grok submission and notified the developer that additional changes to remedy the violation would be required, or the app could be removed from the App Store."
The letter continued: "Following further engagement and changes by the Grok developer, we determined that Grok had substantially improved and therefore approved its latest submission." This statement confirms that Apple maintained its enforcement position until satisfactory improvements were implemented by the development team.
Ongoing Concerns and Current Status
Despite these interventions and improvements, a separate NBC News investigation indicates that the Grok AI system continues to generate sexualized images of individuals without their consent. That report documented dozens of such cases over the past month, though it noted that the volume of problematic images has decreased significantly since January of this year.
This incident highlights the growing challenges technology companies face in regulating artificial intelligence systems and their outputs, particularly concerning deepfake technology and non-consensual image generation. It also demonstrates the increasing scrutiny application marketplaces like Apple's App Store are applying to AI-powered applications and their content moderation capabilities.