Apple & Google App Stores Host Dozens of AI 'Nudify' Apps, Report Reveals

A recent investigation by the Tech Transparency Project has uncovered a disturbing trend on major app platforms. According to the report, both Apple's App Store and Google's Play Store are hosting dozens of applications that utilize artificial intelligence to create non-consensual nude images of individuals.

Widespread Presence of Nudification Apps

The watchdog group's investigation revealed that these so-called "nudify" apps are alarmingly prevalent. Researchers found 55 such applications on the Google Play Store and 47 on the Apple App Store. These apps employ advanced AI algorithms to digitally remove clothing from photographs or superimpose faces onto nude bodies, creating realistic deepfake images without the subject's consent.

Alarming Statistics and Revenue Generation

What makes this discovery particularly concerning is the massive scale of these applications' reach. The report indicates that these nudify apps have been downloaded more than 700 million times collectively. The financial dimension is even more troubling: these applications have generated over $117 million in revenue, with Apple and Google each taking their standard commission on those earnings.

The investigation further revealed that many of these apps are inappropriately rated for younger audiences. For instance, an application called DreamFace is rated as suitable for ages 13 and up on Google Play and ages 9 and up on Apple's App Store, despite its ability to create sexually explicit content.

Security Concerns and International Connections

The Tech Transparency Project also identified significant security implications in its findings. According to the report, 14 of the nudify applications originate from China, raising additional concerns about data privacy and potential security threats. This international dimension further complicates the regulatory landscape for these applications.

Platform Responses and Regulatory Context

Both technology giants have responded to the investigation's findings. Apple stated that it has removed 24 applications from its store, though this falls significantly short of the 47 apps identified by researchers. Google confirmed that it has suspended several apps referenced in the report for violating store policies but declined to say how many.

The report comes amidst growing global concern about AI-generated explicit content. In December, the UK government announced plans to ban nudification apps by making it illegal to create or distribute AI tools that digitally remove clothing from images. This move is part of broader efforts to combat misogyny and reduce violence against women and girls.

Broader Implications and Mental Health Concerns

Nudification apps represent a significant threat to individual privacy and mental health. These tools are used primarily against women and children, and women and girls account for the vast majority of victims depicted in sexually explicit deepfake content online. The psychological impact on victims can be severe and long-lasting.

While possessing AI-generated sexual content involving children is illegal in most jurisdictions, the AI models themselves that create these images often exist in a legal gray area. This regulatory gap allows these applications to proliferate despite their harmful potential.

The investigation also highlighted the role of prominent AI tools in this ecosystem. Both platforms continue to offer access to xAI's Grok, which researchers identified as one of the most prominent tools used to create non-consensual deepfake images. In one alarming finding, Grok reportedly generated approximately three million sexualized images and 22,000 images involving children over just 11 days.

As technology continues to advance, the ethical implications of AI applications become increasingly complex. The presence of these nudify apps on mainstream platforms raises serious questions about content moderation, platform responsibility, and the need for stronger regulatory frameworks to protect individuals from digital exploitation.