AI Deepfakes Pose Growing Threat to India's Top Influencers
India's creator economy faces a serious new challenge: artificial intelligence tools that generate convincing fake images and videos of popular influencers. These deepfakes circulate widely on social media without permission, damaging reputations built over years of work.
Recent Cases Highlight the Problem
Several prominent creators have already taken legal action. Bhuvan Bam, Payal Dhare (known as Payal Gaming), and Slayy Point members Gautami Kawale and Abhyudaya Mohan have all faced unauthorized AI-generated content. Some fakes depicted obscene material; others exploited their identities for commercial gain.
While some secured temporary takedowns through the courts and cyber police, a critical gap remains: most influencers still lack permanent personality rights protection, leaving them vulnerable in India's booming ₹4,500 crore creator economy.
The Scale of the AI Deepfake Threat
Tech giants like Meta, Google, and X race to improve their AI tools. Unfortunately, these same tools can generate sexually explicit deepfakes with alarming ease. The Indian government recently directed platform X to crack down on misuse of its Grok AI for creating "sexualized and obscene" images of women.
A November 2025 McAfee cybersecurity report revealed startling numbers. It found that 90% of Indians had encountered fake or AI-generated celebrity endorsements, and that victims lost an average of ₹34,500 to related scams. Furthermore, 60% of Indians had seen AI-generated content featuring influencers and online personalities, not just mainstream celebrities.
Current Legal Protections Against Deepfakes
India currently addresses this threat through existing laws of general application rather than AI-specific legislation. The Information Technology Act, 2000, punishes the creation of deepfakes to impersonate others, steal identities, invade privacy, or share obscene content.
The 2021 IT Rules require social media platforms to act swiftly. They must remove misleading deepfakes, hate speech, or privacy-violating posts within hours of complaints. Platforms must also label suspicious AI tools and allow user appeals to government panels.
Newer legislation adds more teeth. The 2023 Digital Personal Data Protection Act fines AI firms for using personal information without consent. The Bharatiya Nyaya Sanhita imposes jail terms for spreading deepfake rumors that cause public panic.
In November 2025, the Ministry of Electronics & IT introduced AI governance guidelines that further regulate high-risk AI systems, including deepfake generators, and mandate the disclosure of AI-generated content across platforms.
Why Creators Need Stronger Safeguards
An influencer's persona is now a valuable digital asset. Their name, image, voice, and likeness drive commercial value through endorsements, sponsorships, and brand deals worth millions. Just as celebrities like Amitabh Bachchan and Anil Kapoor protected their "personality rights" in court, influencers now seek similar safeguards.
A landmark November 2025 ruling offered hope. The Delhi High Court made podcaster Raj Shamani the first Indian influencer to secure comprehensive personality rights protection. The court restrained platforms from hosting AI-generated videos, chatbots, or morphed content exploiting his persona without consent. This affirmed that creators' goodwill constitutes protectable intellectual property amid rising digital impersonation threats.
Potential Solutions to the AI Threat
Technical solutions like content labeling and watermarking show promise. These methods embed visible or invisible identifiers into digital content. A logo or unique code asserts ownership, deters unauthorized use, and enables tracking.
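To make the idea concrete, the sketch below shows one simple form of invisible watermarking: hiding a short creator identifier in an image's least significant bits. It is a minimal illustration only, with made-up tag and file names, not the scheme any platform or standard actually mandates; production systems use far more robust watermarks and provenance metadata designed to survive compression and editing.

```python
# Minimal least-significant-bit (LSB) watermark: a toy illustration of how an
# invisible identifier can be embedded in and recovered from an image.
import numpy as np
from PIL import Image


def embed_watermark(image_path: str, out_path: str, tag: str) -> None:
    """Hide a short UTF-8 tag (e.g. a creator ID) in the blue channel's lowest bits."""
    img = np.array(Image.open(image_path).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(tag.encode("utf-8"), dtype=np.uint8))
    blue = img[:, :, 2].flatten()
    if bits.size > blue.size:
        raise ValueError("image too small to hold this tag")
    blue[: bits.size] = (blue[: bits.size] & 0xFE) | bits  # overwrite lowest bit only
    img[:, :, 2] = blue.reshape(img.shape[:2])
    Image.fromarray(img).save(out_path, format="PNG")  # lossless format keeps the bits


def read_watermark(image_path: str, tag_length: int) -> str:
    """Recover a tag of known byte length from the blue channel's lowest bits."""
    img = np.array(Image.open(image_path).convert("RGB"))
    bits = img[:, :, 2].flatten()[: tag_length * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")


# Hypothetical usage:
# embed_watermark("original.png", "tagged.png", tag="creator-id:BB01")
# print(read_watermark("tagged.png", tag_length=len("creator-id:BB01")))
```

Because only the lowest bit of one colour channel changes, the mark is invisible to viewers yet recoverable by a verification tool, which is what makes ownership assertion and tracking possible.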
Prime Minister Narendra Modi advocated for this approach in a 2024 conversation with Microsoft co-founder Bill Gates. The IT Ministry now expects to issue AI-generated content labeling guidelines soon.
The Other Side: AI as a Creative Tool
Interestingly, content creators themselves use AI tools productively. They build digital avatars that slash production time and costs while boosting creativity. Tools like ElevenLabs enable realistic voice cloning, letting creators generate natural-sounding narration or podcast audio in seconds without a studio session.
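At the workflow level, such tools usually amount to a single API call: send a script and a voice identifier, receive audio back. The sketch below is a generic illustration against a hypothetical text-to-speech endpoint; the URL, header, and field names are placeholders (not ElevenLabs' or any vendor's actual API), and the voice ID is assumed to belong to a consenting creator.

```python
# Generic sketch of a cloud text-to-speech call. The endpoint, headers, and
# JSON fields are hypothetical placeholders, not any vendor's real API.
import requests

API_URL = "https://api.example-tts.com/v1/speech"  # placeholder endpoint
API_KEY = "your-api-key"                           # placeholder credential


def narrate(script: str, voice_id: str, out_path: str = "narration.mp3") -> str:
    """Send a script and a (consented) voice ID, then save the returned audio."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": script, "voice": voice_id, "format": "mp3"},
        timeout=60,
    )
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)  # raw audio bytes returned by the service
    return out_path


# Hypothetical usage:
# narrate("Welcome back to the channel!", voice_id="creator-own-voice")
```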
OpenAI's Sora crafts hyper-realistic video clips from simple text prompts, turning what once required days of filming and editing into polished visuals within minutes. This demonstrates AI's dual nature: both threat and opportunity for the creator economy.
The challenge remains balancing innovation with protection. As AI capabilities grow, so must legal frameworks and technical safeguards. India's expanding creator economy needs robust defenses against reputation-damaging deepfakes while embracing AI's creative potential.