Ahmedabad Experts Sound Alarm on AI Weaponization Against Vulnerable Groups
In today's digital age, just a few seconds of audio, a brief video clip, or a seemingly harmless photograph can be enough to upend a life. This stark reality was underscored at the Hacked 2.0 event in Ahmedabad, where cybersecurity and forensic specialists issued urgent warnings. As India's legal frameworks struggle to keep pace with rapid technological advances, women and children remain particularly exposed to what experts term the "weaponisation of technology" through artificial intelligence (AI)-driven abuse.
Deepfake Dangers and Social Media Vulnerabilities
The session, titled "A Toolkit for Women and Children Against Deepfakes and AI Threats," was organized by the Institute of Chartered Accountants of India (ICAI) Ahmedabad in collaboration with the National Forensic Sciences University (NFSU). This event, part of a strategic partnership between The Times of India and NFSU, brought together multimedia forensic scientists and legal experts to address growing concerns.
Dr. Surbhi Mathur, associate dean at NFSU and a multimedia forensic expert, highlighted the real-world repercussions of such offences. She cited the case of Rajathi Kamalakannan, a Chennai-based food truck owner who endured a protracted legal battle after deepfake images circulated online in 2025, severely impacting her livelihood. Dr. Mathur emphasized that an active social media presence significantly elevates risks, noting that many women influencers on platforms like Instagram have confronted deepfake-related issues.
Quantifying the risk, she revealed that vulnerability increases by approximately 15.7% for every 10,000 followers, demonstrating how digital visibility is systematically exploited by predators. "Women are often coerced into silence through threats of social shame and defamation," she added, underscoring the psychological toll.
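For readers who want a feel for what that figure implies, the back-of-the-envelope Python sketch below compounds the reported 15.7% increase per 10,000-follower increment. The session did not specify whether the increase is linear or compounding, so the compounding assumption here is ours, and the output is only an illustration of the reported rate.

```python
# Back-of-the-envelope sketch of the reported figure: vulnerability rising
# roughly 15.7% per 10,000 followers. ASSUMPTION: the increase compounds per
# 10,000-follower increment; the session did not specify linear vs compound.

def relative_risk(followers: int, rate: float = 0.157, step: int = 10_000) -> float:
    """Return a relative-risk multiplier versus an account with ~0 followers."""
    increments = followers / step
    return (1 + rate) ** increments

if __name__ == "__main__":
    for followers in (10_000, 50_000, 100_000):
        print(f"{followers:>7} followers -> ~{relative_risk(followers):.2f}x relative risk")
```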
Detection Strategies and Family Safeguards
Early detection of deepfakes and AI-driven attacks, especially within familial settings, serves as the primary line of defence. Dr. Mathur pointed out common flaws in synthetic media: machines frequently fail to replicate natural blinking, which typically occurs every five to ten seconds, and there is often a noticeable lag, or "latency", between audio and lip movements.
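For the technically inclined, the Python sketch below shows how the blink-frequency cue could be checked programmatically using the widely cited eye-aspect-ratio (EAR) heuristic. It assumes per-frame eye landmarks are already available from a face-landmark detector of your choice; the threshold and frame rate are illustrative, and a low blink rate is a red flag, not proof.

```python
# Minimal sketch of the blink-frequency cue: compute the eye aspect ratio (EAR)
# per frame and flag clips whose blink interval falls far outside the natural
# 5-10 second range cited at the session. ASSUMPTION: six eye landmarks per
# frame are already available from a face-landmark detector.

import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks p1..p6 around one eye (Soukupova & Cech EAR)."""
    d = math.dist
    return (d(eye[1], eye[5]) + d(eye[2], eye[4])) / (2.0 * d(eye[0], eye[3]))

def blink_intervals(ear_per_frame, fps, closed_thresh=0.21):
    """Return seconds between detected blinks from a per-frame EAR series."""
    blink_frames, was_closed = [], False
    for i, ear in enumerate(ear_per_frame):
        closed = ear < closed_thresh
        if closed and not was_closed:      # eye just closed -> count one blink
            blink_frames.append(i)
        was_closed = closed
    return [(b - a) / fps for a, b in zip(blink_frames, blink_frames[1:])]

# A clip that never blinks, or blinks far less often than every 5-10 seconds,
# deserves closer scrutiny; it is a heuristic, not a verdict.
```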
To combat voice-cloning scams, she advocated for families to establish a "safe password"—such as "gulab jamun"—to verify identities during distress calls. Additionally, she recommended utilizing breach-checking services like Have I Been Pwned and free online deepfake-detection tools to assess video authenticity.
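As an illustration of the breach-checking step, the Python sketch below queries Have I Been Pwned's free Pwned Passwords range endpoint, which uses a k-anonymity scheme so the password itself never leaves the device; checking an email address against known breaches is done on the HIBP website or through its separate, keyed API.

```python
# Minimal sketch of a Have I Been Pwned check using the free Pwned Passwords
# range endpoint (k-anonymity model): only the first five characters of the
# password's SHA-1 hash are ever sent over the network.

import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "hibp-check-sketch"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode()
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(pwned_count("password123"))  # a very large number: never reuse this
```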
For child safety, Dr. Mathur stressed that parents must actively supervise gaming and video platforms, implement robust parental control settings, and educate children that personal identifiers—like school names or birth dates—are high-value targets for online predators.
Legal Lacunae and Critical Response Measures
However, when harm occurs, technological solutions alone prove insufficient. Dr. Rajdeep Ghosh, assistant professor at NFSU's School of Law, Forensic Justice and Policy Studies, highlighted a significant legal gap: unlike the United States, India lacks a standalone "take it down" law. Content removal currently operates through a fragmented system of statutory provisions and procedural safeguards.
Under Rule 3(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, amended in 2025, intermediaries are mandated to remove unlawful content within 36 hours of receiving a valid court order or government notice. Photographs posted without consent must be taken down within 24 hours; failure to comply results in the loss of "safe harbour" protection under Section 79 of the IT Act, exposing web platforms to criminal liability.
Dr. Ghosh emphasized that the initial 48 hours after reporting abuse are decisive. Preserving URLs, metadata, and hash values is crucial for investigations under the Bharatiya Sakshya Adhiniyam, 2023. Victims are advised to immediately contact the 1930 cybercrime helpline or access the Sahyog portal. "Delay is the biggest factor in weakening judicial remedies," he cautioned, urging prompt action to mitigate long-term damage.
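For the hash-value step, the Python sketch below computes a SHA-256 digest of a saved screenshot or downloaded clip and records it alongside the source URL and capture time, so the copy handed to investigators can later be shown to be unaltered; the file names and record fields are illustrative, not a prescribed evidentiary format.

```python
# Minimal sketch of the hash-preservation step: record a SHA-256 digest of the
# saved evidence file together with its source URL and capture time. File
# names and fields here are illustrative, not a prescribed format.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def evidence_record(path: Path, source_url: str) -> dict:
    return {
        "file": str(path),
        "sha256": sha256_of_file(path),
        "source_url": source_url,
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    record = evidence_record(Path("screenshot_post.png"),
                             "https://example.com/offending-post")
    Path("evidence_log.json").write_text(json.dumps(record, indent=2))
```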
The insights from Ahmedabad's Hacked 2.0 event underscore an urgent need for enhanced digital literacy, robust legal reforms, and proactive community engagement to shield vulnerable populations from the escalating threats of AI-driven exploitation.