Google Simplifies Reporting of Non-Consensual Images Amid India's Stricter Online Rules

In a significant move to bolster online safety, Google has announced a simplified process for users to report and request the removal of non-consensual explicit images from its search engine. This development comes as the Indian government enforces stricter rules for digital platforms, mandating quicker action on unlawful content and clearer labeling of AI-generated material.

Google's New Reporting Mechanism

Google revealed on Tuesday, February 10, that it has made it easier for individuals to flag non-consensual intimate imagery. Users can now click on the three dots above an image in Google Search, select 'remove result,' and then choose the option 'It shows a sexual image of me.' This streamlined approach aims to reduce barriers for victims seeking to protect their privacy.

Key improvements include:

  • The ability to report multiple images simultaneously through a single form, eliminating the need for individual submissions.
  • Enhanced tracking features under the 'Results about you' tab, where users can monitor the status of their removal requests.
  • Email notifications for any updates on request statuses, ensuring transparency throughout the process.

Google emphasized that removal is only part of the solution. The company is offering optional safeguards to proactively filter out similar explicit results in future searches, providing ongoing protection. Additionally, Google will direct users who submit removal requests to expert organizations for emotional and legal support, addressing the broader impact of such violations.

India's Updated Online Regulations

Coinciding with Google's announcement, the Indian government has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These changes impose stricter timelines for content removal and introduce new requirements for AI-generated content.

The new rules stipulate:

  1. Online platforms must remove non-consensual intimate imagery within two hours, a significant reduction from the previous 24-hour deadline.
  2. Other forms of unlawful content must be taken down within three hours, down from 36 hours.
  3. Mandatory labeling of AI-generated content on social media platforms like YouTube, Instagram, and Snapchat, with labels required to be 'prominently' visible, though the specific size requirement from draft rules has been relaxed.

These regulations respond to the growing challenge of non-consensual explicit imagery, exacerbated by advancements in generative AI. Earlier incidents, such as those involving Grok AI, have highlighted the urgent need for robust safeguards.

Snapchat and OpenAI Introduce Additional Safety Measures

Beyond Google, other tech giants are also stepping up their safety initiatives. Snapchat has expanded its 'Home Safe' feature, now called 'Arrival Notifications,' to allow users to send alerts when they reach various locations, not just home. This update aims to enhance personal safety by automating notifications for routine activities like travel or meetings.

OpenAI, meanwhile, is focusing on protecting teen users in India through a comprehensive blueprint. The company plans to implement age prediction technologies, age-appropriate response policies, and parental controls. OpenAI is collaborating with policymakers, regulators, educators, and child safety experts to ensure these measures are effective.

Existing safeguards in ChatGPT include:

  • In-app reminders for users to take breaks during extended sessions.
  • Directing users to real-world resources if they express suicidal intent.
  • Preventing the generation of child sexual abuse material (CSAM) and child sexual exploitation material (CSEM).

OpenAI stressed that, for teen users, safety takes precedence over privacy and freedom, noting that responses to minors should differ from those to adults. This approach is particularly important in India, where accelerating AI adoption makes integrating AI literacy into education increasingly essential.

Overall, these announcements mark a concerted effort by major tech companies to address online safety concerns, aligning with regulatory changes in India to create a more secure digital environment.