The dawn of 2026 brought a disturbing digital crisis for numerous women on the social media platform X, formerly known as Twitter. Their photographs were weaponised, manipulated into sexually explicit images through the platform's integrated AI chatbot, Grok. This incident has ignited a global firestorm over the unchecked misuse of artificial intelligence and the accountability of technology giants.
India's Stern Warning and a Platform's Inadequate Response
Alarmed by the rapid proliferation of this non-consensual and objectionable content, which also included imagery of minors, authorities worldwide sprang into action. The Government of India issued a formal warning to X, citing its "serious failure" in enforcing necessary safeguards. The notice highlighted potential violations of the IT Rules, 2021 and the Bharatiya Nagarik Suraksha Sanhita, 2023.
X's reaction to the widespread outrage has been widely criticised as insufficient. In a post on January 7, 2026, the company stated that users prompting Grok to create illegal content would face the same consequences as those uploading such material directly. The response was seen as an attempt to deflect the platform's own responsibility for providing a tool that can be effortlessly turned into an instrument of harassment and abuse.
The Magnified Threat of Integrated AI
While non-consensual explicit imagery predates the AI era, the barrier to entry has been drastically lowered. What once required specialised software and technical know-how can now be achieved with a simple text prompt. The problem is particularly acute with tools like Grok, which is embedded within X and can access and share information in real time, allowing harmful content to spread with unprecedented ease and speed.
Nor is the issue confined to X. In October 2025, reports surfaced of multiple accounts on X and Instagram routinely sharing deepfake videos of celebrities, predominantly women. While other tech behemoths like Meta and Google have implemented some form of AI content labelling, enforcement remains inconsistent and largely reactive. Most safety measures depend on users reporting harmful content after it has already been published and caused damage.
The Imperative for Built-in Safeguards Over Empty Promises
There is no denying that AI holds transformative potential and is fast becoming indispensable. However, its breakneck scaling must not sacrifice fundamental user safety and privacy. The classic Silicon Valley mantra of "move fast and break things" is fundamentally at odds with the painstaking work required to build and maintain public trust.
As Big Tech companies continue to seek legal protections and "safe harbour" status, the onus is on them to demonstrate a genuine commitment to user safety. Stronger, proactive safeguards must be engineered directly into the technology as it becomes woven into the fabric of daily life. Without this fundamental shift in approach, calls for legal immunity and public confidence will continue to ring hollow. The buck for AI misuse stops with the creators and platforms that deploy it.