India's law enforcement agencies are increasingly turning to artificial intelligence (AI) to modernize their operations, promising faster investigations and better resource management. However, this technological shift brings with it significant concerns about privacy, bias, and the lack of legal safeguards.
AI Tools on the Frontlines: From Maharashtra to Delhi
In a significant development, the Maharashtra Police has begun using a predictive AI tool named MahaCrimeOS AI. Unveiled in December 2025, the system was built with support from Microsoft on its Azure OpenAI Service and Microsoft Foundry platforms. It integrates AI assistants and automated workflows to help investigators link cases, analyze digital evidence such as PDFs, images, and videos, and respond to threats more swiftly.
The development was a collaboration between the Microsoft India Development Center (IDC), Hyderabad-based firm CyberEye, and MARVEL (Maharashtra Research and Vigilance for Enhanced Law Enforcement). Initially deployed as a pilot across 23 police stations in Nagpur Rural, including cybercrime stations, the system acts as a copilot: it generates immediate investigation plans, guiding officers on steps such as which statements to record or which bank accounts to freeze in complex cases involving narcotics, cybercrime, or financial fraud.
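Neither the police nor Microsoft has published MahaCrimeOS AI's internals, but for readers curious about the general shape of such a "copilot", the minimal sketch below shows what a plan-generating call against Azure OpenAI Service could look like. The deployment name, system prompt, and case details are all hypothetical placeholders, not the actual system.

```python
# Illustrative sketch only: MahaCrimeOS AI's internals are not public.
# This shows the general shape of a plan-generating "copilot" call on
# Azure OpenAI Service. The endpoint, deployment name, prompt, and case
# fields below are hypothetical placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

# Hypothetical case details, invented for illustration.
case_summary = (
    "FIR 123/2025: suspected cyber fraud; complainant reports INR 4.2 lakh "
    "transferred to three unknown accounts after a phishing call."
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical deployment name
    messages=[
        {"role": "system",
         "content": "You are an investigation-planning assistant. Given a "
                    "case summary, list concrete next steps: statements to "
                    "record, accounts to freeze, evidence to seize."},
        {"role": "user", "content": case_summary},
    ],
)
print(response.choices[0].message.content)
```

In any real deployment, output like this would be a starting point for a human investigator to verify, not an automated decision.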
Simultaneously, the Delhi Police is planning a major expansion of AI-assisted facial recognition technology (FRT). As part of a proposed Integrated Command, Control, Communication and Computer Centre (C4I), AI systems will analyze live CCTV feeds to identify suspects, track missing persons, and flag vehicles using automated number-plate recognition.
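The Delhi Police has not published the C4I system's design, but the idea behind automated number-plate recognition is straightforward: detect a plate region in a frame, then read the characters. The sketch below, assuming the open-source OpenCV and Tesseract libraries, illustrates that detect-then-read pipeline in miniature; production systems use far more robust detectors and OCR models.

```python
# Minimal ANPR sketch using OpenCV and Tesseract. This illustrates the
# general detect-then-read pipeline only; it is not the Delhi Police's
# actual C4I stack, which has not been publicly documented.
import cv2
import pytesseract

# Haar cascade for number plates that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_russian_plate_number.xml"
)

frame = cv2.imread("frame.jpg")  # placeholder: one frame from a CCTV feed
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect candidate plate regions in the frame.
plates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in plates:
    roi = gray[y:y + h, x:x + w]
    # OCR the cropped region; --psm 7 treats it as a single line of text.
    text = pytesseract.image_to_string(roi, config="--psm 7").strip()
    if text:
        print(f"candidate plate at ({x},{y}): {text}")
```

In a live C4I-style setup, the same loop would run continuously over video frames and match reads against a hotlist, which is exactly where the profiling concerns discussed below arise.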
The Double-Edged Sword: Efficiency Gains vs. Inherent Risks
For police forces grappling with rising cybercrime and uneven resources, AI's appeal is clear. These systems can process vast amounts of data—call records, CCTV feeds, financial trails—far quicker than humans, spotting patterns and flagging suspects in real time. This promises enhanced efficiency and modernized policing without a massive increase in manpower.
Other government agencies are also testing AI solutions. The Centre for Development of Advanced Computing (C-DAC), under the IT Ministry, has developed deepfake-detection software called 'FakeCheck'. The desktop application, which works without an internet connection, has been provided to select law enforcement agencies for testing. Meanwhile, the Bengaluru police used an AI system during Diwali to monitor live CCTV feeds for firecracker ban violations, addressing over 2,000 incidents.
However, critics warn that AI-driven policing risks amplifying existing biases. Because these systems often rely on historical police data, they can reinforce patterns of over-policing in certain neighborhoods, leading to the unfair targeting of specific communities. The use of facial recognition to identify individuals connected to the 2020 Delhi riots has already raised alarms.
Predictive Policing and the Accountability Challenge
The trend points towards 'predictive policing,' where AI analyzes data to anticipate where crimes might occur or who might be involved. While this allows for proactive deployment of patrols, it has sparked serious debates about wrongful suspicion and increased surveillance.
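To make that concern concrete, the toy simulation below, using entirely invented numbers, shows how grid-based "hotspot" prediction works and why critics call it self-confirming: patrols go where past records are densest, patrols generate new records, and the skew deepens even when the underlying crime rate is uniform.

```python
# Toy sketch of grid-based "hotspot" predictive policing and the feedback
# loop critics warn about. All numbers are invented for illustration; no
# real system or dataset is represented here.
import random
from collections import Counter

random.seed(42)

# Historical recorded incidents per grid cell (hypothetical city grid).
recorded = Counter({"cell_A": 50, "cell_B": 30, "cell_C": 20, "cell_D": 10})

# Assume the true underlying crime rate is uniform across cells; the skew
# above reflects only where police have historically patrolled.
TRUE_RATE = 0.3  # chance a patrol in any cell records an incident

for month in range(12):
    # "Prediction": patrol the cells with the most recorded incidents.
    patrol_cells = [cell for cell, _ in recorded.most_common(2)]
    for cell in patrol_cells:
        # More patrols in a cell means more incidents get recorded there,
        # regardless of the uniform underlying rate.
        if random.random() < TRUE_RATE:
            recorded[cell] += 1

print("Recorded incidents after a year of prediction-driven patrols:")
for cell, count in recorded.most_common():
    print(f"  {cell}: {count}")
# cell_A and cell_B pull further ahead even though every cell has the same
# true rate: the data ends up confirming the model's own deployment choices.
```

This feedback loop is why critics argue for independent audits of training data and deployment effects before such systems scale.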
Core concerns revolve around transparency, accuracy, and data quality. No clear laws govern how AI-driven decisions are made or how they can be challenged. Privacy experts fear that real-time analytics, as planned in Delhi, could allow agencies to build profiles of people at scale. Broad exemptions for law enforcement under India's data protection laws further complicate accountability, creating a landscape where intrusive technologies may expand without robust safeguards.
As India's law enforcement agencies keenly explore generative AI and related technologies, the balance between harnessing their benefits for public safety and protecting citizens' rights remains a critical and unresolved challenge.