India's AI Leadership Test: Deployment Governance Over Frontier Models
As momentum builds around the upcoming AI Impact Summit, the mainstream conversation often misses a critical point: while the West runs a frontier-model race and debates systemic risks, India's AI advantage is frequently framed in purely "applied AI for development" terms. That contrast is overdrawn, and it obscures the implications of deployment-led AI for policy, markets, and India's distinct constraints.
Beyond the Frontier: India's Frugal AI Pathway
India is actively investing in sovereign compute, foundational-model capabilities, and research ecosystems. However, even if the country trains larger models, its political economy and social context mean that impact will be determined downstream in the AI stack—specifically in procurement, integration, evaluation, and governance. This becomes crucial once AI begins to influence entitlements and market access.
The Economic Survey 2025-26 officially charts a "frugal, application-focused" pathway, moving away from chasing the frontier at prohibitive fiscal costs. This approach presents a scalable market opportunity rooted in bottom-up, small-scale, sectoral adoption rather than prestige competition. For instance, the Budget's Bharat-VISTAAR announcement proposes a multilingual AI tool for farmers that integrates AgriStack portals with ICAR's agricultural practices, prioritizing usefulness over benchmarks.
In e-commerce, companies like Meesho are scaling AI-powered chat and vernacular voice agents to support first-time online shoppers, including those in rural and non-English contexts. Here, AI serves as workflow infrastructure for customer discovery, conversion, and support. Yet, frugal does not mean low stakes; the stakes are high precisely because deployment and societal diffusion affect rights, entitlements, and livelihoods.
Deployment Risks: A Closer Look at Governance Challenges
In the US and Europe, AI governance debates often center on frontier model safety frameworks, compute concentration, and rules for a small set of general-purpose model providers. India will face many of the same underlying risks—such as opacity, misuse, security breaches, privacy leakage, and weak redressal mechanisms—but these will manifest through deployment embedded across welfare, education, hiring, lending, healthcare, and compliance.
Risk categories do not split neatly into "frontier risks" versus "application risks"; many cut across the layers of the AI stack. A frontier model's unreliability, for example, can produce wrongful exclusion in a welfare system, while opacity at the lab level can leave a citizen with no meaningful basis on which to appeal a decision.
Real-World Use Cases Highlighting Urgent Issues
Deployment use-cases underscore why these issues matter. Telangana's AI-led welfare de-duplication exercise through the Samagra Vedika Programme reportedly cut off subsidized food support for thousands due to faulty proxies and data errors. Meanwhile, the Comptroller and Auditor General's AI-based audits have detected large numbers of fraudulent cases in state beneficiary schemes, enhancing governmental efficiency but also highlighting the rapid deployment of algorithmic tools in high-stakes governance.
A DPI-style approach to AI infrastructure, as advocated in a December 2025 white paper by India's Principal Scientific Advisor's Office, could democratize deployment through shared portals for data, models, and integration. However, this also changes risk and harm dynamics, necessitating careful attention to three key risks:
- Procurement Capture and Vendor Lock-in: In a deployment-first economy, procurement contracts that lack mandates for auditability, portability, and interoperability create vendor-dependency risks with ever-rising switching costs. In a DPI-like ecosystem, lock-in can extend beyond application vendors to marketplace operators or approved integration layers.
- Interpretability Risk and Black Boxes in High-Stakes Decisions: During the Special Intensive Revision in West Bengal, draft electoral rolls carried large-scale deletions and flags marking voters as "dead" or "missing," with discrepancies disclosed only after judicial guidance. The episode shows how verification labels that cannot be explained become de facto administrative exclusion. The problem extends beyond model bias to unreasoned authority, and it compounds in DPI-style architectures where multiple schemes rely on shared models.
- Privacy Risk: Applied AI expands data use through access to high-quality metadata, database linkages, longer data retention, and profiling. This can improve fraud detection, but without purpose limitation, data-minimization defaults, access controls, and retention limits, interoperability enables data repurposing at scale, with ratchet effects that are hard to reverse.
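Purpose limitation, the first of the safeguards named above, can be made concrete with a toy sketch: release a data field only if the requester's stated purpose matches the purpose for which the field was collected. Everything here (the registry, field names, and purposes) is hypothetical and illustrative, not drawn from any actual DPI or government system.

```python
from dataclasses import dataclass

# Hypothetical registry mapping each stored field to the purposes
# for which it was originally collected (purpose limitation).
PERMITTED_PURPOSES = {
    "ration_card_id": {"subsidy_eligibility"},
    "land_records": {"subsidy_eligibility", "agri_advisory"},
    "call_metadata": set(),  # collected, but approved for no secondary use
}

@dataclass
class AccessRequest:
    requester: str
    purpose: str
    fields: list

def filter_request(req: AccessRequest):
    """Return (granted, denied): a field is released only if the stated
    purpose matches a purpose it was collected for."""
    granted = [f for f in req.fields
               if req.purpose in PERMITTED_PURPOSES.get(f, set())]
    denied = [f for f in req.fields if f not in granted]
    return granted, denied

# A fraud-scoring service asking for more than it needs receives only
# the fields whose collection purpose matches; the rest are denied.
granted, denied = filter_request(
    AccessRequest("fraud_scoring", "subsidy_eligibility",
                  ["ration_card_id", "land_records", "call_metadata"])
)
```

The point of the sketch is the default: interoperability without such a gate means every linked database is implicitly available for every new purpose.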
Building a Robust Evaluation and Accountability Framework
A deployment-first strategy demands a robust evaluation infrastructure—the institutional ability to test, audit, and monitor systems across languages and contexts. It also requires sharper attention to the application layer, where harms are produced. This entails defining intended purposes, mapping affected stakeholders, specifying harm categories upfront, and ensuring human fallback channels.
AI deployments should be treated as lifecycle systems, not one-off pilots. Contestability must feature in product choices, as a system can be "accurate" on aggregate yet illegitimate if it offers no mechanisms to challenge outcomes. Essential elements include clear notice-and-consent banners, accessible grievance mechanisms, audit-trails for human review, and escalation routes beyond vendor helpdesks.
With DPI-style deployment, roll-out acceleration must be complemented by standardized and federated accountability. This involves embedding decision logs, incident reporting, continuous monitoring for model drift, and appeal mechanisms as platform capabilities that travel with the user, not the vendor.
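A decision log with an attached appeal route, as described above, can be sketched in a few lines. This is a toy illustration under stated assumptions: the class, field names, and the "dedup-v2.3" version string are all hypothetical, not taken from any DPI specification or deployed system.

```python
import time

# Hypothetical append-only decision log: every automated decision that
# affects an entitlement is recorded with enough context to be contested.
class DecisionLog:
    def __init__(self):
        self._entries = []

    def record(self, beneficiary_id, scheme, decision,
               model_version, reason):
        entry = {
            "ts": time.time(),
            "beneficiary_id": beneficiary_id,
            "scheme": scheme,
            "decision": decision,            # e.g. "excluded", "approved"
            "model_version": model_version,  # needed to audit or reproduce
            "reason": reason,                # human-readable ground for appeal
            "appeal": None,
        }
        self._entries.append(entry)
        return len(self._entries) - 1        # log index doubles as a receipt

    def file_appeal(self, receipt, grounds):
        # The appeal attaches to the original decision, so a human reviewer
        # sees the model version and the stated reason that produced it.
        self._entries[receipt]["appeal"] = {"grounds": grounds,
                                            "status": "pending"}
        return self._entries[receipt]

log = DecisionLog()
receipt = log.record("BEN-001", "PDS", "excluded", "dedup-v2.3",
                     "flagged as duplicate household")
appealed = log.file_appeal(receipt,
                           "single household; duplicate flag is a data error")
```

The design choice to return a receipt at decision time is what makes the capability "travel with the user": the citizen holds a reference into the log, independent of which vendor operates the model.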
Market Implications and the Path Forward
A deployment-first AI economy shifts value away from owning the best model and toward implementation: integration, data pipelines, compliance, and trust. Infosys, for example, has released an open-source Responsible AI Toolkit that packages capabilities such as privacy, explainability, fairness, and hallucination detection as reusable APIs. These early signals suggest that durable advantage will come not from model access alone but from auditability and trustworthiness.
India does not need to chase the frontier to lead; frontier capability without robust deployment governance is merely latent power. If "impact" is the goal, then privacy, explainability, and contestability must be baked in ex ante, not bolted on as ex post patchwork.

The writer works with the Center for Security and Emerging Technology (CSET) in Washington DC on global AI governance research.
