The narrative surrounding artificial intelligence in India's healthcare sector is often filled with visions of revolutionary change: lightning-fast diagnoses, medicine tailored to the individual, and services reaching millions. However, a significant national dialogue recently steered the conversation toward more pressing and complex questions of fairness, oversight, and real-world implementation.
Moving Beyond Pilot Projects to Real-World Systems
The inaugural Winter Dialogue on RAISE (Responsible AI for Synergistic Excellence in Healthcare) was held at Ashoka University last week, and it marked a pivotal shift in focus: rather than celebrating potential, experts concentrated on the gaps between technological promise and ground reality. The dialogue was organized by the Koita Centre for Digital Health at Ashoka University (KCDH-A), in partnership with NIMS Jaipur and with WHO SEARO as the technical host. The ICMR-NIRDHS and the Gates Foundation also participated.
This two-day gathering served as an official Pre-Summit Event for the AI Impact Summit 2026. It was the first of four national RAISE dialogues planned across India this month, with this edition centered on the theme of Health AI: Policy and Governance.
A recurring theme was the chasm between what AI can technically do and what institutions are prepared to handle. Dr. Karthik Adapa, Regional Adviser for Digital Health at WHO, highlighted the persistent issue of "pilotitis"—where digital health solutions get stuck in endless pilot phases and never integrate into public healthcare systems. He stressed that frameworks like SALIENT are critical because they push developers to think beyond mere algorithm accuracy and consider long-term integration and evaluation.
The Central Dilemma: Optimization vs. Equitable Outcomes
The tension between creating the most accurate AI model and ensuring it works fairly for everyone was a core debate. In his opening address, Dr. Anurag Agrawal posed a provocative question that resonated throughout the conference: "Would you choose a model with higher average accuracy but poor performance for women, or one with slightly lower accuracy that delivers equitable outcomes for all?"
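The trade-off behind this question can be made concrete with a small numerical sketch. All figures below are invented for illustration, as is the assumption of a 50/50 population split; the point is only that averaging can hide a large gap between groups:

```python
# Hypothetical illustration of the trade-off described above:
# Model A has higher average accuracy but a large gap between groups;
# Model B is slightly less accurate overall but serves both groups equitably.
# Every number here is made up for demonstration purposes.

def overall_accuracy(group_acc: dict, group_share: dict) -> float:
    """Population-weighted average accuracy across demographic groups."""
    return sum(group_acc[g] * group_share[g] for g in group_acc)

def worst_group_accuracy(group_acc: dict) -> float:
    """Accuracy for the worst-served group -- a common equity metric."""
    return min(group_acc.values())

# Assumed 50/50 split between two groups, purely for the sketch.
share = {"men": 0.5, "women": 0.5}

model_a = {"men": 0.96, "women": 0.80}  # higher average, inequitable
model_b = {"men": 0.87, "women": 0.86}  # slightly lower average, equitable

for name, acc in [("A", model_a), ("B", model_b)]:
    print(f"Model {name}: average={overall_accuracy(acc, share):.3f}, "
          f"worst-group={worst_group_accuracy(acc):.3f}")
```

Under these invented numbers, Model A wins on average accuracy while Model B wins on worst-group accuracy; which one is "better" depends entirely on whether the evaluation metric values equity, which is precisely the choice the question forces.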
This inquiry crystallized into a guiding principle for the discussions: "AI for Health, not Healthcare for AI." The subsequent panels revealed how difficult it is to translate this principle into practice. Case studies spanning tuberculosis screening, cancer detection, and maternal health monitoring across various Indian states showcased both promise and peril.
Key challenges identified include:
- Fragile and inconsistent data pipelines.
- Uneven digital infrastructure across regions.
- A landscape of regulatory uncertainty.
- Deep-seated social biases that AI models can unintentionally amplify.
Defining a Cautious and Accountable Path Forward
Discussions on mental health applications urged particular caution. Dr. Prabha Chand noted that large language models are primarily "optimised for user engagement, not for reliable clinical outcomes." Dr. Smruti Joshi emphasized that "mental health judgment cannot be fully automated." The consensus was that the challenge is not whether AI has a role, but how to define that role with extreme care, especially for vulnerable populations.
The need for robust validation and clear accountability was paramount. Dr. Mary-Anne Hartley pointed out that imperfect or biased data inevitably leads to flawed models, a significant risk in a diverse country like India. Panellists agreed that continuous monitoring, active bias mitigation, and human-in-the-loop systems must become standard, non-negotiable components of any health AI deployment.
Reflecting on the ethical imperative, Dr. Anurag Agrawal concluded, "The real test of health AI is not peak accuracy in a lab, but equitable performance in the real world. If AI systems work well on average but fail women or marginalised groups, we have failed our purpose."
Vice-Chancellor Somak Raychaudhury echoed this, stating that building Responsible AI in health requires collaboration, not silos. He affirmed that universities have a crucial role in advancing not just research, but also the intellectual and institutional frameworks needed to ensure AI serves the public good, promotes equity, and builds trust at scale.
As described by Aradhita Baral, the RAISE initiative aims to be a platform for sustained, meaningful dialogue. With upcoming editions at IIT Delhi, Bengaluru, and Hyderabad, India's conversation on AI in healthcare is maturing—moving from speculative hype to the essential homework of building responsible, equitable, and effective systems.