As Artificial Intelligence (AI) becomes a staple in daily life, a leading expert has issued crucial guidance for its safe and effective use in the coming year. Dr Shruti Patil, Director of the Symbiosis Artificial Intelligence Institute, emphasized that while AI is a powerful assistant, it must not be seen as a replacement for professional expertise, especially in critical fields like healthcare.
Navigating the AI Landscape: Assistant, Not Replacement
In a recent interview, Dr Patil clarified the practical roles AI can play. She highlighted its strength in automating repetitive, research-intensive tasks. "If you want to travel somewhere and want to create an itinerary within a particular budget... All these things can be done in just two minutes using ChatGPT," she explained. Similarly, AI excels at content generation, such as quickly designing event invitations with tools like Gemini or NotebookLM.
However, she drew a firm line on its limitations. "When you visit a doctor, and if you want to better understand what they said, you can make use of AI tools for an explanation. But you cannot replace your doctor with AI," Dr Patil stated. This principle extends to other sensitive areas, underscoring that AI should augment human judgment, not substitute for it.
Guarding Privacy and Combating Hallucinations
A major concern with widespread AI adoption is data security. Dr Patil advised users to be extremely cautious about the information they share. Sensitive personal details, financial data, passwords, or anything that could reveal one's identity should never be disclosed to general AI platforms. These large language models, trained on vast global datasets, are not secure vaults for private information.
Another significant risk is AI hallucination, where tools generate plausible but incorrect information. Dr Patil advised against relying on free AI models for critical office work, noting that paid versions offer more reliability for complex tasks. "For single-page results, AI tools work well. If a PDF has hundreds of pages, then AI hallucinates," she pointed out. Because the consistency of outcomes is still evolving, independent verification remains essential. "Yes, of course," she affirmed when asked whether cross-checking AI results is necessary for important tasks.
The Call for Robust AI Policy in India
Beyond individual responsibility, Dr Patil stressed the urgent need for systemic safeguards. She addressed the malicious use of generative AI, such as editing faces onto photos to target women online. "More than users, it is important for a country to come up with an AI policy, which should be enforced by every AI service-providing company," she asserted.
She called for India to establish a strong policy framework with clear guardrails. This policy must explicitly define what user data can be used for training and what is strictly off-limits, ensuring robust privacy protection. "The government has to put up guardrails, specifying which kind of data is allowed to be shared and which kind of data is simply banned," Dr Patil concluded, highlighting the collective effort required from policymakers, companies, and users to harness AI's potential safely in 2026 and beyond.