OpenAI's ChatGPT Promoted as Student Aid, But Overuse Sparks Critical Thinking Concerns

OpenAI Presents ChatGPT as a Comprehensive Student Assistant

OpenAI has positioned its ChatGPT platform as an all-purpose educational aide, specifically targeting students with innovative prompts. These include transforming flashcards into engaging stories, converting dense biology notes into lively talk show scripts, and coaching learners on public speaking techniques for classroom presentations. The underlying pitch is straightforward: generative artificial intelligence makes the learning process remarkably effortless.

Student Adoption Turns from Assistance to Dependency

However, students were already ahead of this marketing campaign. In Gurgaon, 14-year-old Sachi Aggarwal began using generative AI platforms such as Claude, Perplexity, and ChatGPT last year to speed up research, brainstorm ideas, and complete homework assignments. Her peers at The Shri Ram School, Aravali, did the same. What started as light assistance quickly evolved into full-scale offloading of academic tasks. "Students weren't using AI as a last resort any more," Sachi observes. "It became their first option."

Juggling coursework, extracurricular activities, tuition classes, exam preparation, Model United Nations conferences, and community service—often pursued to strengthen college applications abroad—many students succumbed to the easy appeal of AI. Sachi admits her frequent use led to a loss of personal thought. "My thought process became robotic, and I was no longer framing answers the way I originally did, using my own creativity and perspective, because I grew accustomed to the AI's way of formulating answers," she explains. The tipping point occurred when her English teacher requested an explanation of an analysis she had submitted; she went blank because the AI had done the thinking for her.

The Debate: AI as Enabler Versus Creativity Killer

Generative AI is promoted as an enabler and pathfinder: a tool that builds on human logic and creativity to inspire new ideas and forms of expression. Critics, by contrast, call it the death knell of critical thinking and creativity.

A recent study from the MIT Media Lab, titled Your Brain on ChatGPT, found that participants who relied on ChatGPT for essay writing consistently underperformed, neurologically, linguistically, and behaviorally, against those who used only Google search or no tools at all; the tool-free group performed best. Although the study was criticized for its small sample size (54 participants aged 18 to 39) and lack of peer review, lead author Nataliya Kosmyna told Time that the findings were released early because of concerns over the rapid, largely unevaluated rollout of large language models and their swift mainstream adoption. She warned that long-term brain development could be adversely affected, especially in young people, who are the primary adopters of generative AI applications.

Neurological and Cognitive Implications for Youth

Over half of ChatGPT users in India are under 24, primarily utilizing the app for studying. Neurologist Dr. Sid Warrier notes that this technology can have dual effects. On the positive side, the Mumbai-based doctor states: "Someone growing up with AI may have a better ability to harness it, rather than if they were exposed to it later in life." On the negative side: "If (at an early age) you outsource your core critical thinking skills, why would your brain bother to develop those skills at all? For instance, we stopped remembering phone numbers when we had phones to store them."

This phenomenon of 'cognitive offloading' (delegating mental work to notes or devices) can disrupt daily life by fostering a growing inability to focus, analyze, reason, reflect, and question. "That is the real danger, because we're talking about our core purpose as human beings," Dr. Warrier emphasizes. Young people develop these skills gradually through learning and experience. "And if those cognitive stimuli are unidimensional or restrictive, it could affect brain development, impacting, among other things, creativity and memory," explains Dr. Rajesh Sagar, professor of psychiatry at AIIMS, Delhi. "The brains of young people are neuroplastic; they're still developing. What they do or see during the developing years will affect their neuroplasticity, and its implications will be felt later in adulthood. Adults, on the other hand, have already reached developmental maturity." Yet today's adults were yesterday's teens, raised on multiple browser tabs and already acquainted with shrinking attention spans, even if they engage with LLMs less than contemporary teens do.

Lessons from Social Media's Impact

While mainstream use of LLMs is relatively new, social media provides a precedent, as its effects on young people have been extensively researched. "We have not yet reached the stage where cognitive decline is evident in society because there's usually a lag between the availability of a service and its effects," Dr. Warrier clarifies. "But we can reasonably predict the direction we're headed as a society."

A 2024 study published in Nature on the long-term impact of digital media on brain development in children found that "social media users often contend with constant distractions, which can significantly impact their behaviour, leading to inattention symptoms. Additionally, these users can become easily diverted from tasks like reading or homework, etc."

Social media incentivizes quick, superficial interactions with bite-sized content designed for skimming rather than deep engagement, and it triggers and rewards immediate emotional reactions over reflection and analysis. Predictably, attention deficit is a major concern reported by psychologists treating tech-addicted children: it impairs memory and can lead to learning difficulties. Cognitive skills and mental health are symbiotic, each influencing the other. "Emotions are the brain's instinctive response to stress. Cognitive skills are what help us deal with our emotions effectively," says Dr. Warrier. "Without these skills, emotions overrule rationality and lead to poor decision-making, which can in turn impact mental health."

AI's Integration into Educational Systems

Artificial intelligence is already making inroads into classrooms. In August, OpenAI launched its Learning Accelerator in India "to bring advanced AI to India's educators and learners nationwide through AI research, training, and deployment." The initiative plans to distribute 500,000 ChatGPT licenses to students and educators, including those in government schools.

Some institutions are significantly ahead. Indus International, an IB school with campuses in Pune, Bengaluru, and Hyderabad, introduced an AI-driven humanoid as a supplementary teacher in 2019. "This humanoid helps deliver content and handles routine instructional tasks, allowing teachers to focus on more meaningful and emotionally supportive interactions with their students... cultivating ethics, building character, and developing entrepreneurial competencies," wrote Akshaya KB, head of curriculum, in an email. Their Collaborative Learning Model began with pre-programmed lessons, advanced to semi-programmed lessons with basic generative AI, and now operates with full conversational generative AI. The school reports a 15% annual improvement in average student performance.

Developing Protocols and Addressing Equity Concerns

As LLMs evolve, efforts to guide their use are intensifying. ChatGPT-5's Study Mode encourages learning by guiding students step-by-step through answers instead of providing complete solutions. While benefits like personalized learning, clearer explanations, and tailored feedback are undeniable, risks regarding ethical use and privacy persist without robust protocols.

Several countries have developed AI frameworks. For example, the European Commission and OECD drafted an AI Literacy Framework that "ensures students know how to evaluate, question, and apply AI responsibly in their academic lives."

In India, equitable access poses an additional challenge. Osama Manzar, founder of the Digital Empowerment Foundation, warns that intense focus on AI and digital tools in education could marginalize students in areas lacking proper digital infrastructure. "While the National Education Policy emphasises inclusivity, the emphasis on digital education could undermine the quality of teaching in schools, especially if teachers are not adequately trained to use digital tools effectively. This can also lead to a one-size-fits-all approach, ignoring local needs and educational contexts."

Raju Kendre, founder of Eklavya Foundation—a nonprofit empowering students from marginalized communities to access higher education—highlights another issue: "Current LLMs often carry the biases of their makers who come from mainstream, urban high socioeconomic backgrounds. Without diversity, elite perspectives risk perpetuating stereotypes," he cautions. This can affect how marginalized communities perceive themselves and their cultures through AI's monocular lens. He advocates for India to build its own AI models, with government policy and oversight to correct biases and ensure AI serves everyone equitably.

Students Negotiate Personal Boundaries

While policy, school, and psychology circles debate access, accountability, and cognitive trade-offs, students are setting their own limits. After her analysis debacle, Sachi now restricts AI use to studying and avoids it for assignments. "I had ChatGPT use the Feynman technique to help me learn about India's Election Commission," she shares. Better time management has been crucial. "I write down my priorities, and I approach school assignments with a different mindset. Earlier, my attitude was: 'I'm not going to look at this assignment in 10 years, so why bother?' Now I know it all adds up."

Guidelines for Schools and Parents

For Schools:

  1. Insist on a disclosure or process note for any substantial AI assistance. Ultimately, a student's work must remain authentic.
  2. Teach students how to acknowledge AI properly and establish clear dos and don'ts for use cases in subject guides and task sheets.
  3. Design assessments for integrity by prioritizing in-class writing and hands-on STEAM tasks that affirm conceptual understanding.
  4. Support ongoing teacher development on generative AI pedagogy and academic integrity norms.

For Parents:

  1. Create a family AI agreement defining tools, time, extent of use, and methods for acknowledging AI assistance.
  2. Review the child's process notes on AI usage more than the final product; praise authentic effort and revision.
  3. Focus on building relationships and experiences, watch for emotional outsourcing to chatbots, and route wellbeing concerns to teachers or counselors.

(Compiled from Indus International's AI policy recommendations)