For three years, chatbots have dominated how people interact with artificial intelligence. The magical experience of typing anything and getting personalized responses made them the face of generative AI. However, this conversational approach is now facing serious challenges as companies question whether it's the best way to harness large language models.
The Safety Concerns Driving Change
Major AI companies are discovering that even with safety guardrails in place, users can often 'jailbreak' chatbots and push them in harmful or unpredictable directions. This has led to growing concerns about liability and loss of control, prompting a strategic shift away from open-ended conversational interfaces.
Character.ai, one of the most popular consumer AI apps with roughly 20 million monthly active users, is taking dramatic steps to address these concerns. This month, the platform banned users under 18 from having conversations with its chatbots, following complaints about psychological harm and dependency among young users.
Chief Executive Officer Karandeep Anand explained that "there is probably not enough tech or research to keep the under-18 companionship experience safe over a very long period of time." The company is pivoting toward becoming an entertainment platform with a more cautious approach.
Practical Alternatives Emerging
Instead of open-ended conversations that could lead anywhere, companies are adopting more controlled interfaces. Character.ai is changing its interface for teens to include less typing and more structured interactions. Similarly, Vitality Health, a unit of South African insurance group Discovery, has partnered with Google to use its Gemini model, but constrains the technology to processing language behind the scenes.
The Vitality app now mostly shows text, buttons, and clickable options rather than open conversation. For instance, it might display a small box encouraging users to take 2,500 steps to earn reward points. This approach allows Gemini to help Vitality 'talk' to customers without drawing them into unpredictable conversations.
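The article doesn't detail how Vitality wires this up, but the general pattern it describes is straightforward: the model processes language behind the scenes while everything the user actually sees is pre-written. Below is a minimal, hypothetical Python sketch of that pattern; the intent labels, response templates, and the classify_intent stub are all invented for illustration, not taken from Vitality or Gemini.

```python
# Hypothetical sketch of a constrained LLM interface: the model only
# classifies free-form input into a fixed set of intents, and every
# message the user sees comes from a pre-approved template, never
# from the model's raw output.

APPROVED_RESPONSES = {
    "activity": {
        "text": "Take 2,500 steps today to earn reward points.",
        "buttons": ["Track my steps", "Remind me later"],
    },
    "rewards": {
        "text": "Here is your current points balance.",
        "buttons": ["View balance", "Redeem points"],
    },
    "other": {
        "text": "Here are some things I can help with.",
        "buttons": ["Activity goals", "Rewards", "Contact support"],
    },
}


def classify_intent(user_text: str) -> str:
    """Stand-in for a real LLM call. A deployed version would wrap
    user_text in a classification prompt, send it to a hosted model,
    and parse the returned label; this keyword stub keeps the sketch
    self-contained and runnable."""
    text = user_text.lower()
    if "step" in text or "walk" in text:
        return "activity"
    if "point" in text or "reward" in text:
        return "rewards"
    return "other"


def handle_user_input(user_text: str) -> dict:
    # The model's output is treated as a routing label, never shown
    # to the user; off-menu labels fall back to a safe default.
    label = classify_intent(user_text).strip().lower()
    return APPROVED_RESPONSES.get(label, APPROVED_RESPONSES["other"])


if __name__ == "__main__":
    reply = handle_user_input("How many steps do I need to walk today?")
    print(reply["text"])     # pre-written copy
    print(reply["buttons"])  # clickable options, not free text
```

The design choice worth noting is that the model's output never reaches the screen: it only selects among responses a human has already written and reviewed, which is what allows a health product to use an LLM without opening an unpredictable conversation.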
Emile Stipp, managing director of Vitality AI, emphasized that "conversational AI simply introduces too much risk and unpredictability" in healthcare contexts, where precision and safety are paramount.
Broader Industry Implications
This shift represents a fundamental rethinking of how people should interact with AI systems. While chatbots offered engaging, human-like conversations, their unpredictability and safety risks are pushing companies toward more controlled, purpose-specific interfaces.
Constraint is breeding innovation, with companies discovering that buttons, prompts, and structured options can create safer, more focused products. Even Elon Musk's Grok platform features template suggestions in its image generation tool, providing handy prompts for users with limited technical knowledge.
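Template suggestions of this kind are, mechanically, just a curated list the interface surfaces in place of a blank prompt box. A tiny hypothetical sketch follows, with templates invented for the example rather than taken from Grok:

```python
# Hypothetical prompt templates an image-generation UI might offer
# as one-tap suggestions; the list and wording are invented here.
PROMPT_TEMPLATES = [
    "A watercolor painting of {subject} at sunset",
    "A retro poster of {subject} in bold colors",
    "A photorealistic close-up of {subject}",
]


def suggest_prompts(subject: str) -> list[str]:
    """Fill each template with the user's subject, so they choose a
    finished prompt instead of writing one from scratch."""
    return [t.format(subject=subject) for t in PROMPT_TEMPLATES]


print(suggest_prompts("a lighthouse"))
```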
The changes haven't been without consequences. Character.ai saw user numbers drop from their peak of 26 million when the company banned chatbot conversations involving sexual content or self-harm earlier this year. However, usage has gradually recovered, suggesting that safety-focused approaches can maintain user engagement.
As the AI industry matures, more businesses may prefer having greater control and insight into their digital services rather than embracing the uncanny fluency of chatbots. The future of AI interaction might involve less talking to machines and more clicking on carefully designed options that keep users safe while delivering valuable experiences.