Microsoft AI CEO Reveals: Users Turn to Chatbots for Breakups, Therapy
AI Chatbots Become Personal Confidants, Says Microsoft AI CEO

Artificial intelligence chatbots are increasingly becoming digital confidants for users grappling with personal and emotional issues. Mustafa Suleyman, CEO of Microsoft AI, has shed light on the phenomenon, noting that people are turning to AI for support during breakups, family disagreements, and other private struggles.

AI as a Non-Judgmental Companion

Speaking on the "Breakdown" podcast hosted by Mayim Bialik, Suleyman explained that companionship and emotional support have emerged as some of the most popular uses for AI tools. He was quick to clarify that this interaction is not a substitute for professional therapy. However, the fundamental design of these AI models makes them uniquely suited for such conversations.

"Because these models were designed to be nonjudgmental, nondirectional, and with nonviolent communication as their primary method... it turned out to be something that the world needs," Suleyman stated. He emphasized qualities like reflective listening, empathy, and respect built into the systems.

The Microsoft executive framed this use as a potential force for good. He suggested that chatbots offer a "way to spread kindness and love and to detoxify ourselves," allowing individuals to engage more positively with loved ones in the real world. A key appeal, according to Suleyman, is the provision of a private, embarrassment-free space where users can ask sensitive questions repeatedly.

Tech Leaders Divided on AI's Therapeutic Role

Suleyman is not alone in seeing AI's potential in the realm of personal support. In an interview from May 2025, Meta CEO Mark Zuckerberg expressed a similar vision, telling the Stratechery newsletter, "For people who don't have a person who's a therapist, I think everyone will have an AI."

However, the trend has its prominent skeptics. OpenAI CEO Sam Altman has publicly voiced his concerns. In August, he posted on X (formerly Twitter) about his unease with the prospect of people relying heavily on ChatGPT for major life decisions. "Although that could be great, it makes me uneasy," he wrote.

Altman also highlighted potential legal pitfalls during a July appearance on "This Past Weekend with Theo Von." He pointed out that in a lawsuit, companies like OpenAI could be compelled to disclose users' intimate, therapy-style conversations.

The Risks and Professional Warnings

Even Suleyman acknowledged significant downsides to this growing reliance. He admitted there is "definitely a dependency risk" and that chatbots can sometimes be overly flattering or even "sycophantic," potentially providing unbalanced feedback.

Mental health professionals have echoed these warnings. In March, two therapists speaking to Business Insider cautioned that using AI chatbots for emotional support could exacerbate feelings of loneliness and create an unhealthy cycle of seeking reassurance from a machine rather than fostering human connections.

The debate underscores a pivotal moment in AI adoption. As these tools become more sophisticated and accessible, their role is expanding from mere information providers to simulated companions, raising profound questions about ethics, mental health, and the future of human interaction.