NOTE: This story contains references to suicide and self-harm.
"When you see the limits of a chatbot, you start to think," says Mandi. "You think that maybe a chatbot is not God."
Mandi is a 24-year-old from South Africa. She’s lived with attention-deficit/hyperactivity disorder (ADHD) since she was a teenager.
Mandi struggles with executive dysfunction, a symptom of ADHD that makes it difficult to plan, organize and prioritize tasks.
She started using chatbots driven by artificial intelligence (AI) just over two years ago. Her aim was to manage her executive dysfunction, but she says some of the responses were not helpful at all.
Mandi, a 24-year-old from South Africa, has used AI chatbots to help manage symptoms of ADHD, but says the experience revealed the limits of relying on artificial intelligence for emotional support. /Photo supplied by Mandi
"I started experimenting and as I changed my tone of voice, the chatbot changed its tone," says Mandi. “Then I started saying things I don't even believe. And as I said these things, the chatbot agreed with me. And that really opened my eyes."
Mandi’s experience is not unique. Tech firms OpenAI and Character.AI have faced legal action after several American teenagers took their own lives while using chatbots. The lawsuits claim the bots engaged users in ways that encouraged harmful behavior.
"These chatbots can latch on to negative emotions," says Andrew McStay, author of Automating Empathy and a professor at Bangor University. "They exploit these emotions to get people to keep using them."
Aryan, a Canadian student living with obsessive-compulsive disorder, says his experience using AI chatbots for mental health support worsened his symptoms and led him to abandon the technology. /Photo supplied by Aryan
The bot obsession
Canadian student Aryan learned that lesson the hard way. Aryan has obsessive-compulsive disorder, or OCD. He hoped chatbots would reduce his compulsions, but he says they amplified them.
"It's very difficult, because when a person with OCD seeks help from these bots, that person is just going to endlessly vent about their problems. And that isn't necessarily helpful for someone with obsessive or compulsive thoughts."
Some tech companies are promising action. OpenAI, the firm behind ChatGPT, one of the world's most popular chatbots, released a statement in August saying it would improve its products by drawing on the knowledge of 100 mental health experts across 30 countries.
It claims newer models such as GPT-5 reduce inappropriate or 'non-ideal' responses by 25 percent. The models are also designed to refer users to suicide and crisis hotlines.
The firm has also introduced safety controls that allow parents to disable certain functions on their children's accounts. The controls can go a step further, sending notifications to parents when a user is in 'acute distress'.
Social media giant Meta released a similar statement in late 2025, saying it would direct users to information provided by mental health experts.
Getting it wrong
Experts say companies are now playing catch-up because the bots were never designed to provide therapy.
"A lot of people think of these bots as tools that are good at everything," says Nick Haber, an assistant professor at Stanford University. "But these problems show the need for really precise performance, especially when they’re being used for therapy."
ChatGPT and other AI chatbots are increasingly promoted as mental health tools, even as experts warn their capabilities — and limitations — are still being tested. /CFP
Haber was part of a research team that tested a range of chatbots in early 2025. They wanted to know how AI responded to users suffering from delusions.
Those tests showed that the bots provided inappropriate answers 55 percent of the time, with some systems missing crucial signs that users were not thinking rationally.
"Therapists need to push back in certain circumstances," says Haber. "They need to consider the long-term welfare of the individual and challenge delusional thinking. The chatbots weren't always doing that in these test situations."
Learning from the experts
But there is also evidence that chatbots can improve mental health in some cases.
Scientists at Dartmouth College in the U.S. created TheraBot, which was trained under the guidance of psychologists and psychiatrists.
The TheraBot clinical trial ran for eight weeks, and the data paints a compelling picture: symptoms fell by an average of 51 percent among participants with depression. Those with anxiety also improved, with symptoms falling by nearly a third on average.
Prof. McStay says 'therapy bots' can be a force for good, as long as they’re trained on the principles of psychology.
"I can't see the harm on the basis, and on the grounds, that these things are trained and developed for a specific purpose and have specific guardrails or protections in place."
'Start with the problem'
Mandi chose to discuss her AI use with her professional support network, which includes a human therapist. She continues to use chatbots to help plan tasks, but only as one part of that broader system.
Sam Altman, CEO of OpenAI, leads one of the world’s most influential AI companies as its chatbot technology becomes increasingly involved in sensitive areas such as mental health support. /CFP
"I use it to hold myself accountable - not for emotional stuff," she says. "And if I did not have a therapist, I don't know where I would be now. I would not have anyone to refer to."
Aryan says he’ll never go back to using bots for his mental health. He thinks AI is sometimes more fashionable than functional.
"Sure, AI can be used for some things," he says. "But we need to start with the problem, and work from there - instead of taking this tool and saying, 'OK, how can we apply this to everything?' Because we're just going to come up with solutions that aren't very helpful."
The human touch
Experts say chatbots look set to change the world - whether for better or worse is not yet clear. But the rapid growth of 'chatbot therapy' suggests tech firms will have to think on their feet.
ChatGPT was released in late 2022. Hundreds of other chatbots have emerged since then, and research suggests millions of young people are now using them to manage their mental health.
As AI chatbots are increasingly used for mental health support, experts say they can help in limited cases but still fall short of replacing human therapists and may pose risks without proper safeguards. /VCG
A 2024 YouGov survey, for example, shows that 55 percent of young Americans (aged 18 to 29) feel comfortable discussing mental health with a confidential chatbot.
There is a lot of hype about AI, and sometimes it seems as if it has all the answers. For now, though, it’s clear that some human problems still need human solutions.
Editor’s note: CGTN Europe approached OpenAI and Meta for comment. Neither responded at the time of publication.