17 February 2026


The dangerous convenience of a therapist in your pocket

AI chatbots have become our free therapists: always awake, always available, always affirming. But what if that digital confidant doesn’t deepen our thinking – but actually narrows it?

They create workout schedules, produce a tasty recipe from the four unrelated ingredients in your cupboard, guide you past the tourist traps to the best eateries in a new city, and, it turns out, give life advice. The convenience of a well-informed conversational partner in our pocket has seeped into the domain of mental healthcare. And that’s perhaps not surprising: unlike a human therapist, an AI chatbot is free, easy to approach, immediately accessible and available at any hour of the day or night. You don’t need to fight for a GP referral, end up on a waiting list, or start a long search for a therapist you ‘click’ with.

Figures from international market research agency Kantar show that worldwide, slightly more than half of AI users turn to chatbots for emotional advice. Especially in moments of loneliness, indecision or emotional overload, people appear to grab their phone. The 10,000 participants in the study said, for instance, that they turned to AI to avoid burdening others with their suffering, to gain a neutral perspective, and to vent without the risk of being judged. 

Chatting along nicely 

But that is precisely where the danger lies, says VU clinical psychology researcher Tara Donker. Since chatbots are programmed to be empathic and non-judgemental, people dare to share things they would not tell friends or a therapist. “That’s how an emotional bond forms. But because chatbots mostly focus on giving affirmation, a confirmation bias also emerges. They chat along with you instead of asking follow-up questions.” 

Mark Hoogendoorn, AI expert at the Department of Computer Science and active within the AI & Health Center at VU, also sees that chatbots fall short in their responses. “The software is mainly designed to ensure you have a pleasant chat experience and return as a user.” And that leads to undesirable situations, according to Donker. “In therapy, having a devil’s advocate is very important. By asking follow-up questions and offering alternative explanations or thoughts, blind spots become visible. Especially with anxiety, it is crucial that the client’s thinking patterns are challenged. If a chatbot starts affirming your fears, it has the opposite effect.” 

Several people took their own lives after speaking with a chatbot 

As a result, mental health care professionals have identified something they call ‘chatbot psychosis’: situations in which people enter a delusional state after intensive contact with a chatbot. They receive diagnoses, become isolated from the people around them, and get advice whose basis is unclear. “And whereas a therapist calls the crisis team if a client is heading towards a dangerous situation, a chatbot is mostly focused on simply providing information,” says Donker.

Busy lines at 113 

How dangerous it can be to entrust your deepest emotional suffering to a chatbot has become painfully clear through several cases of people who took their own lives after speaking with one. OpenAI, the company behind ChatGPT, has itself determined that 0.15 per cent of the 800 million weekly active ChatGPT users talk to the chatbot about “potential suicidal plans or intentions”. That’s 1.2 million people.

In the aftermath, OpenAI felt compelled to add extra safety features in a software update. For prompts containing certain “trigger words”, users would then receive not just an answer but also – or only – a recommendation for psychological help. In the Netherlands this includes, for example, the 113 Suicide Prevention hotline. But fine-tuning those built-in safety triggers is far from easy. Hoogendoorn: “In GPT-4 there was too little advice to call 113, but in GPT-5 it is recommended far too often. Many more calls now end up at 113 from people who are not thinking about suicide at all.”

Free misinformation 

A major question also remains: where exactly do chatbots get their information? According to Hoogendoorn, that is difficult to determine. “For most chatbots there is extremely limited insight into what data is being used.” What we do know: chatbots generally cannot bypass paywalls. The software collects information at high speed, but often cannot access research articles or scientific papers that require a (free) subscription. Hoogendoorn: “So information can also come from forums, religious websites, blogs from just about anyone saying: I’ve found a way out of my depression.” In effect, that’s not much different from the content we encounter on social media, says Donker – where harmful misinformation is also widely spread.

Stored data 

And what about the security of all the input we disclose to our chatbots? A human therapist is bound by professional confidentiality. Those who pay for AI software can often opt out of having their data used to train the algorithm. “But the data is still stored, probably somewhere in the United States. And it’s not unthinkable that at some point the government will say: we want that data. I doubt that a company like OpenAI is strong enough to refuse,” says Hoogendoorn. 

Who bears responsibility for all these risks? So far, it mainly seems to be the users themselves – except for the handful of successful lawsuits brought against the tech giants behind AI software. “Companies have little motivation to reveal how their software draws on information. As long as they are not held accountable, they don’t need to offer that transparency. Clear legal rules that AI has to comply with need to be established. And experts such as healthcare professionals should really be involved much earlier in the development of this software. Without clinical expertise you cannot determine what is safe or usable.”

Teaching empathy 

We must not blindly follow a chatbot’s output, but Donker does believe chatbots can help people structure their thoughts. “I hear from psychologists around me that their clients use them, and that some things really weren’t correct. Therapists often say: bring it in, and we’ll look at it together during the therapy session.” 

‘In therapy, having a devil’s advocate is very important’ 

Beyond the risks of unsupervised individual AI use, both Hoogendoorn and Donker also see opportunities for AI to take over or supplement administrative tasks in (mental) healthcare – for example summarising a conversation with a client, drafting letters, or explaining certain treatments. And clinical researcher Donker is currently working on the app ZeroOCD, in which people with contamination anxiety are gradually exposed, via augmented reality, to dirt, harmful substances and diseases as a form of exposure therapy. “This is how you gradually learn: this does not pose an immediate danger. Your brain then forms new connections.” 

Donker does not expect therapists to lose their jobs to advancing AI. Rather, she envisions a hybrid form in which AI supports human therapists. “I think psychologists could also learn a thing or two from chatbots when it comes to showing empathy to clients.” Hoogendoorn, too, believes human therapists will not be replaced – and 70 per cent of respondents in the Kantar study still preferred a human conversation partner when discussing an emotional topic. In the end, we may simply grow tired of someone who always agrees with us.

 

 

 

