Technology

AI use in psychotherapy creates new opportunities, but also real risks


Artificial intelligence is already used in psychotherapy in the form of therapeutic chatbots. This offers many opportunities, but also creates risks, including the provision of harmful advice or the patient's emotional dependence on the digital therapist, according to Marcin Rządeczka, an expert from the Maria Curie-Skłodowska University.

'Therapeutic chatbots using large language models are gaining popularity as potential tools supporting the psychotherapy process, or as digital mental health assistants', says Marcin Rządeczka, PhD, head of the Multimodality Research Lab at the Institute of Philosophy of the Maria Curie-Skłodowska University in Lublin. Therapeutic chatbots are often attractive to people who seek help and do not have access to traditional therapy due to geographical, economic or health exclusion.

Even the most popular therapeutic chatbots have only a few million users, a fraction of the world's population. However, general-purpose large language models, whose users number in the hundreds of millions, are also often asked for advice on mental health. 'Although they are not dedicated therapeutic tools, they do provide such advice, preceded by a short warning that is supposed to release the manufacturer from legal liability', explains the researcher, who specialises, among other things, in AI applications in mental health and computational psychiatry.

According to the expert, therapeutic chatbots offer many possibilities, but they are also associated with many doubts about their ethics, safety of use, and effectiveness.

'Therapeutic chatbots were designed to provide emotional support mainly based on cognitive behavioural therapy (CBT)', he recalls.

Studies indicate that users often feel a subjective improvement in well-being after interactions with chatbots, which suggests their potential value as a supplement to traditional forms of psychological support. In the expert's opinion, however, it is quite possible that the placebo effect is responsible for part of this improvement.

According to Rządeczka, we must not forget that tools of this type should be supervised by an experienced therapist. The use of digital therapists is associated with the risk of inadequate interventions, among other things. 'There is a risk that chatbots may provide inappropriate or even harmful advice, especially in cases of complex mental disorders', Rządeczka points out. As an example, the specialist describes a situation in which a person with persecutory delusions is advised to gather evidence confirming their fears.

One of the main challenges is the lack of clear information on how chatbots identify and correct cognitive errors, which is particularly important when working with people struggling with mental disorders. 'The scientific literature still does not provide sufficient data on the mechanisms that allow chatbots to avoid reinforcing such errors, especially in comparison with therapeutic interventions conducted by humans', Rządeczka explains.

The availability of information on the process of training therapeutic chatbots is also limited. In addition, most studies focus on the short-term effects of using chatbots, while their long-term impact on the therapeutic relationship and the durability of improved mental health remains unclear.

'One of the significant risks associated with the use of therapeutic chatbots is also the risk of emotional addiction, resulting from their constant availability and ability to provide answers quickly, which strongly activates the so-called reward system in the brain', Rządeczka emphasises.

He adds that users often ignore warnings displayed during conversations that chatbots are not professional therapists, and treat their answers as an authoritative source of psychological advice. 'Such excessive trust can lead to a situation in which people struggling with emotional problems start to rely on chatbots instead of seeking help from qualified specialists', the researcher says.

According to him, this problem is exacerbated by the fact that chatbots are designed to provide users with maximum comfort and emotional support, which paradoxically can reduce their motivation to solve problems on their own and actively participate in the therapeutic process.

The expert points out that chatbots interpret information literally, which makes them susceptible to manipulation or selective disclosure of information by users who may avoid difficult topics or distort their experiences.

'Moreover, their inability to read non-verbal signals, such as tone of voice, facial expressions or body language, further limits their ability to detect hidden emotional problems or inconsistencies in statements', Rządeczka says.

Another problem is so-called reverse alignment. In the case of chatbots, this is the risk that users unconsciously adapt their way of thinking and language to the characteristic communication style of a given tool.

'Chatbots can shape users' thought patterns, leading them to simplified and schematic interpretations of emotional problems. Additionally (...) chatbots generate answers (...) that sound empathetic and convincing, but meet all the user's emotional expectations, without providing critical insight into the actual problem', the expert emphasises.

As a result, digital therapists can inadvertently reinforce pathological thought patterns and false beliefs, making it difficult for users to achieve real improvement in their mental state, Rządeczka explains.

He points out that in many countries there are no clear regulations on who is responsible for the advice provided in the event that interaction with a chatbot leads to a deterioration in the patient's mental state. This, in his opinion, can lead to numerous abuses and misunderstandings.

'Despite numerous challenges, therapeutic chatbots may play an important role in the near future as a complement to traditional forms of therapy. They can serve as emotional support tools between therapy sessions, provide simple self-help techniques, or monitor the patient's mood several times a day', the researcher explains.

In his opinion, it is important that their use takes place under the supervision of qualified specialists, and that the chatbots themselves are regularly updated and adapted to the individual needs of users.

Joanna Morga (PAP)

jjj/ bar/ amac/


