Can Artificial Intelligence Chatbots Replace Humans in Psychological Treatment?

July 1, 2023 14:55

Chatbot apps that specialize in mental health support using Artificial Intelligence (AI) technology are becoming increasingly popular around the world as medical resources remain limited following the COVID-19 pandemic.

These chatbots offer notable advantages in cost and in their ability to listen and respond to users. However, experts have also raised concerns about issues such as user data privacy and the accuracy of AI-generated advice.


Cost – the most important factor

Mental health support is a growing challenge around the world. The World Health Organization (WHO) estimates that before the COVID-19 pandemic, around 1 billion people worldwide were living with anxiety and depression, with 82% of them in low- and middle-income countries. The pandemic has increased the number of people suffering from mental health conditions globally by around 27%.

Even before the surge in demand in 2020, most people diagnosed with mental health conditions never received treatment, the WHO said. Many others – with few support services available in their countries, or deterred by the stigma attached to seeking help – chose not to pursue treatment at all.

A major barrier for people seeking mental health treatment is the often high cost. In the United States, for example, the cost of a counseling session can range from $100 to $250 for people without insurance.

In that context, it is understandable that many people are turning to AI-powered apps for mental health support. According to the Brookings Institution, patients with health insurance can access in-person therapy, while those without insurance are turning to chatbots because they are cheaper and available around the clock.

Digital tools dedicated to mental health support have been around for more than a decade. The International Journal of Medical Informatics reports that there are now more than 40 AI chatbots operating globally.

However, researchers warn that while the low – or even zero – cost of AI chatbots may be appealing, users must be wary of the gap in capability between a human expert and a chatbot.

For example, in May 2023 the National Eating Disorders Association (NEDA) began using a chatbot named Tessa in place of a helpline that humans had operated for more than 20 years.

But NEDA soon suspended Tessa after the chatbot gave harmful advice to people struggling with mental illness. In some cases, Tessa advised people who already had eating disorders to lose weight.

Easier to open up to AI

Aside from cost, anonymity and the knowledge that a chatbot cannot truly perceive or judge them are reasons why many people choose AI chatbots over a human therapist.

One user shared that they understood the chatbot was just a large language model, incapable of "recognizing" anything – and that this very limitation made it easier for them to express themselves.

Notably, a study published in the Journal of the American Medical Association in early 2023 evaluated chatbot and human-doctor responses to 195 patient questions randomly selected from a social media forum. The chatbot responses were rated "significantly higher in both quality and empathy" than those from the human doctors.

However, as the case of NEDA's Tessa shows, chatbots that deliver positive results in studies can perform very differently in real-world operation. NEDA itself has acknowledged that chatbots cannot replace human-operated helplines.

The researchers concluded that AI-powered chatbots should be explored further in clinical settings, for example by having them draft responses that physicians then review and edit. Used this way, the study found, chatbots could improve physician responses, reduce physician burnout, and improve patient outcomes.

Data privacy concerns

In a study published in May 2023, however, the Mozilla Foundation, a non-profit organization that supports and develops open-source projects, found that privacy remains a major risk for users of mental health support chatbots.

Of the 32 apps analyzed by the Mozilla Foundation, including popular apps Talkspace, Woebot, and Calm, 28 were flagged for “serious concerns about user data management.” Additionally, 25 failed to meet basic security standards, such as requiring users to have strong passwords.

Mozilla researcher Misha Rykov described these apps as “data-sucking machines masquerading as mental health apps.” It’s entirely possible that user information could end up being collected by insurance brokers, data brokers, and social media companies. Woebot, for example, shares personal information with third parties.

AI experts have warned that AI chatbots face the same privacy risks as traditional chatbots or any other online service that collects personal information. They hope countries will soon enact strict regulations to protect users from unethical AI practices, strengthen data security, and ensure consistent healthcare standards.

AI-powered chatbots alone are not the answer to the current mental health crisis, as more and more people struggle with severe anxiety and depression. But paired with doctors and other professionals, they can become an incredibly useful tool on the journey to better mental health. Ultimately, humanity should use technology to work with humans, not to replace them.

According to VNA
