
AI over therapy?

By Erin Molloy-Brookes

The use of AI has skyrocketed over the past year, not only in the workplace but also in our daily lives.

According to the EY AI Sentiment Index Study, 70% of UK respondents reported using AI daily within the past six months. 

AI’s growing role in everyday life

While some people engage with AI for simple tasks like generating shopping lists or correcting grammar in emails, others are turning to models like ChatGPT for emotional support and even therapy.

Gen Z and the mental health crisis


Generation Z is facing a mental health crisis: Health Generation reports that one in three people aged 18-24 discloses symptoms of a common mental disorder. Alarmingly, among women the figure rises to 41%.

Moreover, a survey by the CQC found that a third of people waited three months or longer for their first mental health treatment. With this in mind, it isn’t surprising that many, especially within Gen Z, are turning to AI for support with their mental health needs.

The risks of replacing human support with AI

Despite advantages such as easy accessibility, health professionals are concerned about growing dependence on unregulated technology. AI models are trained on extensive datasets drawn from the internet, which often include unfiltered and potentially harmful information. The responses these models generate can pose significant risks, particularly when users raise sensitive topics.

In a tragic 2024 incident, a 14-year-old took his own life after expressing suicidal thoughts to a popular AI chatbot. His mother accused the chatbot of exacerbating her son’s depression, underlining the dangers of relying on these technologies in a crisis.

AI has its limitations

Many argue that AI chatbots lack sufficient regulation and safety measures. While they can be useful for simple tasks, in complex matters such as mental health they can prevent individuals in crisis from getting the help they need, leaving them at risk of harming themselves or others.

AI companies need to be cautious about users relying on their models for mental health support and should work towards implementing proper safeguards. Even with measures in place, we should be wary of over-reliance on technology and emphasise the importance of human connection when addressing mental health. AI can be useful in certain situations, but we must remember its limitations.
