Could Artificial Intelligence be used to prevent suicide?

Olivia Brookhouse    19 August, 2019

The UK and Europe are experiencing a mental health crisis: suicide is the most common cause of death for men aged 20-49 in England and Wales. According to the World Health Organization, 10% of children (aged 5-16) have a clinically diagnosable mental health problem. With Artificial Intelligence and Machine Learning technologies being rolled out across many healthcare services, could AI also be part of the solution to preventing suicide?

Artificial Intelligence in healthcare can be used to aid early detection, diagnosis and decision-making for heart disease, cancer and many other conditions, not only helping to reduce the strain on the NHS but in many cases providing a more accurate and efficient service. AI can review and translate mammograms 30 times faster than conventional analysis, at a 99% accuracy rate, reducing the need for unnecessary biopsies. Could the same concept be applied to detect the often invisible illnesses that lead to suicide?

Employing Big Data to detect symptoms of depression is something LUCA has talked about before. Early detection of mental health problems could help local authorities provide appropriate services to those most in need. It is estimated that 70% of children and adolescents who experience mental health problems have not had appropriate interventions at a sufficiently early age. “It’s okay not to be okay” went viral as a hashtag on social media last year, encouraging those struggling to reach out, regardless of the severity of their difficulties. However, with limited funding and long waiting lists, even those who seek help often do not receive it fast enough, and in some cases will not receive any help at all.

So how could AI help? Crisis Text Line in the US is using machine learning to extract words and emojis that indicate someone may be at a higher risk of suicide or self-harm, and the system prioritises incoming messages according to need. At Vanderbilt University, a study assessed the effectiveness of machine learning in anticipating high-risk cases. The results showed that the system predicted future suicide attempts with 84 to 92 percent accuracy within one week of a suicide event. Healthcare services must look closely at these kinds of studies, considering we live in a world where one person loses their life to suicide every 40 seconds, amounting to 800,000 deaths every year.
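The details of these systems are proprietary, but the general triage idea can be sketched very simply: score each incoming message against risk indicators and answer the highest-scoring conversations first. The toy Python below does this with a hand-written term list; the terms, weights and example messages are invented for illustration, whereas a real service such as Crisis Text Line would use a classifier trained on historical conversations and outcomes.

```python
# A minimal triage sketch. The risk terms and weights are invented for this
# example; a production system would use a trained classifier, not a lexicon.
RISK_TERMS = {
    "pills": 3.0,
    "goodbye": 2.0,
    "alone": 1.0,
    "😢": 0.5,
}

def risk_score(message: str) -> float:
    """Sum the weights of the risk terms that appear in the message."""
    text = message.lower()
    return sum(weight for term, weight in RISK_TERMS.items() if term in text)

def triage(messages: list[str]) -> list[str]:
    """Order the incoming queue so the highest estimated risk is answered first."""
    return sorted(messages, key=risk_score, reverse=True)

if __name__ == "__main__":
    queue = [
        "anyone around to chat?",
        "i want to say goodbye, i'm done",
        "feeling alone tonight 😢",
    ]
    for msg in triage(queue):
        print(f"{risk_score(msg):4.1f}  {msg}")
```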

One of the hardest aspects of suicide prevention is providing enough data to machine learning systems to ensure they can accurately assess the severity of each situation. Using patient records, Amazon has launched a new service called Amazon Comprehend Medical that uses machine learning to identify trends in diagnoses, treatments, medication dosage and symptoms. The sources include prescriptions, doctors’ notes, audio interviews and test reports. However sophisticated the technology, the process required to input the data is still arduous.

Identifying this information today is a manual and time-consuming process, which either requires data entry by highly skilled medical experts, or teams of developers writing custom code and rules to try to extract the information automatically.
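As a rough illustration of the automated route, the snippet below calls Amazon Comprehend Medical through boto3 and prints the entities it finds (conditions, medications, dosages) in a short free-text note. The note is invented, and the sketch assumes AWS credentials and a region are already configured and that the account has access to the service.

```python
import boto3

# Assumes AWS credentials and a region are configured (environment variables
# or ~/.aws/config) and the account has access to Comprehend Medical.
client = boto3.client("comprehendmedical")

note = (
    "Patient reports low mood and poor sleep for six weeks. "
    "Started sertraline 50 mg daily two weeks ago."
)  # invented example text

response = client.detect_entities_v2(Text=note)

# Each entity carries a category (e.g. MEDICAL_CONDITION, MEDICATION),
# a type, and a confidence score.
for entity in response["Entities"]:
    print(f'{entity["Category"]:>20}  {entity["Text"]}  (score {entity["Score"]:.2f})')
```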

Concerns over data privacy have been raised in response to many of these developments. Never a stranger to controversy, Facebook has introduced suicide prevention algorithms in the US which compile phrases found in posts on the platform to identify people at risk. However, medical experts are concerned that Facebook is compiling pseudo-health information and categorizing people accordingly. Since Facebook introduced livestreaming on the site, it has received complaints over the streaming of illicit content, including suicides. If it deems a case serious enough, the site will now contact emergency services to dispatch immediate help. Problems occur when ‘high risk’ individuals are wrongly identified and unnecessary measures are taken.

Is it better to be safe than sorry, or does this cross a line? Whilst Facebook has rolled out its suicide prevention algorithms in the US, expansion across the Atlantic has stalled in the EU due to data privacy concerns.

Communication experts are also sceptical about the ability to identify complex mental health problems from a social media platform alone, especially in cases that are less overt. Verbal components convey only around a third of human communication, so nonverbal components such as facial expressions are important for the recognition of emotion. Researchers at Stanford have recently explored the use of machine learning to measure the severity of depressive symptoms by analyzing people’s spoken language and 3-D facial expressions, aiming to give a more rounded analysis.
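The Stanford system is a full research model, but the underlying “more rounded analysis” idea of combining signals can be sketched as simple late fusion: extract one feature vector per modality, concatenate them, and regress against a clinician-rated severity score. The sketch below uses random placeholder arrays in place of real speech and facial features, so the printed score is meaningless; only the fusion pattern is the point.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Random placeholders standing in for real per-person features, e.g. 64-dim
# speech embeddings and 32-dim 3-D facial-expression descriptors.
n = 200
speech_feats = rng.normal(size=(n, 64))
face_feats = rng.normal(size=(n, 32))
severity = rng.uniform(0, 27, size=n)  # e.g. a PHQ-style symptom score

# Late fusion in its simplest form: concatenate the modalities, fit one regressor.
X = np.concatenate([speech_feats, face_feats], axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, severity, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out R^2 (meaningless on random data):", model.score(X_test, y_test))
```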

Whilst AI has a large role to play in the detection of disease, could it also be utilized for treatment? Woebot is a conversational chatbot which aims to identify symptoms of anxiety and depression in young teens. The chatbot tracks moods through graphs and displays progress every week. Then, using Cognitive Behavioral Therapy (CBT) techniques, the chatbot creates an “experience of a therapeutic conversation for all of the people that use him.”
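Woebot’s internals are not public, but the mood-tracking side of such a chatbot is easy to picture: log a self-reported mood at each check-in and summarise it week by week. The sketch below, with invented check-in data, prints the kind of weekly trend a user might be shown.

```python
from collections import defaultdict
from datetime import date
from statistics import mean

# Invented check-ins: (date, self-reported mood on a 1-10 scale).
check_ins = [
    (date(2019, 8, 5), 3), (date(2019, 8, 6), 4), (date(2019, 8, 8), 5),
    (date(2019, 8, 12), 5), (date(2019, 8, 14), 6), (date(2019, 8, 16), 7),
]

# Group the ratings by ISO week and average them to show weekly progress.
weekly = defaultdict(list)
for day, mood in check_ins:
    weekly[day.isocalendar()[1]].append(mood)

for week, moods in sorted(weekly.items()):
    avg = mean(moods)
    print(f"week {week}: average mood {avg:.1f}  " + "#" * round(avg))
```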

AI may become an advanced assistant, but will it ever replace a doctor? The reality is that AI in healthcare is still at a developing stage, and at an even earlier stage when it comes to detecting and responding to complex emotions, especially those which individuals will often try their hardest to conceal. If AI could be used to detect the cases in which “nobody saw it coming”, and to personalize a response to console the individual or help doctors in their analysis, the benefits could be enormous.

To stay up to date with LUCA, visit our Webpage, subscribe to LUCA Data Speaks and follow us on Twitter, LinkedIn and YouTube.
