The Last Word September/October 2021: Marie Buda

Use AI wisely in tackling mental health in the office.

In Recruiter’s July-August 2021 Special Report, we were introduced to the intriguing case of how Hydro Energy Group was using artificial intelligence (AI) to monitor the mental health of contractors working on isolated, off-shore sites.

As a cognitive neuroscientist working in the field of emotional and social intelligence, I am increasingly being consulted on how to use AI and other technologies to monitor well-being. For example, I am asked how to track the stress levels of employees in high-risk jobs to reduce PTSD [post-traumatic stress disorder], or whether at-risk individuals can be identified for mental illness by monitoring simple behaviours such as typing speed and click rates.

It is interesting that companies such as Hydro Energy Group have taken measures using AI to protect employee well-being. In the current Covid climate, where 'The Great Resignation' is under way and people are swapping jobs primarily due to burnout, companies that make active efforts to protect employee well-being will find themselves ahead of the recruitment game.

However, it does bring up a wealth of ethical and practical questions. Firstly, is it even possible to detect and diagnose mental health through AI? And secondly, should companies be dipping their toes into this sphere in the first place?

With regard to the former, it looks increasingly likely that AI will indeed play a key part in mental health detection and diagnosis. We are already seeing reports of accurate detection of depression through social media post history. Research labs have used AI to predict fluctuations in bipolar disorder by monitoring phone-based behaviours, such as the number of calls made, length of phone usage and typing speed.

This is not to say that AI will take over the job of psychiatrists and clinical psychologists – quite the opposite. Detecting mental illness is one thing; treatment is another. While there are start-ups developing 'therapy bots' designed to converse empathically and display active listening, we are still far from designing effective software that can completely replace human contact and conversation. Therefore, any company that claims to use AI software to monitor well-being must show evidence of being able to follow up detection with relevant treatment and care.

While AI shows promise in the field of mental health diagnosis, I do strongly question whether companies should use this technology to monitor worker well-being. It should be the employee’s choice whether they share their emotional health history with their company. I can’t see any employee being comfortable having their email content scanned, or their heartbeat or breathing rates continuously analysed. Emotional well-being is a very private issue and should remain as such, lest this information be abused in some form. Technologies are at risk of security issues, and the last thing people want is their most vulnerable pieces of information being maliciously shared.

AI currently shows much promise in helping us tackle mental health, but we must be equally vigilant about using these tools wisely.

Marie Buda is a cognitive neuroscientist specialising in human machine understanding, and was formerly director of Studies in Psychological and Behavioural Sciences at Cambridge University
