Technological advancements, coupled with a desire to reduce inefficiencies in the workplace, have led to an increase in the use of artificial intelligence (AI) by employers, typically in recruitment and performance management.
Data protection considerations
However, employers need to be aware of their data protection obligations, and great care is needed when contemplating the use of AI to make decisions without human involvement.
The UK GDPR restricts employers from making solely automated decisions that have a significant effect on job applicants and workers except in limited circumstances, such as where the decision is necessary for entering into or performing the employment contract or where the data subject has consented. Employers are unlikely to meet these exemptions and should always ensure that there is meaningful human involvement in the outcome of any employment decision involving AI. The scope to process workers' health data in solely automated decisions is even more limited, and doing so should be avoided.
The overlap of data protection and employment law
The use of solely automated decisions in the workplace also presents a high risk of breaching UK equality law, and recent examples show how the use of AI can lead to allegations of unlawful discrimination.
In October 2018, an industry-leading retailer was reported to have scrapped a recruitment algorithm after its machine-learning system was found to favour male candidates over female candidates, creating bias. The algorithm had reportedly been trained on data sets based on patterns in CVs the company had received over recent years. Because the overwhelming majority of those CVs had apparently come from men, the resulting algorithm inadvertently discriminated on the basis of sex.
Another high-profile case, in March 2021, saw a company's use of real-time facial recognition software in its app challenged by some of its drivers for being inconsistent and inaccurate for people with darker skin tones. The company required its drivers to identify themselves using software that matched the captured image to an image previously stored on its database. However, it is alleged that the software was not consistent in its recognition of darker-skinned faces, leading to some drivers allegedly being denied the use of the app, losing access to shifts and, in some cases, having their accounts terminated. The issue is the subject of an ongoing employment tribunal claim for race discrimination against the company, which is being supported by the Independent Workers' Union of Great Britain.
We would encourage employers to follow these five tips when using AI to make automated decisions in the workplace:
Ensure that fully trained, experienced individuals are responsible for the development and use of AI, to minimise the risk of bias.
Establish clear policies and practices around the use of any AI to make automated decisions.
Ensure that AI is used to assist in making workplace decisions, not wholly relied upon.
Identify appropriate persons to actively weigh up and interpret automated decisions in the workplace before applying them to the individual.
Remember that solely automated decision-making gives data subjects additional data protection rights; ensure that a data protection impact assessment is in place, that any workplace privacy notice is updated, and that there is an effective system for individuals to challenge any solely automated decision.