Britain’s privacy watchdog is set to investigate whether employers using artificial intelligence in their recruitment systems could discriminate against ethnic minorities and people with disabilities.
John Edwards, the Information Commissioner, has announced plans to investigate automated candidate screening systems, including employer assessment techniques and the artificial intelligence software they use.
In recent years, concerns have grown that AI can discriminate against minorities and others because of the speech or writing patterns they use. Many employers use algorithms to sift digital job applications, saving them time and money.
Regulation has been seen as slow to respond to the challenge presented by the technology, with the TUC and the All-Party Parliamentary Group on the Future of Work keen to see laws introduced to curb any misuse or unintended consequences of its use. Frances O’Grady, General Secretary of the TUC, said: “Without fair rules, the use of AI at work could lead to widespread discrimination and unfair treatment, particularly for those in precarious and insecure ‘gig economy’ work.”
Edwards promised that his plans for the next three years would take into account “the impact that the use of AI in recruitment could have on neurodiverse people or ethnic minorities, who were not part of the testing of this software”.
Autism, ADHD and dyslexia are included under the umbrella term “neurodiverse”.
A survey of executive recruiters conducted by the consulting firm Gartner last year found that nearly all reported using AI for some part of the recruiting and hiring process.
Using AI in the recruitment process is seen as a way to eliminate management bias and prevent discrimination, but it could have the opposite effect, as the algorithms themselves can amplify human bias.
Earlier this year, Estée Lauder faced legal action after two employees were dismissed by an algorithm. Last year, facial recognition software used by Uber as part of its AI-driven processes was accused of racial bias. And in 2018, Amazon dropped a trial of a recruiting algorithm that was found to favor men and reject applicants on the grounds that they had attended women-only colleges.
A spokesperson for the Information Commissioner’s Office said: “We will investigate concerns about the use of algorithms to screen recruitment applications, which could negatively impact employment opportunities for people from various walks of life. We will also set out our expectations through updated guidance for AI developers to ensure algorithms treat people and their information fairly.”
“These algorithms have essentially been left to their own devices, leading to thousands of people having their opportunities negatively affected” – Natalie Cramp, data science expert
The role of the ICO is to ensure that people’s personal data is kept safe by organizations and is not misused. It has the power to fine organizations up to 4% of their annual global turnover and to order them to change their practices.
Under the UK’s General Data Protection Regulation (which is enforced by the ICO), individuals have the right to non-discrimination in relation to the processing of their data. The ICO has warned in the past that AI-based systems could lead to results that disadvantage particular groups if the dataset on which the algorithm is trained and tested is not complete. The UK Equality Act 2010 also provides people with protection against discrimination, whether caused by a human or automated decision-making system.
In the United States, the Department of Justice and the Equal Employment Opportunity Commission warned in May that commonly used algorithmic tools, including automatic video interview systems, were likely to be discriminatory against people with disabilities.
Legal Commentary
Joe Aiston, a senior lawyer at Taylor Wessing, said that in addition to the issues of unconscious bias “which inevitably have a regular impact on the hiring processes of companies where human decisions are made”, care should be taken when using any form of artificial intelligence software when recruiting.
“A particular concern for employers is that software they may choose to use to streamline the selection process could be using discriminatory selection processes without their knowledge” – Joe Aiston, Taylor Wessing
While some AI-based recruiting software is marketed as avoiding bias and potential discrimination in the recruiting process, depending on the algorithms and decision-making processes used there is a risk that such software may itself lead to discrimination in recruitment. For example, if recruiting software analyzes writing or speech patterns to determine who might be the weakest candidates, it could have a disproportionate negative impact on people who do not have English as their first language or who are neurodiverse. A decision by AI to reject such a candidate for a position solely on that basis could result in a discrimination claim against the employer even though that decision was not made by a human being.
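One way an employer might begin to assess that kind of risk is to compare selection rates across candidate groups, in the spirit of the “four-fifths rule” that US regulators use as a rough screen for adverse impact. The sketch below is purely illustrative and is not drawn from the ICO, Taylor Wessing or any vendor mentioned in this article; the group labels, figures and 80% threshold are assumptions made for the example.

```python
# Illustrative sketch only: a simple "selection rate by group" audit of
# screening outcomes. The data and group labels are hypothetical; a real
# audit would need representative data and legal advice.

from collections import defaultdict

# Hypothetical screening outcomes: (candidate_group, passed_screening)
outcomes = [
    ("native_english", True), ("native_english", True), ("native_english", False),
    ("native_english", True), ("non_native_english", False),
    ("non_native_english", True), ("non_native_english", False),
    ("non_native_english", False),
]

totals = defaultdict(int)
passes = defaultdict(int)
for group, passed in outcomes:
    totals[group] += 1
    if passed:
        passes[group] += 1

# Selection rate per group, compared with the highest-scoring group
rates = {group: passes[group] / totals[group] for group in totals}
best = max(rates.values())

for group, rate in rates.items():
    # The "four-fifths rule" flags a selection rate below 80% of the
    # highest group's rate as possible adverse impact.
    flag = "possible adverse impact" if rate < 0.8 * best else "ok"
    print(f"{group}: selection rate {rate:.2f} ({flag})")
```

A check like this cannot establish or rule out discrimination on its own, but it illustrates the sort of routine human oversight of outcomes that the commentators quoted here argue is often missing.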
“A particular concern for employers is that the software they may choose to use to streamline the selection process could be using discriminatory selection processes without their knowledge. It is therefore important that the software provider be made to clearly define the selection criteria and algorithms it is expected to use and how they will be applied, so that the company can assess any potential risk of discrimination and have it rectified.”
The law and regulators were catching up with this relatively new area of potential risk, Aiston added, but it was likely that new regulation would be introduced.
Natalie Cramp, CEO of data science consultancy Profusion, said the ICO’s investigation into whether AI systems show racial bias was very welcome and overdue. This should only be a first step in combating the danger of discriminatory algorithms, she added.
“There have been a number of recent incidents where organizations have used algorithms for functions such as recruitment, and the result has been racial or sexist discrimination. In many cases, the problem was not discovered for several months or even years. This is because the bias was either built into the algorithm itself or came from the data that was used. Critically, there has then been little human oversight to determine whether the algorithm’s outputs are not only correct but also fair.
“These algorithms were basically left to fend for themselves, leading to thousands of people having their opportunities negatively affected,” Cramp said.
“Ultimately, an algorithm is a subjective view written into code, not an objective one. Organizations need more training and education to both verify the data they use and challenge the results of any algorithms. There should be industry-wide best practice guidelines that ensure human oversight remains a key part of AI. Organizations cannot rely on a single team or individual to create and manage these algorithms.”
An ICO investigation alone will not solve these problems, she added. “Without this safety net, people will quickly lose faith in AI and with that will go the enormous potential for it to revolutionize and improve all of our lives.”