No quick tech fix for AI bias against women job applicants

Date: Wed 3 Apr 24

Elisabeth Kelan

Experts developing AI systems used in job recruitment accept that these systems can be biased against women but mistakenly believe the problem can be solved entirely by technological fixes, research says.

In a unique glimpse into the attitudes of AI developers, Professor Elisabeth Kelan interviewed 69 experts based in Britain and abroad on the use of the technology at work, including for recruitment.

Professor Kelan, from Essex Business School at the University of Essex, told the British Sociological Association’s online annual conference that the experts accepted that AI systems could create algorithmic bias against women.

Examples included Amazon shutting down an AI tool it used to recruit software developers after finding that it automatically downgraded women applicants. The tool had been trained on past CVs, few of which came from women.

Professor Kelan told the conference: “What happens in machine learning is that you feed data into a computer. The computer builds a model based on this data, and this model then makes predictions. So what we know from prior research is that machine learning is very much about recognising past patterns and then potentially predicting those past patterns into the future.”
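
The mechanism she describes can be sketched in a few lines of code. The following is a hypothetical illustration, not anything from the research itself: an invented hiring history in which men were hired more often than women is fed to an off-the-shelf classifier, which then reproduces that historical pattern in its predictions.

```python
# A minimal, hypothetical sketch of the pattern described above: a model
# trained on past hiring decisions simply reproduces those decisions.
# All data and feature encodings here are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [years_of_experience, gender] with gender coded 1 = man, 0 = woman.
# In this invented history, men were hired far more often than women.
X_past = [[5, 1], [3, 1], [4, 1], [6, 0], [5, 0], [2, 1]]
y_past = [1, 1, 1, 0, 0, 1]  # 1 = hired, 0 = rejected

model = LogisticRegression().fit(X_past, y_past)

# The model now "predicts the past pattern into the future": an equally
# experienced woman scores lower than a man purely because of history.
print(model.predict_proba([[5, 1]])[0][1])  # man, 5 years' experience
print(model.predict_proba([[5, 0]])[0][1])  # woman, 5 years' experience
```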

Although the AI experts interviewed by Professor Kelan accepted that bias could occur during this process, almost all of them believed it could be fixed. “In the interviews that I conducted, most people could be described as techno-optimists.”

Many of them said that AI bias was simply a reflection or amplification of human bias, and that it could be corrected by improving the data, training the humans involved in creating the data, ‘blinding’ the algorithm to gender, and auditing algorithms. Some believed that even with bias, AI was still a better judge of job applicants than humans.

She did not accept that bias could be overcome entirely. “I’m a bit sceptical that algorithmic bias can in fact be completely mitigated because we know that any piece of data will already have the traces of society written onto it.

“We will see that these patterns are probably crystallised and repeated and often amplified over and over again.”

She said that AI could be used to detect the gender of applicants from their applications even if they had not stated this openly. “One of the most common examples was language, as certain words have been identified in datasets as used more regularly by women, such as ‘fantastic’, ‘wonderful’, ‘awesome’ and ‘happy’, while words such as ‘inexpensive’, ‘cheap’ and ‘best quality’ are used more by men than women.

“So from these words alone, AI might come to the conclusion that a person has a certain gender.” The system might then be biased against the applicant on the grounds that men were more often appointed to the job in the past than women.
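
As a hypothetical illustration of how word choice alone can act as a gender proxy, the sketch below scores a piece of text against the word lists quoted above. The scoring function and word sets are invented for illustration; real systems learn such associations statistically from data rather than from hand-written lists.

```python
# Invented word lists mirroring the examples quoted above; the phrase
# 'best quality' is simplified here into its two component words.
WOMEN_ASSOC = {"fantastic", "wonderful", "awesome", "happy"}
MEN_ASSOC = {"inexpensive", "cheap", "best", "quality"}

def inferred_gender_signal(application_text: str) -> int:
    """Crude score: positive leans 'woman', negative leans 'man'."""
    words = [w.strip(".,!?") for w in application_text.lower().split()]
    return sum(w in WOMEN_ASSOC for w in words) - sum(w in MEN_ASSOC for w in words)

print(inferred_gender_signal("I led a wonderful, happy team"))           # positive
print(inferred_gender_signal("I delivered best quality at cheap cost"))  # negative
```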

In the case of Amazon, “one of the examples we regularly hear about is Amazon’s failed attempt to recruit new software developers through artificial intelligence. In this case the data were CVs from people already working in software engineering at Amazon. These CVs were used to train a model, and this model predicted who should be hired in the future and who shouldn’t.

“And it became quite obvious that anybody who had anything related to women on their CV was filtered out. So if you were the captain of the women’s chess club you had no chance of being hired in this example because you were filtered out. So the technology had clearly learned a past pattern and projected this pattern into the future.”
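
The reported failure mode can be sketched schematically. The function below is an invented stand-in for the trained model, not Amazon’s actual system: it shows how a learned penalty on any mention of ‘women’ effectively filters those CVs out.

```python
# A schematic reconstruction (not Amazon's actual code) of the reported
# failure: CVs mentioning anything related to women are filtered out
# because the learned pattern treated such terms as a negative signal.
def learned_score(cv_text: str) -> int:
    """Toy stand-in for the trained model's score for a CV."""
    if "women" in cv_text.lower():
        return 0   # effectively filtered out
    return 10      # passed on to recruiters

print(learned_score("Captain of the women's chess club; 5 years Python"))  # 0
print(learned_score("Captain of the chess club; 5 years Python"))          # 10
```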

Another example of bias was the way that people’s gender was classified as either male or female with no other options available. “Almost all of those classifications tend to follow a gender binary. So even though there is an acknowledgement in some of the machine learning and AI community that gender goes beyond the binary, most of the datasets used in machine learning have gender labelled as a binary.” Because of this, AI systems would probably predict gender as a binary.
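
A minimal sketch of the labelling issue: if a dataset schema only admits two gender values, any model trained on it can only ever predict those two values. The schema and record below are invented for illustration.

```python
# An invented dataset schema showing how a binary label space constrains
# every downstream prediction to that binary.
from enum import Enum

class GenderLabel(Enum):  # the binary most training datasets use
    MALE = 0
    FEMALE = 1

# Anyone outside the binary cannot be represented in this schema, so a
# model trained on these labels can only ever output MALE or FEMALE.
record = {"applicant_id": 42, "gender": GenderLabel.FEMALE}
print(record)
```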

Of the 69 people Professor Kelan interviewed, 32 were based in the UK. They were all experts on technologies in the workplace, some working on AI ethics and others in hiring technology companies.

“What makes this research unique is that I spoke to people actually involved in shaping AI tools in hiring and beyond. Most previous research has focused on rather theoretical concerns and lacked the perspective of people actually thinking about and working on AI technologies in hiring.”

The research was funded by a Leverhulme Trust Major Research Fellowship [MRF-2019-069] and the British Academy [SRG20\200195].
