The project, BIAS: Responsible AI for Labour Market Equality, will look at how artificial intelligence (AI) can lead to unintentional bias in the processes of job advertising, hiring and professional networking, which are increasingly digitalised.
Lancaster University will lead the three-year project, working alongside colleagues in the University of Essex’s Department of Mathematical Sciences and at the University of Alberta. BIAS is one of ten interdisciplinary projects, worth a total of £8.2 million, in which UK and Canadian researchers have joined forces for the first time to support the development of responsible AI.
The BIAS researchers will work with industrial partners to understand gender and ethnic bias within human resource processes such as hiring and professional networking. They will analyse data from across hiring and recruitment platforms and develop new tools and protocols to mitigate that bias, allowing companies, HR departments and recruitment agencies to tackle it in future recruitment.
Project leader Professor Monideepa Tarafdar, from Lancaster University Management School, said: “AI has reached the stage where the rubber is meeting the road and organisations are coming up against the road bumps. Bias is a huge one. We need to tackle labour market inequalities caused by gender and ethnic biases in hiring, job advertising and professional socialisation. They prevent equal and sustainable socio-economic development across all groups in society, and the recruitment process can often be the start of these issues. There are different causes and sources of this bias, and we want to investigate and mitigate them.
“In both the UK and Canada, access to work and its rewards remain shaped by social distinctions such as gender, race and ethnicity, and the use of AI is known to exacerbate such inequalities by perpetuating existing gender and ethnic biases in hiring and career progression.”
Dr Hongsheng Dai, from the Department of Mathematical Sciences at Essex, added: “Data analysis algorithms depend on their input data. However, that data can itself be biased, which means that even though there is nothing wrong with the algorithm, the analysis may lead to discrimination and unintentional bias in our society.

“We will be developing new tools and systems to mitigate this bias. If we do not acknowledge and address these gender and ethnic biases, the rapid development of AI algorithms could bring more discrimination into our lives.”
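Dr Dai’s point can be made concrete with a small, hypothetical sketch, not drawn from the BIAS project’s own tooling: an ordinary classifier trained on historical hiring decisions that penalised one group will faithfully reproduce that penalty in its recommendations, even though nothing in the algorithm itself is wrong. All variable names, numbers and the choice of model below are illustrative assumptions.

```python
# Illustrative sketch (hypothetical data, not from the BIAS project):
# a standard classifier trained on biased historical hiring labels
# inherits the bias present in its input data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: one protected attribute, one genuine skill score.
group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)     # identically distributed in both groups

# Historical hiring labels: driven by skill, but with an assumed
# penalty applied to group B. This is the "biased input data".
logits = 1.5 * skill - 1.0 * group
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

# Train a perfectly ordinary model on the biased labels.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# The model's recommendations reproduce the historical penalty:
# equally skilled applicants are selected at different rates by group.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: selection rate {pred[group == g].mean():.2%}")
```

Running the sketch shows a markedly lower selection rate for group B despite both groups having identical skill distributions, which is the kind of data-driven discrimination the project’s tools aim to detect and mitigate.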
The research ties in with the UK's Industrial Strategy, which aims at “putting the UK at the forefront of the AI and data revolution”. The project will look to develop a protocol for responsible and trustworthy AI that reduces labour market inequalities by tackling gender and ethnic/racial biases in job advertising, hiring and professional networking processes.