AI recruitment software is “automated pseudoscience”

Claims that AI-powered recruiting software can increase the diversity of new hires in the workplace have been debunked in a study published this week.
Proponents of machine-learning algorithms trained to analyze body language and predict candidates’ emotional intelligence believe the software offers a fairer way of evaluating workers because it doesn’t take gender and race into account. They argue that these new tools could eliminate human bias and help companies meet their diversity, equity, and inclusion goals by hiring more employees from underrepresented groups.
But a paper published in the journal Philosophy and Technology by two University of Cambridge researchers shows that the software is little more than “automated pseudoscience”. Six computer science students replicated a commercial model used in industry to study how AI recruitment software predicts people’s personalities from images of their faces.
Dubbed the “Personality Engine,” the system looks for the “big five” personality traits: extraversion, agreeableness, openness, conscientiousness, and neuroticism. The students found that the software’s predictions were affected by changes in people’s facial expressions, lighting, background, and clothing choices. These changes have nothing to do with a job seeker’s skills, so using AI for recruitment purposes is flawed, the researchers argue.
“The fact that changes in light, saturation, and contrast affect your personality score is evidence of this,” Kerry Mackereth, a postdoctoral researcher at the University of Cambridge’s Centre for Gender Studies, told The Register. The paper’s findings are supported by previous studies showing that wearing glasses and a headscarf in a video interview, or adding a bookshelf in the background, can decrease a candidate’s scores for conscientiousness and neuroticism, she noted.
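To make that sensitivity concrete, here is a minimal, hypothetical sketch; it is not the Cambridge team’s code or any vendor’s actual model. The stand-in scorer and the synthetic “face” image are assumptions for illustration only. The point it demonstrates is general: any model that derives a score from raw pixels will shift its output when only brightness, contrast, or saturation changes, even though nothing about the person has.

```python
# A minimal sketch (NOT the researchers' "Personality Engine" or any real
# commercial model): a toy pixel-based scorer whose output moves when only
# photographic conditions change.
import numpy as np
from PIL import Image, ImageEnhance

rng = np.random.default_rng(0)

def toy_conscientiousness_score(img: Image.Image) -> float:
    """Hypothetical scorer: a fixed random projection over raw pixels,
    standing in for a trained network's learned features."""
    pixels = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0
    weights = np.random.default_rng(42).normal(size=pixels.size)
    logit = pixels.ravel() @ weights / np.sqrt(pixels.size)
    return float(1.0 / (1.0 + np.exp(-logit)))  # squash to [0, 1]

# The same synthetic "face" under conditions that say nothing about the person.
face = Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))
variants = {
    "original": face,
    "brighter": ImageEnhance.Brightness(face).enhance(1.4),
    "low contrast": ImageEnhance.Contrast(face).enhance(0.6),
    "desaturated": ImageEnhance.Color(face).enhance(0.5),
}

for name, img in variants.items():
    print(f"{name:>12}: score = {toy_conscientiousness_score(img):.3f}")
```

Running the sketch prints a different “conscientiousness” score for each variant, which is the failure mode the researchers describe: the input that moved the score was the lighting, not the candidate.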
Mackereth also explained that these tools are likely trained to look for attributes associated with previous successful candidates and are therefore more likely to recruit similar-looking individuals rather than encourage diversity.
“Machine learning models are understood to be predictive; however, since they are trained on past data, they repeat decisions made in the past rather than predict the future. As the tools learn from this pre-existing data set, a feedback loop is created between what companies perceive as the ideal employee and the criteria automated recruiting tools use to select candidates,” she said.
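The feedback loop Mackereth describes can be simulated in a few lines. The sketch below uses invented numbers, not data from the paper: a historical hire pool skewed 80/20 toward one group, a “model” that simply scores applicants by similarity to the average past hire, and each round’s hires fed back into the training data.

```python
# A minimal sketch of a hiring feedback loop (hypothetical numbers; not
# from the paper). Scoring by similarity to past hires keeps reproducing
# the skew in the historical data.
import numpy as np

rng = np.random.default_rng(1)

def applicants(n):
    """A 50/50 applicant pool from two groups with a mild feature shift."""
    group = rng.integers(0, 2, n)
    features = rng.normal(loc=group[:, None] * 0.5, size=(n, 4))
    return group, features

# Historical hires skew 80/20 toward group 0.
hist_group = (rng.random(100) > 0.8).astype(int)
hist_feat = rng.normal(loc=hist_group[:, None] * 0.5, size=(100, 4))

for round_ in range(5):
    centroid = hist_feat.mean(axis=0)                  # "ideal employee"
    group, feat = applicants(1000)
    score = -np.linalg.norm(feat - centroid, axis=1)   # closer = better
    hired = np.argsort(score)[-50:]                    # hire the top 50
    hist_feat = np.vstack([hist_feat, feat[hired]])    # feed hires back in
    print(f"round {round_}: share of group 1 among hires = "
          f"{group[hired].mean():.2f}")
```

Even though each round draws from a balanced applicant pool, the under-represented group’s share of hires stays well below half, because every round’s hires pull the “ideal employee” profile back toward the historical skew.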
The researchers believe the technology needs to be more tightly regulated. “We worry that some vendors will wrap ‘snake oil’ products in shiny packaging and sell them to unsuspecting customers,” said co-author Eleanor Drage, a postdoctoral researcher also at the Centre for Gender Studies.
“While companies may not act in bad faith, there is little accountability for how these products are built or tested. Therefore, this technology and the way it is being marketed could become a dangerous source of misinformation about how recruitment can be ‘equalized’ and made fairer,” she added.
Mackereth said that although the European Union’s AI Act classifies such recruiting software as “high risk,” it remains unclear what rules will be enforced to mitigate those risks. “We think these tools and the marketing claims made about them need to be scrutinized much more seriously, and that the regulation of AI-powered HR tools should play a much more prominent role in the AI policy agenda.”
“While the harms of AI-powered recruitment tools appear to be far more latent and insidious than high-profile cases of algorithmic discrimination, they have the potential to have long-term impacts on employment and socioeconomic mobility,” she concluded. ®
https://www.theregister.com/2022/10/13/ai_recruitment_software_diversity/