Behavioral study shows that job candidates prefer AI tools used in the recruiting process that are “blind” to characteristics such as race, age or gender.
Job candidates may be wary of the hiring process if a company uses artificial intelligence to screen candidates and facilitate hiring decisions, a Northeastern University expert says, but their perception improves when they learn that an algorithm is “blind” to characteristics such as gender, race or age.
A group of researchers, including Yakov Bart, professor of marketing at Northeastern, led a behavioral experiment to see how people’s perceptions of fairness change based on what they are told about the algorithm used in the hiring process.
“Our findings indicate that people perceive hiring algorithms as more procedurally fair when companies adopt a ‘fairness through ignorance’ approach to mitigating bias,” Bart says. “They are also likely to have a more positive opinion of companies that use this approach and are more motivated to apply for open positions.”
AI algorithms have enabled companies to automate aspects of the recruiting process, Bart says. They are now an integral part of the hiring decision-making process that affects individuals and their lives.
Job candidates and relevant stakeholders are generally concerned about bias and unfair treatment in the recruitment process, he explains, especially when it comes to artificial intelligence. Potential job candidates are hesitant to use technologies like AI in their daily lives unless they believe the algorithms behave fairly.
This subjective perception of fairness is very important, Bart explains, because candidates’ opinions on it can negatively affect a company’s reputation and its ability to attract top talent, even if it uses objectively fair hiring algorithms.
“Ultimately, if something is objectively true, but people perceive it differently, they behave based on how they perceive things, not how things actually are,” Bart says.
Bart and his co-authors, Lily Morse of the University of Denver and Mike Teodorescu of the University of Washington, tested three scenarios with different conditions of algorithmic fairness.
In the first scenario, participants were informed that the algorithm was designed to follow rules of fairness through ignorance.
“We say that algorithms do not consider candidates based on race, gender, age, and other legally protected characteristics,” Bart says. “So they are ‘blind,’ and we explained to them that by remaining blind, algorithms are ensuring that candidates are treated equally.”
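A minimal Python sketch of what such a “blind” preprocessing step could look like; the field names and the candidate record are illustrative assumptions, not details from the study:

```python
# "Fairness through ignorance" (also called fairness through unawareness):
# the screening model never sees legally protected attributes.
# Field names below are hypothetical, for illustration only.

PROTECTED = {"race", "gender", "age"}

def blind(candidate: dict) -> dict:
    """Return a copy of the candidate record with protected fields removed."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED}

candidate = {"name": "A. Doe", "gender": "F", "age": 41,
             "years_experience": 7, "skills_score": 88}

features = blind(candidate)
# features now contains only name, years_experience, and skills_score,
# so a downstream scoring model cannot condition on the protected fields.
```

Note that dropping protected columns only guarantees the model never reads them directly; correlated features (such as a candidate’s address or graduation year) can still act as proxies, which is one reason the other two fairness criteria exist.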
In the second scenario, candidates were informed that the algorithm was based on demographic parity, or equality. The algorithm continually examines potential disparities based on these protected characteristics to ensure equal outcomes. For example, the algorithm ensures that all candidates have similar rates of selection for an interview, regardless of their gender.
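A short Python sketch of a demographic-parity check in this spirit; the data, field names, and 5% tolerance are illustrative assumptions:

```python
# Demographic parity: the interview-selection rate should be (approximately)
# equal across groups defined by a protected attribute.
# All field names and records below are hypothetical.

from collections import defaultdict

def selection_rates(candidates, group_key="gender"):
    """Fraction of candidates selected for an interview, per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for c in candidates:
        g = c[group_key]
        totals[g] += 1
        selected[g] += c["selected"]
    return {g: selected[g] / totals[g] for g in totals}

def satisfies_parity(candidates, tolerance=0.05):
    """True when the largest gap between group selection rates is small."""
    rates = selection_rates(candidates).values()
    return max(rates) - min(rates) <= tolerance

pool = [
    {"gender": "F", "selected": 1}, {"gender": "F", "selected": 0},
    {"gender": "M", "selected": 1}, {"gender": "M", "selected": 0},
]
# Both groups are selected at a rate of 0.5, so parity holds for this pool.
```

A system enforcing this criterion would monitor these rates continuously and adjust its decisions when the gap exceeds the tolerance.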
“Fairness is a matter of perception. So the company may think that implementing demographic parity is fair or equitable,” Bart says. “But an average candidate may not agree with that idea.”
The third scenario tested by the researchers is based on the idea of equal opportunity. In this case, the algorithm ensures that candidates with the same qualifications for the position have the same chance of being selected for an interview, regardless of their gender or other protected individual characteristics.
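The difference from demographic parity is the conditioning on qualifications: only equally qualified candidates need equal selection rates. A sketch in Python, with a hypothetical boolean "qualified" flag standing in for whatever qualification measure a real system would use:

```python
# Equal opportunity: among *qualified* candidates, the chance of being
# selected for an interview should be equal across protected groups.
# The "qualified" flag and all records below are illustrative assumptions.

def qualified_selection_rates(candidates, group_key="gender"):
    """Selection rate per group, computed over qualified candidates only."""
    counts = {}  # group -> (qualified total, qualified selected)
    for c in candidates:
        if not c["qualified"]:
            continue  # unqualified candidates do not enter this criterion
        g = c[group_key]
        total, hits = counts.get(g, (0, 0))
        counts[g] = (total + 1, hits + c["selected"])
    return {g: hits / total for g, (total, hits) in counts.items()}

pool = [
    {"gender": "F", "qualified": True,  "selected": 1},
    {"gender": "F", "qualified": False, "selected": 0},
    {"gender": "M", "qualified": True,  "selected": 1},
    {"gender": "M", "qualified": True,  "selected": 0},
]
# Qualified women: 1 of 1 selected (1.0); qualified men: 1 of 2 (0.5).
# The gap between the two rates signals an equal-opportunity violation.
```

Unlike demographic parity, this criterion can accept unequal overall selection rates, as long as the disparity is explained entirely by differences in qualifications.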
The fairness-through-ignorance approach produced the most positive outcome. The other scenarios proved unpopular or did not result in a change in opinion among study participants, compared to the control scenario.
“When we break down the results by gender categories, women and people who identify as non-binary show the most positive effect and account for most of the effects,” Bart says.
Among men, the effect was insignificant, he says; the researchers have not yet identified why, and they hope to shed more light on this in future studies.
Bart’s recommendation to businesses is to consider adopting this fairness-through-ignorance approach.
“Companies might want to take note of this model that shows that the best way to attract candidates to the company is to use the ignorance fairness explanation in their recruiting algorithms,” Bart says. “And, of course, it’s not just an explanation, but something that the company has to comply with.”