Artificial intelligence (AI) may not be the answer to making the workplace more diverse, some experts say. 

  • AI in hiring is becoming increasingly common as companies try to increase diversity.
  • A new paper argues that AI can make organizations less diverse.
  • Experts say that AI often uses biased data when making decisions.

Researchers from the University of Cambridge recently published a paper criticizing the use of AI to boost workplace diversity. The scientists called many AI hiring practices an “automated pseudoscience.” The paper adds to a growing concern that AI may reinforce bias in many areas. 

“The biggest issue is that AI systems are great at observing and identifying patterns, which can propagate biases with respect to gender, race, etc. present in historical data, or even generate entirely new biases not present in the data,” Pulkit Grover, a professor in the Department of Electrical and Computer Engineering at Carnegie Mellon University, who was not involved in the research, told Lifewire in an email interview. 

Hiring Via AI

Using AI to increase diversity is a method that’s becoming more popular. Proponents of the technology say that using AI can remove biases in hiring because the system doesn’t base its results on things like gender, race, and accents. But the Cambridge researchers claim in their paper that even if you remove ‘gender’ or ‘race’ as distinct categories, the use of AI may increase uniformity in the workforce. 
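To make that concern concrete, here is a minimal, hypothetical sketch (not from the Cambridge paper) of how a model trained on biased historical hiring decisions can still reproduce that bias through a correlated “proxy” feature, even when the protected attribute itself is excluded. All feature names and numbers below are invented for illustration.

```python
# Illustrative sketch with hypothetical data: dropping the protected attribute
# does not prevent a model from learning bias via correlated proxy features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)              # protected attribute (0/1); never shown to the model
proxy = group + rng.normal(0, 0.3, n)      # e.g., a zip code or school feature correlated with group
skill = rng.normal(0, 1, n)                # genuinely job-relevant signal

# Historical hiring decisions were biased against group 1.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, proxy])        # protected attribute excluded from the training features
model = LogisticRegression().fit(X, hired)

preds = model.predict(X)
print("Predicted hire rate, group 0:", preds[group == 0].mean())
print("Predicted hire rate, group 1:", preds[group == 1].mean())
# The gap between groups persists: the model absorbed the bias through the proxy.
```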

“We are concerned that some vendors are wrapping ‘snake oil’ products in a shiny package and selling them to unsuspecting customers,” the paper’s co-author Eleanor Drage said in a news release. “By claiming that racism, sexism, and other forms of discrimination can be stripped away from the hiring process using artificial intelligence, these companies reduce race and gender down to insignificant data points rather than systems of power that shape how we move through the world.”

Trying to Make Things Fairer

Companies are turning to AI in the hiring process for a good reason, Kareem Saleh, the CEO of FairPlay AI, a company that provides fairness optimization software, told Lifewire via email. He noted that studies have shown that resumes with Latinx-, African-, or Asian-sounding names are up to 33% less likely to receive an interview request than resumes with identical qualifications and “non-ethnic” names.

“Existing hiring practices like subjective resume reviews and unstructured interviews are more likely to result in less fair outcomes for diverse candidates,” he added. 

AI often screens for experience, but that’s not always the best way to find a good candidate, Saleh said. He noted that someone who has run a large science lab has plenty of experience, which is not highly relevant to running a major part of a commercial firm.

“Relevant experience can also be highly biased,” he added. “The availability of certain kinds of certificates/degrees is highly class-related, and class in America is correlated with race.”

It’s a big mess, and it’s getting worse.

Maya Huber, the CEO of TaTiO, a startup that uses AI to increase diversity and minimize bias in the hiring process, told Lifewire via email that it’s becoming increasingly common for companies to use AI for recruiting talent and expanding the diversity pool. However, most of these systems rely on resumes and analyze them with AI rather than using AI for sourcing or assessment tools.

“We need to move forward from analyzing credentials to analyzing competencies,” Huber said. “When focusing on competencies and performance, recruiters can use more objective and relevant data to avoid bias.”

Lawmakers are starting to take notice of the problems with AI in hiring. Current laws in the US that govern hiring, such as the Civil Rights Act of 1964, are not designed for a world in which an AI algorithm scours every word in your CV, Grover said.

Grover said AI scientists need to be more careful in how they design the models used to screen job candidates. And, he said, researchers need to examine whether bias in AI systems has already made hiring more problematic. 

“Biases against minorities are likely to increase because many minority candidates will be screened out by AI designed to be the most accurate on biased data,” he added.

Correction 10/24/22: Paragraph 8 was updated to “not highly relevant” from the original quote.
