AI in hiring might do more harm than good

The use of artificial intelligence in the hiring process has increased in recent years, with companies turning to automated assessments, digital interviews, and data analytics to parse resumes and screen candidates. But as IT strives for better diversity, equity, and inclusion (DEI), it turns out AI can do more harm than good if companies aren’t strategic and thoughtful about how they implement the technology.

“The bias usually comes from the data. If you don’t have a representative data set, or any number of characteristics that you decide on, then of course you’re not going to be properly finding and evaluating applicants,” says Jelena Kovačević, IEEE Fellow, William R. Berkley Professor, and Dean of the NYU Tandon School of Engineering.

The chief issue with AI’s use in hiring is that, in an industry that has been predominantly male and white for decades, the historical data on which AI hiring systems are built will ultimately have an inherent bias. Without diverse historical data sets to train AI algorithms, AI hiring tools are very likely to carry the same biases that have existed in tech hiring since the 1980s. Still, used effectively, AI can help create a more efficient and fair hiring process, experts say.

The dangers of bias in AI

Because AI algorithms are typically trained on past data, bias in AI is always a concern. In data science, bias is defined as an error that arises from faulty assumptions in the learning algorithm. Train your algorithms on data that doesn’t reflect the current landscape, and you will derive erroneous results. As such, in hiring, especially in an industry like IT that has had historical issues with diversity, training an algorithm on historical hiring data can be a big mistake.

“It’s really hard to ensure a piece of AI software isn’t inherently biased or has biased effects,” says Ben Winters, an AI and human rights fellow at the Electronic Privacy Information Center. While steps can be taken to avoid this, he adds, “many systems have been shown to have biased effects based on race and disability.”

If you don’t have appreciable diversity in your data set, then it’s impossible for an algorithm to know how individuals from underrepresented groups would have performed in the past. Instead, your algorithm will be biased toward what your data set represents and will compare all future candidates to that archetype, says Kovačević.
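To make that mechanism concrete, here is a minimal, hypothetical sketch in Python (using pandas and scikit-learn, with invented column names and toy data, not any vendor’s actual system): a model fit on historically skewed hiring records learns group membership as a proxy for past outcomes, so it scores the candidate who matches the historical archetype higher even when qualifications are identical.

```python
# Minimal, hypothetical sketch: invented toy data, not a real hiring system.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy historical hiring records: past hires skew heavily toward one group.
history = pd.DataFrame({
    "years_experience": [5, 7, 6, 8, 4, 5, 6, 7],
    "group_a":          [1, 1, 1, 1, 0, 0, 0, 0],  # 1 = historically overrepresented group
    "hired":            [1, 1, 1, 1, 0, 0, 1, 0],  # outcomes reflect past bias
})

# Train on the historical outcomes, group membership included as a feature.
model = LogisticRegression()
model.fit(history[["years_experience", "group_a"]], history["hired"])

# Two new candidates, identical except for group membership.
candidates = pd.DataFrame({"years_experience": [6, 6], "group_a": [1, 0]})
print(model.predict_proba(candidates)[:, 1])
# The candidate matching the historical "archetype" gets the higher score,
# even though experience is the same.
```

The same effect shows up even if the group column is dropped, whenever other features correlate with group membership, which is why simply removing protected attributes does not remove the bias.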

Copyright © 2021 IDG Communications, Inc.