AI is all the rage, but is it dangerous?

AI on the digital map

Artificial Intelligence continues to be a major trend in HR as companies look to improve hiring decisions and efficiency. As a computer scientist and expert on hiring research, I can attest that there are definitely components of hiring that can be improved with AI. One example is using algorithms to automatically remove identifying information from resumes to make identity-blind resume review more efficient. We can also use AI to help companies write better and more inclusive job descriptions that attract a broader pool of qualified applicants. A company concerned with employee turnover could use AI to identify employees who may be likely to leave based on variables like how many managers they’ve had, pay equity, and length of tenure. These are all exciting applications of AI that could make a real difference to a company’s hiring success.
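To make the resume-anonymization idea concrete, here is a minimal Python sketch. The patterns and example text are my own illustrative assumptions; real identity-blind review tools rely on trained named-entity recognition models rather than hand-written rules:

```python
import re

def redact_resume(text):
    """Replace common identifying details with placeholders.

    A toy sketch: production anonymization tools use trained
    named-entity recognition, not a few regular expressions.
    """
    # Email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Phone numbers (very rough North American pattern)
    text = re.sub(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}", "[PHONE]", text)
    return text

print(redact_resume("Contact: jane.doe@example.com, (555) 123-4567"))
# → Contact: [EMAIL], [PHONE]
```

Even this crude version shows why automation helps: the redaction is instant and consistent, where a human reviewer doing the same masking by hand would be slow and error-prone.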

AI and Recruiting

The main place people seem to be interested in using AI in recruiting is in reducing the number of resumes recruiters have to review to get to the best candidate. This makes perfect sense: given how easy it is to apply to a job with one click these days, recruiters are understandably overwhelmed with the number of resumes they receive.

Unfortunately, there is a huge risk that using AI in the recruiting process is going to increase bias rather than reduce it. Why do I sound so pessimistic? Because AI is completely dependent on the training set that is used to generate its predictive results. We’ve already seen how this can go horribly wrong in trying to identify images and create Twitter posts. When it comes to hiring, a critically important function for companies, AI can perpetuate biased patterns, producing teams that look very much like the ones a company already has.

Here’s an example where AI does not serve a company well. Let’s say a corporate hiring manager always looks for candidates who went to Ivy League schools. When an algorithm looks for patterns among the company’s current employees, it will notice that certain schools are more common and will seek candidates from those schools. However, research has shown that where someone went to school is not predictive of how well they will perform in a job. So, the algorithm has now found a “signal” in the data that is not predictive of how well a potential candidate will actually do the job. In this case, AI is simply feeding recruiters “more of the same,” which may not be what your company needs to achieve future goals.
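A toy example makes this hollow signal visible. The data and scoring rule below are invented purely for illustration: a frequency-based model favors the most common school among current employees, even though average performance is identical across schools.

```python
from collections import Counter

# Toy data: (school, performance_rating) for current employees.
# School was the hiring filter; performance is unrelated to it.
employees = [
    ("Ivy", 3), ("Ivy", 5), ("Ivy", 2), ("Ivy", 4),
    ("State", 4), ("State", 3),
]

# A naive "algorithm" scores candidates by how common their school
# is among current employees: it learns the past hiring pattern,
# not anything about job performance.
school_counts = Counter(school for school, _ in employees)

def candidate_score(school):
    return school_counts[school] / len(employees)

def avg_perf(school):
    ratings = [r for s, r in employees if s == school]
    return sum(ratings) / len(ratings)

print(candidate_score("Ivy"), candidate_score("State"))  # Ivy is favored
print(avg_perf("Ivy"), avg_perf("State"))  # but performance is 3.5 vs 3.5
```

The model confidently ranks Ivy candidates higher, yet the performance data shows the preference buys the company nothing.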

Using AI in this way won’t help organizations predict what they need to achieve future goals. AI is essentially “driving in the rearview mirror” – it is based on what has been done in the past. That’s why AI can’t replace recruiters, who bring specific knowledge about the kinds of people and skillsets that will move a company forward.

How to spot potential bias in AI

The possibility of bias in AI training sets won’t occur to many algorithm designers, so it is up to the organizations deploying these algorithms to ask the right questions about what testing has been done to ensure bias was not trained into the algorithm itself. For example, if you’re considering video software that analyzes nonverbal communication to predict candidate quality or a pre-assessment that claims to predict job performance, ask whether there were observed group differences in the training data. If they can’t tell you, think twice about using it.
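One concrete question to ask a vendor is about selection rates by group. Under the EEOC’s “four-fifths” rule of thumb, a selection rate for any group below 80% of the highest group’s rate is a red flag for adverse impact. A minimal sketch of that check, using hypothetical numbers a vendor might share:

```python
def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def adverse_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.

    Under the EEOC "four-fifths" rule of thumb, a ratio below
    0.8 warrants a closer look for adverse impact.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical numbers for illustration only.
rates = {
    "group_a": selection_rate(50, 100),  # 0.50
    "group_b": selection_rate(30, 100),  # 0.30
}

ratio = adverse_impact_ratio(rates)
print(round(ratio, 2), "flag" if ratio < 0.8 else "ok")  # → 0.6 flag
```

A failing ratio doesn’t prove the tool is biased, but it is exactly the kind of observed group difference a vendor should be able to report and explain.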

You’re still smarter than AI

Use AI to augment your hiring wisely. No amount of AI can replace following best practices in hiring, like identifying key skills and values before sourcing candidates and using structured interviewing. Some AI can help improve these best practices and get you closer to your goals, faster. Just make sure you have your eyes open for potential biases along the way.