Under a new law that goes into effect in New York City in early 2023, employers will not be allowed to use artificial intelligence to screen job applicants unless the technology has undergone a bias audit.
Algorithmic discrimination in hiring has also been the target of state laws in Illinois and Maryland, and the federal Equal Employment Opportunity Commission recently set up a working group to address the issue.
The internet has made applying for jobs easier than ever, but it has also made the process less human, said Joseph Fuller of Harvard Business School.
“When you turn the tap on, all of a sudden you have a lot of applications, and no one is going to print out 250 résumés,” he said.
As a result, most large companies use some sort of automated recruiting system that applies algorithmic filters to narrow the candidate pool. “If you don’t have that, you’re out,” Fuller said.
Those filters can screen for anything from years of experience to word choice. Lindsey Cameron of the Wharton School said companies are also increasingly using automated video interviews.
“And it monitors your tone of voice and facial expressions as best it can and, you know, the depth and quality of your responses,” she said.
Which, while maybe a little creepy, isn’t necessarily bad, she said. Automated systems have the potential to sidestep some human bias, but too often bias is built into the technology itself, said Nicol Turner Lee, director of the Center for Technology Innovation at the Brookings Institution.
“Computers are programmed by humans, and so they’re endowed with the same values, norms and assumptions that humans have,” she said.
A few years ago, Amazon reportedly scrapped an AI recruiting system it had been using over concerns about gender bias. Turner Lee said the algorithm had been trained on historical data from successful candidates.
“Because the data it was trained on came from men, any résumé that suggested a woman’s name, a women’s college or a women’s extracurricular activity, like the women’s lacrosse team, was screened out,” she said.
Likewise, facial recognition software can put people with darker skin at a disadvantage if the algorithms were trained on white faces. Turner Lee said more oversight is needed to ensure AI systems comply with civil rights laws.