LOS ANGELES – Since graduating from a U.S. university four years ago, Kevin Carballo has lost count of the number of times he has applied for a job only to receive a swift, automated rejection email — sometimes just hours after applying.
Like those of many job seekers around the world, Carballo’s applications are increasingly being screened by algorithms built to automatically flag attractive applicants to hiring managers.
“There’s no way to apply for a job these days without being analyzed by some sort of automated system,” said Carballo, 27, who is Latino and the first member of his family to go to university.
“It feels like shooting in the dark while being blindfolded — there’s just no way for me to tell my full story when a machine is assessing me,” Carballo, who hoped to get work experience at a law firm before applying to law school, said by phone.
From artificial intelligence programs that assess an applicant’s facial expressions during a video interview, to resume screening platforms predicting job performance, the AI recruitment industry is valued at more than $500 million.
“They are proliferating, they are fast, they are relatively cheap — they are everywhere,” said Alex Engler, a fellow at the Brookings Institution who studies AI in hiring.
“But at this point there’s very little incentive to build these tools in a way that’s not biased,” he added, saying the cost and time involved in thoroughly testing a system for bias was likely to be prohibitive without regulations requiring it.
For Carballo, racial bias is a top concern.
“I worry these algorithms aren’t designed by people like me, and they aren’t designed to pick people like me,” he said, adding that he has undergone a range of AI assessments — from video analytics to custom logic games.
The risk of discrimination is also a central issue for lawmakers around the world as they weigh how to regulate the use of AI technology, particularly in the labor market.
The EU is set to impose rules on the use of AI in hiring, and U.S. lawmakers are considering federal laws to address algorithmic bias. Last year, legislators in New York City proposed a law specifically to regulate AI in hiring.
“We’re approaching an inflection point,” Engler said.
‘It’s a minefield’
According to the most recent survey by human resource industry group Mercer, more than 55% of HR managers in the United States use predictive algorithms to help them make hiring choices.
AI is being introduced at every stage of the hiring pipeline, from the job adverts that potential applicants see to the analysis and assessment of their applications and resumes.
The COVID-19 pandemic has sped up the adoption of such tools. HireVue, an AI hiring firm that builds tools to analyze and score the answers job applicants give in video interviews, reported a 46% surge in usage this year compared with last.
The rise in AI could represent a real opportunity to root out prejudice in the hiring process, said Manish Raghavan, a computer scientist at Cornell University who studies bias in hiring algorithms.
“No one is going to tell you that traditional hiring was equitable,” he said. “And with AI systems we can test them in ways we could never test or audit people’s own biases.”
Subjecting all candidates to the same interview, judged by the same algorithm, eliminates the subjectivity and bias of people in hiring, said Kevin Parker, chief executive of HireVue.
“We can measure how men and women score, and compare how people of color score against white candidates,” he said. “We really try to fine-tune the algorithm to eliminate anything that can cause adverse impact, and come to very close parity.”
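Parity checks of the kind Parker describes are often framed in terms of the U.S. EEOC’s “four-fifths” rule of thumb, which compares selection rates across groups. A minimal sketch of such a check — with hypothetical numbers, and not HireVue’s actual methodology — might look like:

```python
# Generic adverse-impact (four-fifths rule) check.
# Illustrative only: the group names and counts are hypothetical,
# and this is not HireVue's actual audit procedure.

def selection_rate(passed: int, total: int) -> float:
    """Fraction of applicants in a group who passed the screen."""
    return passed / total

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the reference group's.
    Under the EEOC's four-fifths rule of thumb, a ratio below 0.8
    is commonly treated as evidence of adverse impact."""
    return group_rate / reference_rate

# Hypothetical screening outcomes for illustration.
women_rate = selection_rate(45, 100)   # 0.45
men_rate = selection_rate(60, 100)     # 0.60

ratio = adverse_impact_ratio(women_rate, men_rate)  # 0.75
print(f"impact ratio: {ratio:.2f}, flagged: {ratio < 0.8}")
```

A vendor aiming for the “very close parity” Parker mentions would tune the model until such ratios approach 1.0 for every protected group.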
But the problem, Raghavan said, is that when you build a machine learning algorithm, bias can creep into it in many ways that are difficult to detect.
Engler echoed that view.
“Natural language processing systems have been shown to associate white names as being more qualified. Resume screening systems have been shown to weed out all applicants who went to a women’s college,” he said.
“It’s a minefield,” he added.
For job seekers like Carballo — who belong to ethnic minorities and have disadvantaged backgrounds — automated tools can easily reinforce patterns of discrimination, Raghavan said.
In 2017, Amazon stopped using an AI resume screener after discovering it penalized resumes that included the word “women’s,” automatically downgrading graduates of all-women’s colleges.
Because applicants often have no way of understanding how they were scored, they are left wondering if bias crept in, Carballo said.
“I’m a first generation college student, I’m Latino and I didn’t go to a top university — and every time I get a rejection, I wonder if the system was designed to weed someone like me out.”
Audit, regulate or ban?
The industry is eager to be perceived as fighting bias, Raghavan said, citing his own research showing that 12 of the 15 largest vendors of AI hiring tools have announced some efforts to tackle discrimination.
But Engler said there was currently little incentive for companies to invest significant resources in detecting and rooting out bias, as regulators are not yet cracking down.
That could start to change, however, as policymakers begin to take a look at the industry.
Regulatory proposals being considered by the European Parliament would designate AI used in hiring as “high-risk,” meaning any companies selling such systems would have to be included in a public database.
It would also impose requirements on firms selling such tools in the EU, such as ensuring datasets are “relevant, representative, free of errors and complete,” according to Daniel Leufer, an analyst at digital rights group Access Now.
Leufer said the draft regulations do not go far enough, calling for a blanket ban on certain AI tools in hiring, including any that use biometric information such as facial movements or voice tone.
“The length of my nose, how I speak, the way I move my mouth — we should not allow people to make inferences about someone’s job performance from these kinds of inputs,” he said.
In New York City, the city council is considering a law that would regulate the AI hiring industry and compel companies to do their own audits for bias, but critics fear it will not be sufficient to rein in discrimination.
“One flawed algorithm can impact hundreds of millions of people,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project (STOP), who wants a freeze on AI in hiring pending further bias investigations.
STOP and 11 other digital and civil rights groups sent a letter to New York City Council late last year asking for stronger protections, including allowing applicants who were discriminated against to file lawsuits.
“We need to press pause until we are able to come up with effective regulatory structures to block AI bias and discrimination,” Cahn said.
In April, after working a string of short-term temporary jobs over the past year, Carballo finally got a full-time job at a law firm. The hiring manager interviewed him without the use of an AI screener.
“I think that made a difference — I wasn’t just a guy from a rough neighborhood, with a Spanish last name,” he said. “I was able to make an impression.”