Many Japanese companies have already shifted to online interviews and seminars for recruiting new employees due to the coronavirus pandemic, but some have gone a step further by testing artificial intelligence to efficiently hire talent.
But while companies see the benefits of AI, such as standardizing the hiring process and saving recruiters’ time by automating high-volume tasks, they remain far from relying completely on the technology, concerned that it could yield inappropriate or discriminatory decisions.
“Using AI in screening tens of thousands of applicant resumes has helped us cut total labor time by 75 percent. From May, we have also started implementing AI in assessing videos sent by applicants,” said Tomoko Sugihara, director of recruitment at SoftBank Corp.
“The extra time created thanks to AI allows recruiters to proactively engage with potential candidates in person, build relationships and carefully determine the candidates’ culture fit,” Sugihara said.
The major mobile carrier, which hires more than 1,000 people a year, has trained its AI with data from 1,500 past resumes.
Sugihara said humans still go through resumes and videos that AI has “rejected,” in case promising candidates were overlooked.
“We of course cannot rely on AI for all the processes, but making it learn our recruiting policies and standards for what kind of person we are looking for based on past data has helped us gain objectivity and uniformity in our hiring process,” Sugihara said.
Other companies are also using AI to automate or streamline some part of the recruiting workflow, especially repetitive and high-volume tasks at the initial stages of recruitment. Recruiter chatbots are used to interview applicants and subsequently grade, rank and shortlist candidates.
Some are introducing AI-powered video analysis software that assesses a candidate’s word choices, speech patterns and facial expressions to judge whether he or she is suited to the role on offer and to the corporate culture.
Data on past applicants who were not hired, and on employees who have since left the firm, including how successful they were, may also prove useful in hiring decisions and in assigning recruits once they join, human resources officials at companies say.
Major brewer Kirin Holdings Co., which has decided to complete all hiring, including the final interview, online this year to curb the risk of coronavirus infections, also said it will consider utilizing AI technology in future recruiting activities.
“At present, we are not using AI because it may only lead to accepting those that match a certain standard at a time when we are looking to hire diverse personnel,” said Kirin spokesman Keita Sato.
“But we have already been introducing technology in recruiting such as keeping databases of applicants, including evaluations of their interviews, profiles and resumes,” Sato said.
“We share them among interviewers to enhance efficiency and cut labor time. We will consider introducing AI in the future on the basis of this collection of data,” Sato said.
Shinji Kawakami, professor at Business Breakthrough University, said that even before the coronavirus pandemic, companies had become increasingly interested in collecting and analyzing data in the recruiting process in a bid to identify the right candidates from a large pool of applicants.
Manually reading resumes is seen by many firms as extremely time-consuming given the Japanese practice of hiring new graduates en masse in the spring of each business year, which leads to a deluge of resumes, he said.
“They began to think it’s a waste not to use human resources data collected over the years,” Kawakami said. “They also want to make more accurate decisions in hiring, as interviewers’ decisions, influenced by their likes and dislikes, cannot always be trusted.”
Companies can use the data to standardize how candidates’ experience, knowledge and skills are matched to the requirements of the job, which will lead to more productive and loyal employees, he said.
But using AI or machine learning in recruiting requires a careful selection of input data and regular assessments to see if the outcomes match the objectives of the users, analysts said.
“Machine learning is only a tool; it depends on how the user uses it. It can only do what humans can do and nothing more,” said Toshihiro Kamishima, senior researcher at the National Institute of Advanced Industrial Science and Technology.
Kamishima also noted that there are cases where the outcome of machine learning contains unconscious bias even if the training dataset avoids use of sensitive features such as gender, age or race.
For example, if a particular racial group lives in a certain area, inputting data about where they reside would indirectly prompt the computer to learn a racial characteristic.
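The proxy effect Kamishima describes can be demonstrated in a few lines. The following sketch is purely illustrative and not from any company mentioned in the article: it builds synthetic data in which group membership correlates with home district, trains a trivial "model" that only ever sees the district, and shows that the model still reproduces the historical disparity between groups.

```python
# Illustrative sketch (synthetic data, hypothetical setup): a geographic
# feature acting as a proxy for a sensitive attribute that was never
# given to the model.
import random

random.seed(0)

applicants = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    # Assumption for the demo: group B applicants mostly live in "north".
    in_north = random.random() < (0.9 if group == "B" else 0.1)
    district = "north" if in_north else "south"
    # Biased historical labels: group B was hired far less often.
    hired = random.random() < (0.6 if group == "A" else 0.2)
    applicants.append((group, district, hired))

def hire_rate(rows):
    """Fraction of the given applicants who were hired."""
    return sum(hired for _, _, hired in rows) / len(rows)

# The "model" scores by district alone -- group is never used as input.
north = [a for a in applicants if a[1] == "north"]
south = [a for a in applicants if a[1] == "south"]
print(f"hire rate, north: {hire_rate(north):.2f}")
print(f"hire rate, south: {hire_rate(south):.2f}")
# District-based scoring recreates much of the group disparity, because
# district is a near-stand-in for group membership in this data.
```

Dropping the sensitive column is therefore not enough; any feature strongly correlated with it can smuggle the bias back in, which is why Kamishima stresses continuous review.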
“It is important that the system is monitored all the time and the model is reviewed continuously. Keeping and sharing a document that records information such as the intended use, training data and evaluation factors is useful,” Kamishima said. “The key is always to be able to fix the problem when it occurs.”
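The record-keeping Kamishima recommends can be as simple as a structured document kept alongside the model. The sketch below is a hypothetical example of such a record; the field names are assumptions for illustration, not a standard schema or any company's actual practice.

```python
# Hypothetical model-documentation record of the kind Kamishima
# describes: intended use, training data and evaluation factors,
# kept so the system can be audited and fixed when problems occur.
model_record = {
    "intended_use": "initial screening of new-graduate applications",
    "training_data": "resumes and outcomes from past hiring rounds",
    "excluded_features": ["gender", "age", "nationality"],
    "known_proxy_risks": ["home address may correlate with nationality"],
    "evaluation": {
        "metric": "selection rate compared across applicant groups",
        "review_cycle": "quarterly",
    },
}

# A record missing any of these fields cannot support the continuous
# review Kamishima calls for.
required = ("intended_use", "training_data", "evaluation")
assert all(key in model_record for key in required)
print("documented fields:", ", ".join(model_record))
```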
Among unsuccessful examples, Tay, an AI chatterbot released by Microsoft Corp. in 2016, caused controversy when it began to swear and make racist remarks and inflammatory political statements in tweets, prompting the company to terminate the program.
Amazon.com Inc. also reportedly decided to shut down its experimental AI recruiting tool after discovering it discriminated against women.