Development of beneficial AI holds key to creating a better society: expert

by Takaki Tominaga

Kyodo

Whether you love or hate it, artificial intelligence is here to stay.

The question is — particularly in Japan, which faces a severely aging population and a shrinking workforce — will AI perform tasks that are beneficial to society as a whole?

AI has seen phenomenal success with a variety of tasks, such as performing facial recognition and piloting autonomous vehicles, but there are obvious concerns about the negative impacts of the technology on society, such as people losing their jobs to automation, as well as the dangers of new-generation weapons.

“It is crucial to have design methodologies for AI that are truly beneficial to humans,” Hideki Asoh, deputy director of the Artificial Intelligence Research Center at the National Institute of Advanced Industrial Science and Technology, said in a recent interview.

The beneficial AI movement, promoted by distinguished researchers and leaders in the AI community, including professors Max Tegmark at the Massachusetts Institute of Technology and Stuart Russell at the University of California, Berkeley, is gaining worldwide attention in the discussion of AI safety.

The Future of Life Institute, a charity and outreach organization that aims to ensure technology benefits humanity, has issued an open letter promoting robust and beneficial artificial intelligence, which has been signed by more than 8,000 people, including researchers and renowned entrepreneurs.

“Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls,” the letter reads.

“We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”

In recent years prominent researchers have also signed another open letter expressing their concerns about offensive “autonomous weapons (that are) beyond meaningful human control.”

Autonomous weapons are military systems capable of engaging targets without outside intervention. An example would be an armed quadcopter that can search out and “eliminate” a human target, but cruise missiles or remotely piloted drones, for which humans make targeting decisions, do not fall into the autonomous weapon category, according to the letter.

The letter points out the dangers of “starting a military AI arms race” while noting at the same time that “there are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.”

“I think we will need an agreement to formulate some kind of control mechanism (on autonomous weapons), similar to what we have on nuclear weapons,” Asoh said.

To further develop beneficial AI, it is crucial to figure out how to utilize data and in which fields, according to Asoh. Japan can benefit from effectively employing AI in medicine and health care, as well as in manufacturing.

“For example, in the fields of medicine and health care, the utilization of AI has brought data-driven personalization and customization with reasonable costs,” Asoh said.

Before AI, a traditional one-size-fits-all approach with little consideration for individual differences was common, even though some individuals do not respond well to particular treatments.

Service providers therefore need to conduct trials to figure out which medicines work for which groups of people, in cancer drug treatment, for example.

A vast amount of data has become collectable across various fields due to the recent spread of internet of things services and technology, Asoh pointed out.

If there is data, AI can predict more accurately which treatments and prevention methods for particular diseases will work on certain people, which in the best-case scenario can contain medical costs by cutting wasteful spending, he said.

Regarding self-driving cars, which are already under active development and testing, Asoh said, “I think technically, (AI) has reached a level capable of driving more safely than humans,” while also noting there are roadblocks that need to be cleared for the technology to be widely applied.

“AI does not get tired, and it possesses 360-degree vision. That’s quite an advantage (compared with human drivers),” he said.

In contrast to AI aimed at specific, narrow tasks such as facial recognition and autonomous driving, Russell explained the two concepts of artificial general intelligence and artificial superintelligence on the website of Berkeley’s Department of Electrical Engineering and Computer Sciences.

AGI is a term intended to emphasize the goal of building general-purpose intelligent systems that can be applied to a range of cognitive tasks, much as human intelligence can, while ASI refers to systems substantially superior to human intelligence, according to Russell.

“It is hard to make a clear prediction on this issue as a researcher, but I think it is unlikely that superintelligence completely exceeding humans in all areas will emerge by 2045, as some people are worried, while AI will surpass human intelligence in a narrow range of tasks, as has already happened in some fields,” Asoh said.

Among various types of AI, Asoh suggested Japan should focus more on advancing beneficial AI to address issues such as its aging population.

“Japan’s population is aging quickly and we definitely have to do something to address this issue. Also from the perspective of global competitiveness, we must concentrate our work on this type of AI for sure,” he said.

“It is said that Japan failed in the introduction of IT in the first place — we must not repeat the same mistake (in the field of AI),” Asoh said.