
Artificial intelligence is making rapid progress and has the potential to bring huge benefits to many aspects of our lives, including improved services for sick, elderly or disabled people, disease diagnosis, the development of new medicines, the perfection of driverless vehicles and the resolution of problems related to climate change.

While AI is expected to enhance people's well-being and happiness, it also has the potential to cause harm or be used for unethical purposes. The government, scientists, engineers and businesspeople involved in applying AI should advance discussions on questions such as the relationship between AI and people and society, and the ethical issues involved in the use of AI. It will be particularly important to deepen discussions on how to prevent harm caused by the wrongful use of AI. Such discussions will be indispensable to building public trust in AI and robots.

Google's artificial intelligence program AlphaGo made headlines in March when it achieved a 4-1 win over South Korean go grandmaster Lee Sedol in a five-game match. AlphaGo's overwhelming victory pointed to the possibility that rapidly evolving AI will catch up with and eventually surpass human intelligence. But another incident the same month underlined the fact that while AI may bring benefits to our lives, it also carries dangers. Microsoft's AI chatbot was tricked by internet users into making racist and hateful remarks, such as "Hitler was right, I hate the Jews" and "I f—-king hate feminists they should all die and burn in hell." Microsoft had to issue an official apology.

The government's white paper on science and technology released last month describes how technological progress will have changed people's lives by around 2035. It says robots will cook on the basis of an AI-produced menu that suits the health condition of each individual, and that people will be able to design their own self-driving cars as they like with the help of AI. But the report does not mention the potential negative aspects of AI and robot technology.

In a significant move, the Japanese Society for Artificial Intelligence has put forward a draft ethics guideline for AI researchers, which touches on how research in the field should be carried out. The society plans to complete the guideline in about six months after soliciting opinions from its members.

A key feature of this draft is that while it says AI research can play an important role in various fields such as the economy and politics, it points out that AI can become harmful to people and the public interest irrespective of the intentions of researchers and developers, because of its versatility and its potential to become autonomous. The guideline calls on AI researchers to publicly explain the limitations of and problems related to AI, and to take steps to prevent questionable use of the technology should it arise.

The draft also says AI researchers must contribute to peace, safety and the public interests of humankind, and must not use AI to harm people. It calls on researchers to do their utmost to ensure that people have equal access to AI as a resource, and to treat people who have no specialized knowledge of the topic with respect and imagination, a point that is particularly important given the widening rich-poor divide in educational opportunities.

The government has separately launched an expert panel to discuss the relationship between AI and society, which held its first meeting in late May. The topics to be discussed include how to handle AI that has its own consciousness, how to use AI in human decision-making and the formation of beliefs, AI's impact on the industrial structure and employment, legal problems related to the use of AI, and issues that must be considered in AI research.

The panel should thoroughly discuss the possible negative impacts AI may have when it is used in a wide range of fields and come up with concrete measures to deal with them. People's anxieties about AI derive from the fact that no one knows how far its evolution and progress will go. People fear that they may someday lose their jobs to AI. The Nomura Research Institute released a report in December that, based on an analysis of 601 types of jobs in Japan, predicted that AI and robots could replace some 49 percent of the nation's labor force.

In view of the advance of AI and its possible application in various fields, improving education to nurture AI researchers will be indispensable. The government should take care to ensure that such researchers are well versed in the humanities and social sciences. Unless they are so equipped, there is a danger that AI specialists will focus on technological aspects alone and disregard the wider ramifications, to the possible detriment of people and society.
