The overwhelming win by Google’s artificial intelligence program AlphaGo over South Korean go grandmaster Lee Sedol in a five-game match this month has shown that machine intelligence is evolving rapidly, and has underlined the possibility that it will catch up with and eventually surpass human intelligence. The time has come to think about how best to use AI in ways that will contribute to — and not detract from — our well-being.

In the match held in Seoul, the program built by Google subsidiary DeepMind defeated Lee, a 33-year-old 9-dan professional go player with 18 world titles, 4-1. Google had chosen Lee as the opponent in view of his impressive record, considering him the world’s strongest player of the board game. The outcome stunned go players, programmers and the public alike — experts had previously expected it would take more than 10 years for an AI program to beat a world-class professional go player. It was only last October that AlphaGo beat the three-time European go champion Fan Hui by a score of 5-0 — the first victory by a computer program over a professional player.

Before the match began, Lee predicted he would win a sweeping victory over the AI program. He then suffered three straight defeats, salvaging only a single win in game 4. The news surprised Japan’s go masters, who had not expected the Google program to be that good. Previously, even the best computer programs were given a handicap of four stones placed on the board in advance in matches against professional players.

Computer programs have come a long way in challenging — and defeating — human players in board games. World chess champion Garry Kasparov beat the IBM supercomputer Deep Blue 4-2 in 1996, but the machine defeated Kasparov 3½-2½ in a rematch the following year. Computer programs then began to beat professional players of shogi (Japanese chess), which had been believed to be harder for computers because the game has more possible patterns of development than chess, as players can reuse pieces captured from their opponents.

Go was thought to be the last bulwark among board games for professionals playing against computer programs. Go involves placing white and black game pieces called stones on a board with a 19×19 grid of lines. Players arrange the stones to create territories by setting boundaries, and can capture their opponent’s pieces by surrounding them. The number of possible game developments in go is said to exceed 10 to the power of 360 — far more than chess’s 10 to the power of 123. Through experience and intuition, go masters develop the ability to work out the best strategy amid this near-infinite complexity. Developing such an ability was long thought to be a major challenge for AI programs.
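The scale gap between the two games can be illustrated with a back-of-envelope calculation. The snippet below (an illustrative sketch, not anything from DeepMind) bounds the number of go board configurations: each of the 361 intersections can be empty, black or white.

```python
import math

# Rough upper bound on go board configurations: each of the 361
# intersections on the 19x19 grid is empty, black or white. Note this
# counts positions, not game sequences; the 10^360 figure cited above
# refers to ways a game can unfold, which is far larger still.
go_positions = 3 ** (19 * 19)
print(f"3^361 is about 10^{math.log10(go_positions):.0f}")  # about 10^172
```

Even this deliberately crude position count already dwarfs the estimated number of chess games, which is why brute-force search alone could not crack go.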

But the technology known as deep learning, combined with the number-crunching power of supercomputers, has overcome this barrier. Deep learning uses neural networks loosely modeled on the human brain to learn from vast amounts of data and sharpen its judgment — analogous to a baby learning a language by being exposed to it over time. Google announced in 2012 that its Google Brain project, using deep-learning technology, learned to recognize a cat after being fed a large number of cat images — unlike earlier-generation AI programs, which relied on humans first inputting a definition of a cat. Deep learning is considered a breakthrough in the 50-year history of AI development. Examples of its application in daily life include the voice-recognition functions of smartphones.
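The learning-from-examples idea can be seen in miniature below. This is a toy sketch of my own, not Google’s code: a single artificial neuron learns the logical AND function from four labeled examples by gradient descent, with no rule for AND ever written in. Deep learning stacks millions of such units into many layers.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Four labeled examples of logical AND stand in for the "vast amount
# of data" a real system would be fed.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# One artificial neuron: two weights and a bias, initialized at random.
w1, w2, b = random.random(), random.random(), random.random()

for _ in range(5000):  # repeated exposure to the examples
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        grad = (out - target) * out * (1 - out)  # squared-error gradient
        w1 -= 0.5 * grad * x1
        w2 -= 0.5 * grad * x2
        b -= 0.5 * grad

for (x1, x2), target in data:
    print((x1, x2), "->", round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

After training, the neuron’s rounded output matches all four targets — it has inferred the pattern from the data alone, which is the essence of the approach the article describes.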

Fed the records of 100,000 matches played by professional players, AlphaGo learned how go is played, then played 30 million games against itself. In an amazingly short period it thus became a player strong enough to beat professional masters. Part of its strength is that it does not tire and has no emotions to cloud its judgment.
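The self-play idea can be sketched on a toy game. AlphaGo’s actual pipeline — deep neural networks plus Monte Carlo tree search — is far more elaborate; the game, parameters and update rule below are my own illustrative choices. An agent plays both sides of the 21-stone subtraction game (take 1-3 stones per turn; whoever takes the last stone wins) and nudges its value estimates toward each game’s outcome.

```python
import random

random.seed(0)

N = 21                        # stones on the table at the start
value = {n: 0.0 for n in range(1, N + 1)}  # estimate for the player to move
value[0] = -1.0               # no stones left: the player to move already lost

def best_move(n):
    # Greedy policy: leave the opponent in the worst position we know of.
    return min(range(1, min(3, n) + 1), key=lambda m: value[n - m])

for _ in range(20000):        # the agent plays both sides of every game
    n, history = N, []
    while n > 0:
        # Mostly play greedily, but try a random move 20% of the time.
        m = random.randint(1, min(3, n)) if random.random() < 0.2 else best_move(n)
        history.append(n)
        n -= m
    result = 1.0              # whoever moved last took the last stone and won
    for n in reversed(history):
        value[n] += 0.1 * (result - value[n])  # nudge toward the outcome
        result = -result      # the outcome flips sign for the other player

print(sorted(n for n in range(1, N + 1) if value[n] < 0))
```

Game theory says positions where the stone count is a multiple of 4 are losses for the player to move; purely by playing itself, the sketch drives the values of those positions negative. No self-play run is needed for such a tiny game — the point is only that strength emerges from games against oneself, as the article describes.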

Heated discussions are underway among experts on the so-called 2045 problem — a proposition put forward by American physicist, futurist and author Louis Del Monte that by 2045 “the top species will no longer be humans, but machines,” because rapid progress in AI will allow AI programs to start creating better software on their own, enabling artificial intelligence eventually to outmatch human intelligence.

AI has huge potential to bring benefits to our lives, including automatically averting dangers while driving, greatly improving disease diagnosis, developing useful drugs by finding optimum compounds, and helping solve problems related to climate change. It also has the potential to make human workers obsolete and deprive them of jobs. Given that AI is evolving more rapidly than we had imagined, we need to start thinking about how to manage the use of machine intelligence so that it increases human well-being. It will be all the more important to study, from various angles, what concrete benefits AI can bring to people and society, and to be fully prepared to prevent adverse effects from emerging.
