PARIS – In a milestone for artificial intelligence, a computer has beaten a human champion at a strategy game that requires “intuition” rather than brute processing power to prevail, its makers said Wednesday.
Dubbed AlphaGo, the system honed its own skills through a process of trial and error, playing millions of games against itself until it was battle-ready, and surprised even its creators with its prowess.
“AlphaGo won five-nil, and it was stronger than perhaps we were expecting,” said Demis Hassabis, the chief executive of Google DeepMind, a British artificial intelligence (AI) company.
A computer capable of defeating a professional human player at the 3,000-year-old Chinese board game known as Go had been thought to be about a decade away.
The clean-sweep victory over three-time European Go champion Fan Hui “signifies a major step forward in one of the great challenges in the development of artificial intelligence — that of game-playing,” the British Go Association said in a statement.
The two-player game is described as perhaps the most complex ever designed, with more configurations possible than there are atoms in the Universe, Hassabis says.
Players take turns placing stones on a board, trying to surround and capture the opponent’s stones, with the aim of controlling more than 50 percent of the board.
There are hundreds of places where a player can place the first stone, black or white, with hundreds of ways in which the opponent can respond to each of these moves and hundreds of possible responses to each of those in turn.
“But as simple as the rules are, Go is a game of profound complexity. There are 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 possible positions,” Hassabis explained in a blog.
Such a search space is “too enormous and too vast for brute force approaches to have any chance,” added his colleague David Silver, who co-authored the paper in the science journal Nature.
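The scale involved can be illustrated with a quick back-of-envelope calculation. The figures below are commonly cited rough estimates for Go, not numbers taken from the article: a 19x19 board offers 361 intersections for the first stone, an average turn has roughly 250 legal moves, and a game runs about 150 moves.

```python
# Back-of-envelope sketch of why brute force fails at Go.
# All figures are rough, commonly cited estimates (assumptions),
# not values from the article.
board_points = 19 * 19   # 361 intersections for the first stone
branching = 250          # approximate legal moves per turn
depth = 150              # approximate moves in a full game

game_tree_size = branching ** depth
print(board_points)                  # 361
print(len(str(game_tree_size)) - 1)  # order of magnitude: ~10^359
```

Even at a billion positions per second, exhaustively searching a tree of that size is out of the question, which is why a pruning strategy is needed.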
So the team sought to create an AI system with a “more human-like” approach to a game Hassabis said “is played primarily through intuition and feel.”
AlphaGo uses two sets of “deep neural networks” containing millions of neuron-like connections to reduce the search space to something more manageable.
The first, the “policy network,” narrows the search at each turn to only those moves most likely to lead to a win.
The second, the “value network,” estimates the likely winner from each candidate position, “rather than searching all the way to the end of the game,” said Silver.
“AlphaGo looks ahead by playing out the remainder of the game in its imagination many times over,” he explained.
“The search process itself is not based on brute force, it’s based on something more akin to imagination.”
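The division of labour Silver describes can be sketched in miniature. The toy functions below are hypothetical stand-ins for the two networks, not DeepMind's actual code: a “policy” proposes a handful of promising moves, and each is scored by averaging a “value” estimate over several imagined playouts rather than searching to the end of the game.

```python
import random

def policy_top_moves(position, k=3):
    # Hypothetical policy network: proposes the k most
    # promising moves (here simply moves 0..k-1).
    return list(range(k))

def value_estimate(position):
    # Hypothetical value network: returns an estimated
    # win probability in [0, 1] for a position.
    return random.random()

def playout_score(position, move, n_playouts=50):
    # Score a move by averaging value estimates over many
    # imagined continuations, instead of exhaustive search.
    return sum(value_estimate((position, move))
               for _ in range(n_playouts)) / n_playouts

def choose_move(position):
    # Search only the moves the policy suggests, and pick
    # the one whose playouts look best.
    moves = policy_top_moves(position)
    return max(moves, key=lambda m: playout_score(position, m))

print(choose_move("empty board"))  # one of the policy's moves: 0, 1 or 2
```

The key point is that the policy prunes the search in breadth and the value estimate prunes it in depth, which is what makes the otherwise hopeless search tractable.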
AlphaGo was first trained on 30 million moves from games played by human experts, and then left to do some self-coaching.
It played “thousands and thousands of games between its neural networks, gradually improving them using a trial-and-error process known as reinforcement learning,” said Silver.
The result: The value networks are able to “very accurately” estimate the eventual winner from any Go position, “a problem that was so hard it was believed to be impossible.”
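The trial-and-error principle behind that training can be shown with a deliberately tiny example. Everything below is an illustrative assumption, not AlphaGo's setup: a toy “game” with three possible moves, one of which wins far more often, and preference weights that are reinforced after each win and weakened after each loss.

```python
import random

# Minimal reinforcement-learning sketch (illustrative only).
random.seed(0)
weights = [1.0, 1.0, 1.0]  # preference for each of three moves

def play_game(move):
    # Toy game: move 2 wins 80% of the time, the others 20%.
    return random.random() < (0.8 if move == 2 else 0.2)

for _ in range(2000):
    # Trial: pick a move in proportion to current preferences...
    move = random.choices([0, 1, 2], weights=weights)[0]
    # ...and error: reinforce on a win, weaken on a loss.
    weights[move] *= 1.01 if play_game(move) else 0.99

best = max(range(3), key=lambda m: weights[m])
print(best)  # with this seed, the rewarding move (2) wins out
```

AlphaGo's networks are vastly larger and its updates more sophisticated, but the loop is the same in spirit: play, observe the outcome, and nudge the system toward what worked.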
AlphaGo was tested against the best existing Go programs, and won all but one of its 500 games, even when giving its opponents free moves as a head start.
Then last October, it beat Fan Hui.
Tanguy Chouard, a Nature editor, described the feat as an “historical milestone” in AI development, which lies “right at the heart of the mystery of what intelligence is.”
Computer games serve as a testing ground for AI developers seeking to invent smart and flexible algorithms that can tackle problems in ways similar to humans.
The first game mastered by a computer was noughts and crosses (tic-tac-toe) in 1952, followed by checkers in 1994, and the famous victory by IBM supercomputer Deep Blue over chess champion Garry Kasparov in 1997.
In 2014, another DeepMind system called DQN taught itself to play 49 different video games, and beat human professionals at those.
But Go has proven tough, and until now, computers could only play as amateurs.
“In the game of Go, we need this amazingly complex, intuitive machinery which people previously thought was only possible within the human brain, to even have an idea of who’s ahead and what the right move is,” said Silver.
The technology may prove useful in making smarter smartphones, and improving medical diagnostics or climate change models, said the team.
AlphaGo’s next challenge will be in March, in Seoul, against Go world champion Lee Sedol of South Korea, who has held the crown for a decade.
“I have heard that Google DeepMind’s AI is surprisingly strong and getting stronger, but I am confident that I can win at least this time,” he said in a statement.