“Hello and welcome. I can tell you about money exchange, ATMs, opening a bank account or overseas remittance. Which one would you like?”
This greeting is not uttered by a human employee but, instead, a diminutive robot named Nao. Just 58 cm tall, the humanoid will be deployed on a trial basis at one or two branches of Mitsubishi UFJ Financial Group Inc. in April to help customers with their enquiries.
It — I’m really struggling not to say “she” — is just one of an increasing number of robots working in customer-service positions in Japan.
By the time foreign visitors arrive in Tokyo for the Olympic Games in 2020, these multilingual robots may well be commonplace. Nestle Japan, for example, has unveiled plans to employ SoftBank’s emotion-sensing robot called Pepper to sell its coffee machines at up to 1,000 outlets by the end of the year.
So do Nao and her — I mean, its — robot compatriots signal a significant change in robot intelligence?
I’ve written before about how robots are more easily assimilated into society in Japan than in the West.
Many put this down to the fact that Japan has been founded on Shintoism and, as a result, the lack of spiritual distinction between animate and inanimate objects means people in Japan don’t have problems accepting robots as living entities. In other words, Japanese folk simply aren’t weirded out by robots.
This has fascinating implications for how we decide whether advanced machines can think, or even pretend to think. It’s something that a lot of scientists spend a lot of time wondering about, the heart of which can be traced back to the Turing test.
Conceived by computer scientist Alan Turing in the 1950s, the test holds that a computer could be said to “think” if a human interrogator couldn’t tell it apart from another person through conversation. The test has since become something of a sport, with tournaments evaluating how convincingly computer programs can pass themselves off as human beings.
Last year, Eugene Goostman — purportedly a teenager from Ukraine but, in fact, a piece of software called a chatbot — made headlines when it supposedly passed the Turing test. It hadn’t — it was just very good at chatting — but it won’t be long until a machine becomes so good at chatting that we may be forced to accept that thing as a real person.
I don’t think we’re far away — at least not emotionally. I already find myself wanting to refer to the robot Nao as a female. Many Japanese who currently spend a lot of time with robots — and here I’m thinking particularly of people in hospitals and care homes — tend to refer to them affectionately and add the suffix “chan” when talking to or about their robots.
Let’s say technology improves to a point where we can say that robots really do seem to be thinking, and not just chatting about hamsters (one of Eugene’s favorite subjects) but properly thinking. No doubt we will form bonds with these machines. The question then becomes: If these machines are good enough to really mimic human thought, should they be considered humans and, if so, should they possess the same rights as us?
Funnily enough, these very issues are currently being explored by writers and directors as well as neuroscientists worldwide.
Alex Garland’s “Ex Machina” tackles precisely this issue. The U.K. writer, who wrote “The Beach” in the 1990s before going on to write screenplays for “28 Days Later” and Kazuo Ishiguro’s “Never Let Me Go,” makes his directorial debut with a film that really only has three characters: Caleb, Nathan and Ava. Caleb’s a coder who wins a chance to spend a week at a house belonging to his billionaire boss, Nathan, to take part in an unusual kind of Turing test.
Unusual, it transpires, is a bit of an understatement. The machine attempting to pass itself off as human is Ava — a devastatingly sexy robot. This is no mere humanoid, but a full-on gynoid — a female android — with curves in the right places and a beautiful face. The film’s title derives, of course, from the Latin “deus ex machina,” which means “the god from the machine.”
While Ava is an unusual machine to use in a regular Turing test, her depiction reflects the all-too-familiar way feminized robots are portrayed in film.
In such movies, gynoids are almost always sexbots. Check out the character of Motoko Kusanagi in the anime “Ghost in the Shell.” (The movie is currently being remade in Hollywood, starring Scarlett Johansson.) You might even remember that the female replicant in “Blade Runner,” Pris, is also a sexbot — albeit one who can turn violent in a heartbeat.
“Ex Machina” recalls “Blade Runner” in several ways, not least the way a male character is asked to interrogate a beautiful female character to determine if she is “real” or not. Is Ava a conscious human being? Is she a very good simulation of consciousness, or does she have that “extra spark” that humans have?
And in a single question like that, I hope it’s clear what blind alleys the study of consciousness can lead us down.
I don’t think there is an “extra spark” that differentiates humanity from machines — a robot smart enough to fake consciousness might as well be considered to be conscious.
In “Ex Machina,” Caleb queries the setup of the Turing test he has to perform on Ava (the machine is hidden from view in a normal Turing test, but here the robot is in plain sight). Yes, says Nathan. He is so confident of Ava’s intelligence that he feels he can reveal the fact that she is a robot.
I won’t give away what happens, but I wonder if intelligent robots are poised to be embraced in Japan before anywhere else in the world, not only because of the country’s proven prowess in robotics but because its people are more ready to accept the machines.
Rowan Hooper is the news editor of New Scientist magazine. The second volume of Natural Selections columns translated into Japanese is published by Shinchosha at ¥1,500. The title is “Hito wa Ima mo Shinka Shiteru (The Evolving Human).”