Like a child eagerly trying to win some trading cards during a playground huddle, I scrunch up my fingers behind my back before unleashing my hand in time-honored fashion with the Japanese phrase: “Saisho wa gu, janken … pon!”
Result: defeat — again!
Never before have I suffered the humiliation of a 100-percent loss record in this game of janken, known to many in the West as “rock-paper-scissors.”
I’ve tried mind games and I’ve tried all the sneaky tricks in the book — that deft, last-second unfurling of thumb and third and little fingers to slyly shift from scissors to paper, or the faint flick of the forefinger to sell my opponent a dummy.
But the truth is, I just cannot read him, her … it.
That’s because my faultless opponent is a robot. More accurately, it’s a robot without a head, torso, legs or arms. It’s just a mechanical hand that rattles in three-four time like a tinny tambourine — but something that has never lost a game of janken in its entire fidgety-digity existence.
The reason is unbearably simple: “It has often been said that humans provide good models for robots,” said Masatoshi Ishikawa — who heads the team responsible for developing the robotic hand and other super-nonhuman devices at the University of Tokyo’s Ishikawa Oku Laboratory. “But the truth is, we are too slow.”
Speed, of course, is all relative. Jamaican sprinter Usain Bolt is pretty swift for a human, but even he loses once in a while. Janken Robot, however, just doesn’t understand the phrase “wooden spoon.”
So how does it do it?
“Some call it cheating,” Ishikawa said with a look of sympathy, presumably for those mugs who have earnestly tried to outwit the unoutwittable. “But I see it as a human weakness; that the human eye is not fast enough to catch out the robot.”
He reasons that it would be seriously lacking in ambition to model the hand on the human equivalent, and looking at the data it’s easy to see why.
There’s a smartphone application called Lightning Tap that times your reflexes according to how quickly you can tap the screen after a virtual bolt of lightning has flashed across it. My best time is 0.19 seconds — about average for someone my age — but if Ishikawa were to program the bot to tap the app it would do so about 70 times faster.
“We are trying to find something to give this technology a bit of a test,” he said, pointing to its digits, each of which costs about as much as a small car. “A bullet, for example.”
The seeds of the “High Speed Vision” technology that facilitates such phenomenal reaction times were sown by Ishikawa and his colleagues two decades ago when they unveiled a “super-vision chip” that was speedy and powerful — but, at 1.2 meters square, a veritable monster compared with today’s microprocessors, some of which are more than 100 times smaller.
Refinements made to the technology that have enabled robotic fingers to reduce a human’s hand to a mere head-scratching implement have also led to the development of a 3-D book scanner that can process 300 pages per minute; fingers that can catch raw eggs dropped from height without breaking them; microscopes that can perfectly track rapidly moving micro-organisms; and a gesture-user interface that, unlike some motion-sensing devices, such as Microsoft’s Kinect, has no perceptible delay at all between human action and its virtual, on-screen representation.
The technology incorporates high-speed cameras and image-processors, which in the case of the robotic finger allow recognition, processing and analysis of a human hand shape in just 1 millisecond (a thousandth of a second). The signal sent to the fingers to form a winning response then takes the same amount of time again, for a total reaction time of 2 milliseconds — far faster than any human eye can possibly compute.
In imaging terms, the human eye operates at around 12-14 frames per second (fps), a frequency less than half that of conventional image-processing systems, which are geared to standard television rates of 25 or 30 fps, Ishikawa said. Those faster rates are the reason why we are unable to detect the change from one frame of a film to the next.
“If a baseball is thrown at 150 kph, a normal video camera with a sampling rate of 30 fps would be able to track the ball at points every 140 cm along its trajectory,” said Ishikawa, who hopes the technology will eventually be used to assist judgment in sports, such as line calls in tennis and soccer.
“But, with high-speed vision, which has a frequency of 1,000 fps, the ball can be tracked every 4 cm per one frame displacement. If you had a robot batter fitted with this technology, it could easily hit anything you throw at it.”
As if to prove it, Ishikawa and his team have created one such robot, and for good measure, a pitching arm and hand that hurls rapid pitches with eerily realistic finesse. The batting robot uses a 3-D high-speed active-vision system that allows it to track the ball’s position and unfailingly whack a home run. If the pitching arm had human sensibilities, it would have hung up its actuators long ago.
What differentiates the system used in the lab’s devices from those in humanoid robots is that while the latter can use high-speed imagers, they cannot incorporate high-speed processing systems, according to Ishikawa. This is due to their lower-frequency sampling rates and higher, data-heavy image resolution, which increases the response time.
While higher resolution means greater detail and therefore precision, lower frequency rates mean inferior tracking capabilities. In the baseball-hurling example this translates as seeing the ball very well, but only every 140 cm, as opposed to seeing it well enough every 4 cm in Ishikawa’s high-speed, low-resolution design.
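The arithmetic behind those figures is straightforward: frame-to-frame spacing is simply the ball’s speed divided by the camera’s frame rate. A minimal sketch, assuming a constant-speed, straight-line pitch (the function name and rounding are illustrative, not from the lab):

```python
def frame_spacing_cm(speed_kph: float, fps: float) -> float:
    """Distance (in cm) the ball travels between consecutive frames."""
    speed_cm_per_s = speed_kph * 100_000 / 3600  # kph -> cm/s
    return speed_cm_per_s / fps

# A 150-kph fastball, as in Ishikawa's example:
print(round(frame_spacing_cm(150, 30)))    # conventional 30-fps camera: ~139 cm per frame
print(round(frame_spacing_cm(150, 1000)))  # 1,000-fps high-speed vision: ~4 cm per frame
```

The roughly 140-cm gap at 30 fps is why a conventional camera effectively loses the ball between frames, while the 4-cm spacing at 1,000 fps is dense enough for a robot batter to track and strike it.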
“For high-speed uses, it’s the tracking rather than the resolution that is more crucial,” Ishikawa explained.
“Even if humanoid robots could use the high-speed image-processing, they would not be able to work much faster because their actuators and system architecture are too slow to manage it.”
Which in some ways is a relief: Honda’s low-frequency, slightly jerky humanoid robot, Asimo, for example, would no longer seem so endearing were it to serve drinks like a steroid-pumped C-3PO protocol android from the film “Star Wars.”
This is at the heart of Ishikawa’s research: Humanoid robots as we know them do nothing that falls outside humans’ capabilities — and probably never will. The sheer impact of high-speed vision technology — which Ishikawa and his team have applied to a variety of other uses, including gesture-recognition and quality-control systems — instills a feeling of insecurity, even fear.
“It’s a very powerful tool which could be used negatively,” Ishikawa admitted. “In slower robotics systems it’s not such a big issue, but researchers into high-speed systems have a duty to consider issues of ethics.” Examples of areas in which the technology could pose a threat are spying on typed passwords and codes, maybe even combination locks on safes.
On one level the Janken Robot is just an amusing game — albeit an infuriating one for the human participant.
On another level, it’s disconcerting to realize that there is nothing in its actions that is “learned,” or which it can predict on the basis of instinct or experience. Rather, it relies on, or actually feeds off, an inferior and blissfully ignorant stimulus — the human hand — without which it cannot operate, or perform its act of deception.
It’s the kind of scenario that excites U.S. researcher Carson Reynolds, one of Ishikawa’s colleagues in the university’s Graduate School of Information Science and Technology.
“Since arriving in Tokyo, I’ve grown increasingly fascinated by the ethical and moral quandaries posed by technologies that surpass human ability,” said Reynolds, who has a PhD from the Massachusetts Institute of Technology’s Media Lab and who has written several papers on the topic of robo-ethics. “When I first came in contact with the research here,” Reynolds said, “the question that sprang to mind was: ‘Would humans feel threatened?’”
His early conclusion is that the projects at Ishikawa’s lab should not be assessed in terms of wrong or right. “The applications are provocations that seek to engage the viewing audience in questions which are moral or ethical at root.
“For instance, is a robot that cannot lose a game of rock-paper-scissors a cheat? Will baseball-playing robots destroy the bargaining power of players’ unions? Do devices that alter perception cast doubt upon the reality we experience without augmentation?”
However, Reynolds suggested that one rationale for beating humans in games is to develop “a more comprehensive model of human behavior.”
“The ability of robotic systems to outperform humans is a matter that will have broad consequences. For some it has already mattered economically. For others it will matter emotionally as the notion of human identity and what makes us distinct from robots is placed under closer scrutiny.”
Many engineers and designers of robotics, Reynolds said, take the view that the ethical consequences of their work are outside their area of expertise. They are more concerned with being able to demonstrate a new behavior or skill. “Some researchers willfully avoid going over the ways in which their inventions may end up being applied,” he said.
However, Ishikawa is quick to stress an unusual characteristic of his lab — namely, the simultaneous development of technology and potential applications.
“I am the same as many other researchers and concentrate on one discipline, in my case high-speed vision,” he said. “But I’m also pursuing applications in a number of fields, from industrial uses to the auto industry and medicine. We are mindful of the potential abuses, but ultimately we want to use this system to assist humans.”
The lab has already posted some other notable successes besides creating the unbeatable Janken Robot, which is chiefly a tool for demonstrating the technology.
The technology is already being tested by an automaker for use in vehicle safety, among other things, while a printing company is using the high-speed book-scanning technology to digitally record thousands of tomes, according to Ishikawa. Closer to home, a science professor at the University of Tokyo is using the microscope to uncover previously unknown facts about micro-organisms.
As for future uses for the Janken Robot, Ishikawa was unsure. An annual AKB48-Janken Championship perhaps?
“Of course, no,” he said. “The robot is not a member of (pop group) AKB48.”