Growing chorus of experts is raising ethical questions about the future of robotics
Special To The Japan Times

Crowds filter through a darkened corner of Tokyo’s National Museum of Emerging Science and Innovation on a recent Saturday, seeking to catch a glimpse of what the future may be like.

One of the attractions is a child-like creature that sits perfectly still, and occasionally bursts into speech. This is Kodomoroid, billed as the world’s first news-reading android, which the museum claims has “potential exceeding that of its human equivalent.”

Kodomoroid is the brainchild of Hiroshi Ishiguro, a professor at Osaka University whose next goal is to make a robot that can formulate its own intentions and desires.

“I think that this is very similar to the beginning of personal computers or smartphones,” Ishiguro says, referring to the state of his industry. “Companies are preparing for the robot society.”

As much as half of Japan’s workforce could be replaced by machines such as Kodomoroid within two decades, according to one estimate in December by Nomura Research Institute. At-risk occupations include everything from bus drivers and security guards to cashiers and cooks, the consulting firm said.

For a country with a growing labor shortage, that might not be all bad. However, the expected proliferation of robots in new walks of life is raising all kinds of other thorny questions, too. And the issue is drawing interest from experts in Japan, among other countries.

One of them is Kohtaro Ohba, a genial engineer who turns serious when he discusses what he sees as a key challenge in his industry.

“The problem is that most robotics guys are only focused on technical issues,” he says. “There aren’t so many people thinking about ethics, because we are not social scientists.”

Ohba is deputy director of the Robot Innovation Research Center at the National Institute of Advanced Industrial Science and Technology in Tsukuba, an hour’s drive northeast of Tokyo. He consults with companies in Japan that are looking to develop new prototypes, and is involved with testing them at the institute’s Robot Safety Center, the only facility of its kind in the world.

“Robots should have some benefit for another person,” he says.

However, that’s a practical as well as a moral stance. Ohba believes that without considering ethics, robotics projects are more likely to face problems getting into the market.

Therefore, he encourages designers to think about how their creations will fit into society from the get-go.

As an employee of a government-funded center, Ohba is also part of an ambitious project to help bring about a “new industrial revolution,” as Prime Minister Shinzo Abe has put it, “to spread the use of robotics from large-scale factories to every corner of our economy and society.”

Japan remains one of the largest markets in the world for automated industrial machines, and the government is keen to stay on the cutting edge of efforts to get robots working elsewhere.

SoftBank has been selling out of its “humanoid companion” named Pepper, which the company says can read emotions, since it went on sale last June. And Paro, a small robot made to look like a baby harp seal, has been soothing dementia patients in Japan and Europe since 2003.

The government is hoping that by building on the country’s track record in this area, Japan can overcome its demographic problems and fix its economy.

Policymakers set aside ¥2.3 billion in funding in the fiscal 2016 draft budget for robotics development and have been trying to loosen regulations to spur more activity. Abe has also created an entity called the Robot Revolution Realization Council to figure out how the government can help the industry grow over the next five years.

However, getting to the point where robots will be able to operate alongside humans in everyday life is no simple task.

Growing body of research

To take just one example of the problems that emerge along the way, researchers published the results of a study last year in which a robot named Robovie-II was let loose in an Osaka mall. They discovered that children liked to harass, kick and punch the machine, so they had to devise a way for Robovie-II to save itself. When confronted by a gaggle of very short people, the robot was programmed to home in on taller ones, lessening the chances it would feel the children’s wrath.

The study caught the attention of Kate Darling, a researcher at the MIT Media Lab in Cambridge, Massachusetts, and a rising star in the fledgling field of “roboethics.” One thing she’s trying to find out is whether abusing robots can desensitize people, so some of her experiments involve asking participants to smash a tiny machine called a Hexbug that scurries around like an insect.

“What we’ve found is there’s definitely a relationship between people’s natural tendencies for empathy and the way they respond to life-like robots. If you’re not an empathetic person, you’re more willing to beat up a life-like robot,” she says. “The question of desensitization, of whether your behavior toward a robot can change your empathy, is much harder to research. That’s something we’re starting to get at now a little bit.”

Darling, who has a law degree as well as a doctorate of sciences, is also interested in the legal impact of robots. She says advances in technology will likely spur a need for wide-ranging reforms in areas such as consumer rights and privacy.

For example, she cites the controversy caused by Hello Barbie, a version of Mattel’s iconic doll that is connected to Wi-Fi. This high-tech adaptation talks to children and stores their answers “in the cloud,” which has led to headlines about data-security vulnerabilities.

“With every new technology come new methods of collecting data,” Darling says. “But the special thing about robots is that they’re really going to be a part of people’s households where previously we haven’t seen a lot of data collection. It’s not clear to me that the companies developing these technologies are thinking enough about privacy and data security, and that they have enough of an incentive to do so.”

The government has already been forced to pass new regulations on drone flights, after one of the devices carrying a small amount of radioactive material crashed on the roof of the Prime Minister’s Office in April 2015.

Self-driving cars are also spurring legal changes. Abe announced plans in the fall to amend legislation so that autonomous vehicles can be tested on public roads starting in fiscal 2017, hoping it will pave the way for the technology to be showcased during the 2020 Tokyo Olympics.

Another focus for Japan is elderly care. As the population continues to grow older, an acute shortage of nursing care workers is expected to follow. The Ministry of Health, Labor and Welfare last year estimated that Japan will need 750,000 more care workers for the elderly by 2025.

Not surprisingly, the largest robotics research project at the Robot Innovation Research Center focuses on developing robotic devices to address these needs, Ohba says.

One of the latest robots in this field is Robear, which has the head of a cartoon bear but motors around on wheels and is strong enough to lift and move patients from a wheelchair or a bed. The experimental device was unveiled last year, and was developed jointly by the Riken institute and Nagoya-based Sumitomo Riko Co. Ltd.

However, Robert Sparrow, an ethicist at Australia’s Monash University, says that while robots can be used in all kinds of ways to enhance care, one danger is that they could wind up making people in nursing homes more isolated.

“Care providers are under immense budgetary pressure, so if they can save money by replacing a human being with a robot, I think it’s very unlikely they will reinvest that funding in more useful caring roles,” he says.

“Loneliness is a kind of cause of death, essentially. It elevates a whole series of risk factors,” he says. “We know that from a long history of aged-care research.”

Weapons of the future

The issue of autonomous weapons has garnered the most attention in the growing debate over what kind of robots we should be making.

“Killer robots” might conjure up futuristic images from the “Terminator” franchise, but some experts believe they’re already in use. One example they point to is the Phalanx close-in weapons system, a “computer-controlled, radar-guided gun” nicknamed R2-D2 that is used by the Maritime Self-Defense Force and other navies to shoot down anti-ship missiles.

Nongovernmental organizations have been lobbying the United Nations to ban autonomous weapons before they’re put into widespread use. An open letter calling for such a ban was published in July, and has drawn signatures from more than 3,000 artificial intelligence and robotics researchers around the world.

Among the signatories was Peter Asaro, an assistant professor at The New School for Public Engagement in New York and co-founder of the International Committee for Robot Arms Control.

The committee believes that making decisions “about the application of violent force must not be delegated to machines.”

Asaro visited Japan in late 2013, and raised his concerns with the director of the arms control and disarmament division at the Foreign Ministry. He’s also troubled by recent defense policy changes in Japan, particularly the decision to reinterpret the Constitution to allow collective self-defense and the lifting of a decades-old ban on arms exports.

“I think there are a lot of people who are not only concerned in general about that shift within the Constitution, but also what that means for robotics in particular,” he says.

Asaro acknowledges that Japan is a leading robotics innovator and producer. If Japan is able to sell automated weapons systems in a way it hasn’t been able to for 50 years, he says, its perspective on the economic potential of producing them could change significantly.

The Defense Ministry has been trying to boost cooperation with university researchers, a move that prompted dozens of scientists in Japan to sign a 2014 online petition against joint military-academic research. But while military research seems to still be unpopular among engineers on campuses across the country, some critics of autonomous weapons fear the taboo will fade with time.

The Defense Ministry has also begun to subsidize high-tech research at domestic universities, and the Self-Defense Forces are developing robotics power suits.

As Japan presses ahead with plans to spur a “new industrial revolution,” it faces another challenge in educating engineers to think about moral considerations when designing the world’s future robots.

“Usually people who want to teach ethics are old people,” says Takashi Maeno, a professor at Keio University’s Graduate School of System Design and Management who leads courses on the subject. “Not many young faculty members or students want to do research or education on ethics.”

One obstacle may be the interdisciplinary nature of roboethics, which encompasses everything from law to philosophy to psychology. Interdisciplinary studies are far less common in Japan than in Europe or the United States, Maeno says.

His own research has involved analyzing whether robots “can be good or bad for humans,” he says, and that’s a question he puts to his undergraduate engineering students, too.

Meanwhile, the industry continues to grow. Abe has said he wants to see the robotics market hit ¥2.4 trillion by the time the 2020 Tokyo Olympics roll around.

For Asaro, however, a wider conversation is needed about what kinds of robots could benefit society before they start cropping up in our daily lives.

“What do we really want them to do? What do we not want them to do? And why? Who gets to decide that? And how does that get implemented?” he says. “I think those are all important questions to be asking.”

  • Betty F

    Instead of worrying about the ethics of future robots, Japan should be discussing the ethics of shooting whales with exploding harpoons and stabbing thousands of dolphins to death in a sickening bloody massacre at Taiji.

    • Pita

      Wrong attitude. Is whaling an issue? Of course. But this is also an incredibly serious issue that’s going to concern the whole world, not just Japan. And it’s going to affect people sooner than most think. Let’s not divert.

      Like climate change, the ramifications of progress in robotics and (more importantly) artificial intelligence feel far off and difficult to grasp. Maybe it’s even more difficult to take seriously because it’s hard to not think of it all like science-fiction. But it’s going to change a lot of people’s lives in ways we probably can’t imagine. People need to start thinking seriously about how we want it to change our lives, or else we’re just chasing the carrot on the stick blind to where we’re going.

  • kyushuphil

    Aren’t the terms “ethics” and “ethical” rather quaint vestiges of the human?

    Schools in Japan, after all, are deliberately reducing their humanities, deliberately keeping things such as essay writing occasional, not central to education. In Minae Mizumura’s “The Fall of Language in the Age of English,” she makes the claim that no school in Japan anymore asks anyone to read any novel from beginning to end (except of course in specialized lit departments). All humanities now get reduced to snippets for machine-readable bubble-in A-B-C-D.

    Students getting shorted of humanities, and essay writing, of course get treated this way due to the consensus key to American-imported materialism that our humanity comes simply from buying stuff, shopping for it, longing after it, according to the advertising promises clever marketers know how to instill in us.

    No one grows into a human adult anymore by aid of any of the old-fashioned ways. Or, do they? Do some schools in Japan escape the humanly-reduced, mechanistic, group-robotic, crass, and vulgar priorities coming from the ministry of education?

  • J.P. Bunny

    Kodomoroid. This sounds like something that would be available in some of the seedier parts of Kabukicho.