LAW OF THE LAND

Robot rights: From Asimov to Tezuka

by Colin P.A. Jones

Contributing Writer

I just stayed at a hotel that had a robot trundling around the lobby. Its principal function seemed to be conveying the message, “This is a trendy hotel,” but still, it is a sign of the times, one that nicely intersects with my own recent foray into the field of robot law.

I am not trendy, though, just lazy: Law is a much easier subject to write about when there is no actual law. Then you can just focus on the heady topic of what the law should be at some time in the future.

Fantasy vs. the real world

The absence of real robot law is evidenced by the tendency of many who dabble in the subject to pay homage to the late Isaac Asimov’s “Three Laws of Robotics.” In 2017, even the European Parliament jumped on the bandwagon with a resolution on civil law rules for robotics that included language describing Asimov’s laws as established legal norms.

For those of you who never read Asimov’s “I, Robot” or saw the awful 2004 movie adaptation starring Will Smith and various product placements, the three laws (slightly simplified) are:

1) Robots must not harm humans or allow them to be harmed through inaction.

2) Robots must obey human commands unless doing so would violate the first law.

3) Robots must protect themselves unless doing so would violate the first two laws.

Remember, these are fictional rules. They are also silly, and people should stop looking to them for guidance. The closest thing to real-world robot law we are likely to see anytime soon seems to be developing around the question of when autonomous military drones can make “kill” decisions without human intervention.

In a 2017 law review article, no less a figure than professor John Yoo (of Bush-era “torture memo” fame) argues that new technologies like autonomous drones “increase the precision and decrease the harms of attack.” If it turns out that robots are actually better (“more precise”) at killing humans than humans are, then Asimov’s first law seems unlikely to survive.

As for the second law, would you spend a lot of money on a robot that has to obey every human it encounters? “Hey Neurosurgerybot, unclog this drain,” etc. No, right?

As to the third, given that our entire political and economic system is based on private property rights, it seems optimistic to expect that robots will not be empowered to defend themselves. We already allow artificial beings — corporations — to protect their property rights, sometimes even with lethal force (i.e., armed security guards), so why should robots be any different?

Self-defense can mean a lot of things, too. Some, including the European Parliament, have advocated giving robots separate legal personhood, like corporations. This would entail giving them rights that somebody has to defend, and perhaps that somebody will be the robots themselves.

AI life in Japan

Robots are a subject of great interest in Japan, as you might expect given it has the largest robot population of any country on Earth. When it comes to robot law, however, individual ministries still seem to be figuring out how they can protect or expand their regulatory mandates. The Ministry of Agriculture, Forestry and Fisheries has developed guidelines for self-guiding harvesters; the Ministry of Health, Labour and Welfare has amended its rules on humans working in proximity to industrial robots and has grand designs for robots to care for the elderly; and the Ministry of Economy, Trade and Industry seems to want a finger in every robot pie.

Rights for robots are not a hot topic, but perhaps this is just a reflection of the nation having a ruling party that doesn’t seem particularly keen on people having rights either, at least not in the form of “natural” rights that arise simply from being a human being.

Interestingly, Japan has its own literary law of robots — 10 laws that were enunciated decades ago by manga giant Osamu Tezuka in his “Astro Boy” (“Tetsuwan Atomu” in Japanese) comic book series. These are:

1) Robots must serve humanity.

2) Robots must not kill or harm humans.

3) A robot must call its human creator “father.”

4) A robot can make anything, except money.

5) Robots may not go abroad without permission.

6) Male and female robots may not change their genders.

7) Robots may not change their face to become a different robot.

8) A robot created as an adult may not become a child.

9) A robot may not reassemble a robot that has been disassembled by a human.

10) Robots shall not destroy human homes or tools.

These may seem weird and patriarchal, which is perhaps one reason Asimov’s rules have had more longevity. Still, the gendered and familial aspects of Tezuka’s rules are interesting. Moreover, thanks to widely known comic books such as “Doraemon” and “Dr. Slump,” Japanese people may be far more inclined to accept the idea of a robot as a member of the household, one with amazing powers but who still sits at the dinner table with its human family, laughs at jokes and worries about tomorrow’s big test at school.

By contrast, popular American portrayals of robots seem dominated by movies such as “The Terminator,” which cast them as just another species of scary monster. So perhaps it is no coincidence that much of the concern about the rise of AI-enabled robots seems to come from that side of the Pacific.

In my view, Tezuka’s rules are actually more relevant than Asimov’s in that they at least address the concept of robotic identity, albeit through outdated views of family and gender. Yet it is robotic identity — what, exactly, is a robot? — that many more modern writers on robot law seem to struggle with. As a practical matter, the absence of a legal definition makes it hard to propose clear rules (I have actually proposed a possible solution, but the details will have to wait for another day).

Possibly because both Asimov and Tezuka managed to live their entire lives without receiving a single email or software update relating to the EU’s General Data Protection Regulation, neither gave any consideration to privacy. In retrospect this seems a terrible failure of imagination. It should have been predictable that robots in our midst would have incredible powers to record, store and replay everything they experience. As this aspect of technology becomes a real part of our lives, the privacy implications of robots walking among us — particularly in our homes — are huge.

In 2015, Keio University professor Shinpo Fumio proposed eight precepts of robot law, which did reference the OECD’s privacy principles:

1) Humanity first — robots may not harm or become people.

2) Obedience to orders — robots must follow human orders and be subject to control.

3) Secrecy and privacy — robots must be designed to preserve the secrecy of information they gather.

4) Use limitation — robots must be limited to their intended use and may not be used to harm humans.

5) Security safeguards.

6) Openness and transparency — robot design and use must be verifiable.

7) Individual participation — individuals must participate in the creation of rules governing robots, and robots must not govern individuals.

8) Accountability — there must be rules of liability for robot-caused harm.

All good stuff, and nothing about robots having rights, which is fine by me. I like rights as much as the next fellow, particularly when they are vested in me and people I like, but at the end of the day, whether something is a right or a duty largely depends on who gets to tell whom what to do. I may have the right to quietude in my home, but it exists because I can (hopefully) get the authorities to force you to stop practicing your tuba at midnight next door. Robot rights need to be thought of in similarly basic, practical terms.

Robot rights or wrongs

At a conference last year, German Chancellor Angela Merkel was asked whether robots should have rights. Her response was: “What do you mean? The right to electric power? Or to regular maintenance?”

This was reported in The Economist as an example of the usually dour chancellor’s rarely seen sense of humor, but it could just as plausibly be taken as a rhetorical question with a serious point. If robots have rights and independent personhood, will they be able to sue their owners for poor maintenance, or demand emergency access to your power supply? Corporations can sue anyone, even their own directors and shareholders, so why not lawsuits against owners by or on behalf of their robots?

And if robot rights include the right to own property, it only seems logical that they should be able to donate money to political parties that promise better laws for robots. This is a right already recognized for corporations in both Japan and the U.S.

Rights may sound nice in theory, but they only have meaning if asserted. Adults can assert their own rights, but children, animals and corporations need someone to do it for them. It may be different for robots if AI develops to the point where they can make independent judgments about such things without human intervention. More immediately, though, robotic rights would likely be asserted the same way corporate rights are: through human agents. They could thus simply become another means by which people with property and power tell the rest of us what to do.

In any case, it would be nice if we could move from the world of science fiction to real laws, ideally rules that value humans over automatons.

Colin P.A. Jones is a professor at Doshisha Law School in Kyoto and primary author of “The Japanese Legal System” (West Academic Publishing, co-authored with Frank Ravitch). The views expressed are those of the author alone.