A while back I attended a robot expo in Tokyo. It was actually kind of depressing.
Robots are supposed to be sexy, but much of the technology on display was for old people — you know, intelligent dolls that sense when a dementia patient is trying to get out of bed, engaging them in simple conversation long enough for a human helper to arrive — that sort of thing. Even the cool stuff like powered exoskeletons was being marketed as a way to help young people lift invalid octogenarians into the tub.
The Ministry of Economy, Trade and Industry was out in force and had clearly picked automatons for old people as a winner. Doubtless there’s lots of money to be made in peddling androids for the aged. Anything to avoid more immigration, right?
Nonetheless, it seemed that after enjoying decades of economic growth, job security and generous pensions, Japan’s elderly were going to snatch even the promise of cutting-edge technology away from the younger generation. Afterwards, I tried to articulate to my wife why I came back from the event feeling glum, and she summed it up nicely in a rhetorical question — “Yume ga nai, deshō” — “There were no dreams, right?” Indeed. And if you can’t find dreams at a robot convention, where can you find them in Japan?
This column, however, isn’t about dreams but nightmares. At the expo, as I waited for a clunky industrial robot in an apron to make me a cup of coffee, I couldn’t help but ponder whether we humans might ultimately be destroyed by our soulless, undying artificial creations.
By which I mean corporations, of course. When it comes to world-destroying potential, robots probably have a long way to go before they catch up with entities whose names end in “Corp.” or “LLC.” Don’t get me wrong, I love corporations — they keep my home warm, make useful stuff and keep much of society employed, including me — but I also know with complete certainty that corporations will never love me back. Neither will robots.
Right now, drones are hogging the limelight when it comes to anti-human behavior. As we all know, drones are now being widely used for monitoring and the occasional due-process-free assassination. Companies such as Amazon want to use them to deliver things to consumers who just can’t wait — and, honestly, if a drone could bring me a tequila and tonic right now, I would happily drink it.
The big debate, of course, is over whether robotic creatures should be allowed to make decisions that significantly affect people — like whether to kill them. Right now, drone operators and sometimes the U.S. president himself make decisions about whether to pull the trigger on a terrorist. But the president is a busy guy and combat is a fast-moving, fluid affair, so there is an argument for letting drones (or LAWS — short for lethal autonomous weapons systems — as the technogentsia now call them) decide for themselves whether to, for example, launch a rocket into a schoolhouse full of insurgents.
Those worried about ceding control over moral decisions like this to automatons might take heart in the November 2014 resolution by the High Contracting Parties to the U.N. Convention on Certain Conventional Weapons (which Japan signed in 1981) to have expert discussions on LAWS next month. This is a welcome development for groups such as the International Committee for Robot Arms Control, which, according to its website, is “concerned about the pressing dangers that military robots pose to peace and international security and to civilians in war.”
On the other side of the debate, robo-ethicist Ronald Arkin has been quoted (in a 2013 Rolling Stone article on LAWS) expressing optimism that “if this technology is done correctly, it can potentially lead to a reduction in noncombatant casualties when compared to traditional human war fighters.” This would be great — if nothing else, because of the commercials and campaign ads that would surely result: “Now, with 30 percent fewer noncombatant casualties!”
Having long ago pledged my heart to dark cynicism, I take little comfort from whatever people are trying to accomplish in international law. In fact, I actually don’t understand much of the debate over whether we should let robots decide whether to kill us. If the evolution of corporations is any guide, surely the question is not if this will happen, but when.
In my mind it all comes down to responsibility. As most readers have probably come to realize at some point in their lives, responsibility sucks, and it is perfectly rational behavior to avoid taking it. Autonomous robots will provide that valuable service and more: I don’t want to be an alcoholic, but that Amazon robot kept sensing I wanted more tequila, you see?
Isaac Asimov famously created the Three Laws of Robotics, the first of which was that robots could not harm people or allow them to be harmed. But Asimov was an optimist; if corporations are any guide, whatever iron-clad rules we think we can put in place to keep robots harmless will be swiftly set aside by the incredible convenience of being able to use them to harm others without being responsible for it.
The original value of corporate entities was that they were immortal and collective, thus facilitating the ownership of property and the incurrence of debt beyond the frail span of a human life. They nonetheless entailed serious responsibilities: Human members of municipal corporations could be amerced (fined) in collective responsibility for their violations, and partners in a business entity might have unlimited liability for its debts.
This sort of unpleasantness was remedied by the development of the corporation limited by shares, in which people could invest safe in the knowledge that they would have no responsibility beyond the amount of their investment. Limited liability — a synonym for limited responsibility — has been great for fostering investment. Furthermore, corporations no longer need to be collective entities — a single shareholder will do in most places.
As a result, today you can set up a company with barely any investment at all, and a few hundred bucks will get you a lawyer who will shield you from the consequences of whatever you tell that company to do. In the U.S., “what you can do” covers a lot more stuff now that the Supreme Court has declared corporations to be people enjoying constitutionally protected free speech — a fundamental freedom that now extends to those well-known expressive mediums of wire transfers and personal checks. (Japan’s top court, by the way, reached a similar conclusion way back in 1970.)
Big companies have the added advantage of being fictive, immortal entities with really deep pockets, and therefore pretty much immune from imprisonment, massive fines or most other punishments that make us mortals quiver. Moreover, large corporations are so organizationally diffuse and complicated that they can do all sorts of horrible things without any identifiable person being clearly responsible. Hence the endless stream of “the corporation ate my homework”-type news about companies — oops! — laundering billions of dollars in drug money, almost destroying the world economy and periodically releasing the occasional mushroom cloud of flaming petroleum products or radioactivity over the landscape. Everyone involved is terribly sorry yet not quite culpable. But this is arguably a feature of the corporate form, not a bug.
That is why it is always amusing when politicians talk about “responsible government” while outsourcing a growing range of government functions to entities whose principal merit is that they limit responsibility. A private prison treating convicts inhumanely? Bad, bad corporation — your next annual report is going to have some really unpleasant footnotes. Military contractors gunning down civilians in that country you invaded? See if the State Department renews that contract. Some mutual funds might even dump the stock!
Corporations being a useful means of unbundling responsibility from consequence, you can see why a military or political bureaucracy in which wrong decisions that result in death are (one hopes) fodder for public criticism — or at least, private score-settling — might find autonomous kill-bots extremely useful for reasons that have nothing to do with their actual killing abilities. A robot that decides whether or not to kill someone is a robot that spares a human being the burden of making the same decision. A robot that can be blamed for civilian deaths means a human being who won’t be.
It bears repeating: Responsibility sucks, and it is entirely rational to seek to avoid or at least minimize it. So as long as we feel the need to occasionally harm our fellow human beings, most of us will happily let other people — or things — do the dirty work. U.S. Air Force drone pilots are reportedly quitting in unusually large numbers, with stress being cited as one likely reason. Autonomous robots, by contrast, would suffer no angst from making responsibility-laden decisions.
That letting terminators off the leash could actually lead to more people getting killed is probably immaterial if the urge to avoid responsibility is more powerful than the desire to prevent the needless death of distant strangers. Moreover, when an autonomous drone kills the wrong person, the error can be attributed not only to bad intelligence but also to bad programming, faulty sensors and all the other things we curse when the Internet goes down. If, 20 years from now, African-Americans are being gunned down by robo-cops with alarming and disproportionate frequency, authorities can just attribute it all to a dodgy algorithm or shoddy corporate outsourcing of kill-chip manufacturing to Myanmar.
Here in Japan, such glitches will hopefully be more harmless — the sushi-bot addressing you in English and proffering a fork if it senses you are a foreigner, perhaps. However, if Japan continues down the road towards more assertive use of its military — particularly abroad — the question of who makes decisions to kill will have to be addressed.
Right now such things are probably being discussed in comfortingly theoretical terms by largely anonymous committees, but how many Japanese leaders are willing to be responsible for making an actual decision? In 2007 then-Justice Minister Kunio Hatoyama famously complained about having to sign the death warrants (as was required of him by law) to execute a prisoner on death row, and wondered aloud whether death sentences couldn’t be confirmed without human intervention, either through some metaphorical conveyor-belt-like process or some sort of random number generator.
In the same vein, Prime Minister Shinzo Abe likes to talk tough, but it is hard to imagine him unilaterally ordering the launch of a Hellfire missile at a school bus full of Islamic State militants driving towards a refugee camp. Nor will his country’s famed preference for responsibility-obfuscating consensus-based decisions work in such a fast-moving situation. So perhaps by default, robots may assume a frighteningly uncute new role in Japan, too — and perhaps sooner than any of us expect.
Colin P.A. Jones is a professor at Doshisha Law School in Kyoto. Starting next month, Law of the Land will appear in print on the Community Page on the second Monday of every month. Comments: email@example.com