NEW YORK - The development of “killer robots” is a new and perverse use of human intelligence. Humans directing machines to kill and destroy on a scale not yet imagined is a concept beyond even George Orwell. In the meantime, the leading world powers continue their merry-go-round of destruction and death — mostly of innocent civilians — without stopping to consider the consequences of their actions.
Killer robots are fully autonomous weapons that can identify, select and engage targets without meaningful human control. Although fully developed weapons of this kind do not yet exist, world powers such as the United States, the United Kingdom, Israel, Russia, China and South Korea are already working on creating their precursors.
The U.S. Government Accountability Office reports that in 2012, 76 countries had drones of some kind, and 16 countries already possessed armed ones. The U.S. Defense Department spends $6 billion every year on the research and development of better drones.
South Korea presently deploys Samsung Techwin security surveillance guard robots in the demilitarized zone it shares with North Korea. Although these units are currently operated by humans, the robots have an automatic feature that can detect body heat and fire a machine gun without human intervention.
Israel is developing an armed drone called Harop that can select targets with a special sensor. Northrop Grumman has also developed an autonomous drone called the X-47B that can travel on a pre-programmed flight path while being monitored by a pilot on a ship. It is scheduled to enter active service by 2019. China is also moving rapidly in this area. In 2012 it already had 27 armed drone models, one of which is an autonomous air-to-air supersonic combat aircraft.
Killer robots follow the generation of drones and, as with drones, their potential use is creating a host of human rights, legal and ethical issues. Military officials state that this kind of hardware protects human life by taking soldiers and pilots out of harm’s way. What they don’t say, however, is that the protected lives are those of the attacking armies, not those of their targets — mostly civilians whose untimely deaths are euphemistically called collateral damage.
According to Denise Garcia, an expert in international law, four branches of international law have been used to limit violence in war: the law of state responsibility, the law on the use of force, international humanitarian law and human rights law. U.S. drone strikes currently violate all of them.
From the ethical point of view, the use of these machines presents a moral dilemma: By allowing machines to make life-and-death decisions, we remove people’s responsibility for their actions and eliminate accountability. Lack of accountability almost ensures future human rights violations. In addition, many experts believe that the proliferation of autonomous weapons would make an arms race inevitable.
The United Nations is trying to negotiate the future use of autonomous weapons, but the U.S. and U.K. support weaker rules that would prohibit future technology while exempting killer robots developed during the negotiating period. That delay would allow existing semi-autonomous prototypes to continue being used.
The need for a pre-emptive ban on the development and use of this kind of weapon is urgent. As Christof Heyns, the U.N. special rapporteur on extrajudicial, summary or arbitrary executions, stated recently, “If there is not a pre-emptive ban on the high-level autonomous weapons, then once the genie is out of the bottle it will be extremely difficult to get it back in.”
Based in New York, Cesar Chelala, M.D. and Ph.D., is a winner of the Overseas Press Club of America award, a national journalism award from Argentina, and recently received the Cedar of Lebanon Gold Medal from the House of Lebanon in Tucuman, Argentina. He frequently writes on humanitarian issues.