Much of the progress in artificial intelligence has unfolded in plain view; machines that have mastered games such as chess and Go and driverless cars are two examples. Important developments, however, are taking place far from such scrutiny. The most significant of these is the creation of lethal autonomous weapons systems (LAWS), which militaries are developing with speed and intensity. These weapons raise new concerns and controversies. Yet despite growing attention to this troubling phenomenon, there is precious little consensus on the appropriate response. A legally binding framework for these weapons must be developed, and soon, to ensure that both regulations and norms can guide efforts in this field.

There is no single agreed definition of LAWS. In general, the term refers to weapons that can detect, select and engage targets with little or no human intervention. It encompasses a wide range of systems, some requiring no human action and others no human authorization. In the most fervid accounts, critics condemn "killer robots" capable of unleashing extraordinary lethality and destruction without human input. Supporters of these weapons systems counter that their processing capabilities reduce the chance of human error and increase accuracy. Fully autonomous weapons remain a distant prospect, but global spending on robotics and drones is estimated to have reached $95.9 billion last year and to top $200 billion by 2022.

For three years, the United Nations has convened a Group of Governmental Experts (GGE) under the Convention on Certain Conventional Weapons, which has 125 states parties, to examine and discuss developments in this field. The name is misleading. The group includes more than governmental experts: international organizations, nongovernmental organizations and academic groups also participate. Last week, the group met for the third time, and efforts to propose a framework for limiting, if not banning, the development and use of LAWS were again stymied.