While much of the progress in artificial intelligence has unfolded in plain view (machines that have mastered games like chess and Go, and driverless cars, are two examples), important developments are taking place far from such scrutiny. The most significant of these is the creation of lethal autonomous weapons systems (LAWS), which militaries are developing with speed and intensity. These weapons raise new concerns and controversies, yet despite growing attention to this troubling phenomenon, there is precious little consensus on the appropriate response. A legally binding framework for these weapons must be developed, and soon, to ensure that both regulations and norms can guide efforts in this field.

There is no single agreed definition of LAWS. In general, the term refers to weapons that can detect, select and engage targets with little or no human intervention. It encompasses a wide range of weapons systems, some requiring no human action and others no human authorization. In the most fervid examples, critics condemn “killer robots” capable of unleashing extraordinary lethality and destruction without human input. Supporters of these systems counter that their processing capabilities reduce the chance of human error and increase accuracy. While fully autonomous weapons remain a distant prospect, global spending on robotics and drones is estimated to have reached $95.9 billion last year and is forecast to top $200 billion by 2022.
