LONDON – Russia’s latest “Zapad” military exercise just took place on NATO’s eastern border. Tens of thousands of soldiers participated in the massive four-yearly war games, which serve both as a drill and as a show of strength aimed at the West. Next time around, in 2021, those troops might be sharing their battle space with a different type of force: self-driving drones, tanks, ships and submersibles.
Drone warfare is hardly new — the first lethal attack conducted by an American unmanned aerial vehicle took place in Afghanistan in October 2001. What is now changing fast, however, is the ability of such unmanned systems to operate without a guiding human hand.
That’s a truly revolutionary shift — and one every major nation wants to lead. Critics have long feared countries might be more willing to go to war with unmanned systems. Now, some see a very real risk that control might pass beyond human beings altogether.
Tech entrepreneur Elon Musk has long warned that humanity might be on the verge of some cataclysmic errors when it comes to artificial intelligence. Last month, he ramped that up with a warning that the development of autonomous weapons platforms might provoke a potentially devastating arms race.
As if to reinforce Musk’s point, Russian President Vladimir Putin told students shortly thereafter that he believed the technology would be a game changer, making it clear Russia would plow resources into it. “The one who becomes leader in this will become ruler of the world,” Putin was quoted as saying.
China too is pushing ahead, and some experts believe it is now the global leader in developing autonomous swarms of drones.
Already, drones are able to fly themselves independently so they can stay airborne if they lose touch with human pilots. Soon they may be able to make their own tactical decisions. At Georgia Tech in the United States this summer, researchers programmed swarms of light drones to fight their own aerial dogfights. The U.S. military is trying out similar products.
That means one operator could command many, many more drones — or that they might not need direct supervision at all.
Even more important than what is happening in robotics may be the wider developments in artificial intelligence. That won’t necessarily make warfare more deadly — a bomb dropped from a drone is in itself no more lethal than one from a manned aircraft. While it’s possible that greater accuracy might reduce casualties, some analysts fear that the changes brought by new unmanned systems might themselves fuel new conflicts.
“Radical technological change begets radical government policy ideas,” concluded a July report on the topic produced for the U.S. intelligence community by Harvard University’s Belfer Center. It warned an “inevitable” AI arms race could prove as revolutionary as the invention of nuclear weapons.
Artificial intelligence could dramatically increase the efficiency of surveillance technology, allowing a single system to monitor perhaps millions of digital conversations, hacked personal devices and other sources of information. The implications could be terrifying, particularly in the hands of a state with little or no democratic oversight.
At a recent U.K. panel discussion, Britain’s former Special Forces director, Lt. Gen. Graeme Lamb, predicted that by 2030, technological breakthroughs — not just in AI, but in quantum computing and beyond — would produce entirely unpredictable changes. Special forces teams, he suggested, might well have a robotic and artificial intelligence component deployed alongside them — the U.S. Army calls this “manned-unmanned teaming.”
That sounds like something out of science fiction — and it might well look like it. Last year, Russia unveiled its FEDOR humanoid military robot, which could fire a gun.
Most countries deliberately keep their defense AI secret, ultimately fueling the arms race Musk was warning about. Some scientists already worry about a real-world version of the premise of the “Terminator” film franchise starring Arnold Schwarzenegger, in which the U.S., fearing a cyberattack, hands control of key military systems to the artificial intelligence Skynet. (Skynet, fearing its human creators might choose to turn it off, immediately launches a full-scale nuclear attack on humanity.)
For now, Western nations at least look keen to keep a human in the “kill chain.” Not all countries may make that choice, however. Russia has long had a reputation for trusting machines more than people, at one stage considering — and, some evidence suggests, building — an automated system to launch its nuclear arsenal should its command structure be destroyed by a first strike.
Outside of the military, there is evidence AI algorithms have already alarmed their creators. In August, Facebook shut down an AI experiment after programs involved began communicating with each other in a language the humans monitoring them could not understand.
Is this the end for ordinary human soldiering? Almost certainly not. It’s even been argued that a more complex, high-tech battlefield might require more soldiers, not fewer.
Robotic systems may be vulnerable to hacking and jamming, or may simply be rendered inoperable through electronic warfare. Such techniques have allowed U.S.-led forces in Iraq to largely negate the off-the-shelf drones used by Islamic State. Russia used similar techniques against Western-made drones in Ukraine.
That’s a worry for armed forces betting — like many industries — on automation. Britain’s new aircraft carrier carries only a fraction of the sailors of its slightly larger U.S. counterparts, relying heavily on automatic systems to manage weaponry and damage control. The latest Russian tank, the T-14 Armata, has an automated turret that is usually out of reach of its crew. Such techniques have clear advantages — but they also mean that interfering with the electronics could leave these systems useless.
Such technology is coming whether it is a good idea or not. Indeed, even relatively old military equipment increasingly can be retrofitted. Russian engineers have already demonstrated that they can adapt the 20-year-old T-90 tank to be controlled remotely.
Ironically, the North Korean crisis reminds us that the most dangerous technologies may well remain those invented more than 70 years ago — atomic weapons and the missiles that carry them. Even if mankind can avoid a nuclear apocalypse, however, the coming AI and robotic revolution may prove an equal existential challenge.
Peter Apps is Reuters global affairs columnist.