WASHINGTON – The U.S. military is increasing spending on a secret research effort to use artificial intelligence to help anticipate the launch of a nuclear-capable missile, as well as track and target mobile launchers in North Korea and elsewhere.
The effort has gone largely unreported, and the few publicly available details about it are buried under nearly impenetrable jargon in the latest Pentagon budget. But U.S. officials told Reuters there are multiple classified programs underway to explore AI-driven systems to better protect the United States against a nuclear strike.
If the research is successful, such computer systems would be able to think for themselves, scouring huge amounts of data, including satellite imagery, with a speed and accuracy beyond the capability of humans, to look for signs of preparations for a missile launch, according to more than half a dozen sources, including U.S. officials. The sources spoke on condition of anonymity because the research is classified.
Forewarned, the U.S. government would be able to pursue diplomatic options or, in the case of an imminent attack, the military would have more time to try to destroy the missiles before they were launched or intercept them after launch.
“We should be doing everything in our power to find that missile before they launch it and make it increasingly harder to get it off (the ground),” one of the officials said.
The Trump administration has proposed more than tripling funding in next year’s budget to $83 million for just one of the AI-driven missile programs, according to several U.S. officials and budget documents. The boost in funding has not been previously reported.
While the amount is still relatively small, it is one indicator of the growing importance of the research on AI-powered anti-missile systems at a time when the United States faces a more militarily assertive Russia and a significant nuclear weapons threat from longtime foe North Korea.
“What AI and machine learning allows you to do is find the needle in the haystack,” said Bob Work, a champion of AI technology who was deputy defense secretary until last July.
One person said the programs include a pilot project focused on North Korea. Washington is increasingly concerned about Pyongyang’s development of mobile missiles that can be hidden in tunnels, forests and caves. The existence of a North Korea-focused project has not been previously reported.
The military has been clear about its general interest in AI. The Pentagon, for example, has disclosed it is using AI to identify objects in video gathered by drones, part of a publicly touted effort launched last year called Project Maven.
Still, some U.S. officials say AI spending overall on military programs remains woefully inadequate.
The Pentagon is in a race against China and Russia to create more sophisticated autonomous systems that are able to learn by themselves to carry out specific tasks. The Pentagon research on using AI to identify potential missile threats and track mobile launchers is in its infancy and is just one part of that overall effort.
There are scant details on the AI missile research, but one U.S. official told Reuters that an early prototype of a system to track mobile missile launchers is already being tested within the U.S. military.
This project involves military and private researchers in the Washington area. It builds on technological advances developed by commercial firms financed by In-Q-Tel, the intelligence community’s venture capital fund, officials said.
In order to carry out the research, the project is tapping into the intelligence community’s commercial cloud service, searching for patterns and anomalies in data, including from sophisticated radar that can see through storms and penetrate foliage.
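The classified systems themselves are not described in any public document, but the basic idea of flagging anomalies in a stream of sensor readings can be illustrated with a toy statistical test. The sketch below (hypothetical names and data, using a simple z-score rule rather than anything the Pentagon is known to use) flags values that deviate sharply from the rest of a series:

```python
import numpy as np

def flag_anomalies(readings, threshold=2.5):
    """Flag readings that deviate strongly from the series mean.

    A simple z-score test: values more than `threshold` standard
    deviations from the mean are treated as anomalous.
    """
    readings = np.asarray(readings, dtype=float)
    mean, std = readings.mean(), readings.std()
    if std == 0:
        return np.zeros(len(readings), dtype=bool)
    z_scores = np.abs(readings - mean) / std
    return z_scores > threshold

# A mostly steady signal with one sharp spike at index 5.
signal = [1.0, 1.1, 0.9, 1.0, 1.05, 9.0, 1.0, 0.95]
print(flag_anomalies(signal))  # only the spike is flagged
```

Real systems would of course learn far subtler patterns from radar and imagery rather than apply a fixed threshold, but the goal is the same: isolate the handful of observations worth a human analyst's attention.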
Budget documents noted plans to expand the focus of the mobile missile launcher program to “the remainder of the 4+1 problem sets,” referring to the four nations of China, Russia, Iran and North Korea plus terrorist groups.
Both supporters and critics of using AI to hunt missiles agree that it carries major risks. It could compress decision-making timelines in a nuclear crisis, increase the chances of computer-generated errors, and provoke an AI arms race with Russia and China that could upset the global nuclear balance.
U.S. Air Force Gen. John Hyten, the top commander of U.S. nuclear forces, said that once AI-driven systems become fully operational, the Pentagon will need to think about creating safeguards to ensure that humans control the pace of nuclear decision-making — the “escalation ladder,” in Pentagon-speak.
Artificial intelligence “could force you onto that ladder if you don’t put the safeguards in,” Hyten said in an interview. “Once you’re on it, then everything starts moving.”
Experts at the Rand Corp., a public policy research body, and elsewhere say there is a high probability that countries like China and Russia could try to trick an AI missile-hunting system, learning to hide their missiles from identification.
There is some evidence to suggest they could be successful.
An experiment by MIT students showed how easy it was to dupe an advanced Google image classifier, software that identifies objects in pictures. Students fooled the system into concluding a plastic turtle was actually a rifle, and that a cat was guacamole. For details, see jtim.es/5uB930kmk6e.
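The MIT demonstrations relied on so-called adversarial examples: inputs nudged just enough, in exactly the right direction, to flip a model's answer. A minimal sketch of the underlying idea is below, using the fast gradient sign method against a toy linear classifier (hypothetical weights and data, nothing resembling Google's actual system):

```python
import numpy as np

# Toy "classifier": logistic regression with fixed weights.
w = np.array([2.0, -1.0, 0.5])
b = 0.0

def predict(x):
    """Return the probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def fgsm_perturb(x, epsilon):
    """Fast-gradient-sign attack: shift each feature a bounded
    amount in the direction that most reduces the class-1 score."""
    # For a linear model, the gradient of the class-1 logit with
    # respect to x is just w, so stepping against sign(w) lowers it.
    return x - epsilon * np.sign(w)

x = np.array([1.0, 0.5, 1.0])        # confidently classified as class 1
x_adv = fgsm_perturb(x, epsilon=0.8)  # small, bounded change per feature
print(predict(x))      # well above 0.5
print(predict(x_adv))  # drops below 0.5: the label flips
```

Attacks on deep image classifiers work the same way in spirit, except the gradient must be computed through the network, and the perturbation can be small enough to be invisible to a human looking at the image.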
Steven Walker, director of the Defense Advanced Research Projects Agency (DARPA), a pioneer in AI that initially funded what became the Internet, said the Pentagon still needs humans to review AI systems’ conclusions “because these systems can be fooled.”
DARPA is working on a project to make AI-driven systems capable of better explaining themselves to human analysts, something the agency believes will be critical for high-stakes national security programs.
Among those working to improve the effectiveness of AI is William “Buzz” Roberts, director for automation, AI and augmentation at the National Geospatial-Intelligence Agency (NGA). Roberts works on the front lines of the U.S. government’s efforts to develop AI to help analyze satellite imagery, a crucial source of data for missile hunters.
Last year, NGA said it had used AI to scan and analyze 12 million images. So far, Roberts said, NGA researchers have made progress in getting AI to help identify the presence or absence of a target of interest. He declined to discuss individual programs.
In trying to assess potential national security threats, the NGA researchers work under a different kind of pressure from their counterparts in the private sector.
“We can’t be wrong. … A lot of the commercial advancements in AI, machine learning, computer vision — if they’re half right, they’re good,” said Roberts.
Although some officials believe elements of the AI missile program could become viable in the early 2020s, others in the U.S. government and Congress fear research efforts are too limited.
“The Russians and the Chinese are definitely pursuing these sorts of things,” Rep. Mac Thornberry, the House Armed Services Committee’s chairman, told Reuters — “probably with greater effort, in some ways, than we have.”