Despite growing concerns about the breakneck speed at which the world’s armed forces are incorporating artificial intelligence into their weapons and systems, global cooperation in regulating the military use of the cutting-edge technology is proving elusive.

The challenges were highlighted on Tuesday, the final day of the Responsible AI in the Military Domain (REAIM) summit in Seoul, as over a third of the 96 participating countries, including military powers such as China, Russia and Israel, refused to back a "blueprint for action" that puts a strong emphasis on human oversight.

A total of 60 nations, including the United States and most of its allies, backed the declaration, but there is no guarantee they will adhere to it, experts warned, pointing to its nonbinding nature and the significant military advantages AI provides at a time of growing international tensions.

The new document underscores the importance of setting guardrails and retaining human control over life-and-death decisions, warning that these emerging technologies, while potentially beneficial for militaries, can also present “foreseeable and unforeseeable risks.” These could be the result of design flaws or the potential “misuse or malicious use” of the technology.

To mitigate unintended consequences, the declaration calls for “appropriate human involvement ... in the development, deployment and use of AI," emphasizing that “responsibility and accountability” for the use and effects of this technology “can never be transferred to machines.”

This blueprint reflects an awareness “of how AI use in warfare could go awry, particularly if various militaries’ most potent tools like nuclear weapons are not supervised by human controls,” said Ali Plucinski, a cyberanalyst at the geopolitics and intelligence firm RANE.

Western governments, in particular, are seeking to "reassure citizens and international counterparts that their integration of AI in their offensive military operations will be conducted with extensive risk assessment and oversight protocol to minimize unforeseen risks and peripheral harm to civilians and civilian infrastructure,” she said.

Participants stand by the Tenebris midsize unmanned surface vessel concept, on display at the REAIM summit in Seoul on Tuesday. | AFP-JIJI

This week’s REAIM summit was only the second of its kind. The first was hosted in the Netherlands last year, resulting in the U.S., China and about 60 other countries backing a more modest "call to action."

Experts say the value of such gatherings lies primarily in promoting international dialogue on the issue.

These forums “create space for countries to discuss, align, and refine their visions and approaches to regulating military AI,” said Kateryna Bondar, an advanced technologies expert at the U.S.-based Center for Strategic and International Studies.

Even though the resulting agreements may not be binding, the discussions reveal geopolitical tensions, clarify national positions and open the door to crucial conversations that can create common ground in the future, she said.

The summits could even act as starting points for establishing norms.

“There is sometimes a tendency to assume that a binding agreement is essential on every single issue — and there are certainly some issues associated with military use of AI that merit legally enforceable constraints — but building consensus is also an important aspect of any governance process,” said Manoj Harjani, coordinator of the Military Transformations Program at the Singapore-based S. Rajaratnam School of International Studies.

The rapid adoption of AI-enabled military systems in recent conflicts, as well as their growing importance to planners, has underscored the urgent need to assess the potential unintended consequences of misusing this technology.

This is exemplified by the unregulated development and use of battlefield drones, where the only real constraint is the fear of algorithmic errors, such as those that result in friendly fire.

However, countries directly involved in conflicts have little incentive to slow this development, as shown in the Russia-Ukraine war, where both sides are increasingly leveraging AI-enhanced drones for kinetic, deadly action with minimal human oversight.

“They are reluctant to impose restrictions, viewing military AI as a critical advantage,” Bondar said, noting that even those nations that have endorsed the REAIM declaration and similar statements will often act in their own interests.

“In matters of national survival, no statement or agreement, no matter how well-intentioned, will prevent a country from doing whatever it takes to ensure its own security,” Bondar added.

Participants look at a miniature of the KF-21 fighter jet (front) on display at the REAIM summit in Seoul on Tuesday. | AFP-JIJI

One example of this may be Ukraine.

While Kyiv has backed the REAIM blueprint, it had already made clear in a June white paper by its Ministry of Digital Transformation that Ukraine “in no way” intends to propose regulating AI systems in the defense sector due to national security concerns.

“Unilateral regulation ... will only put our country in a less favorable position compared to the aggressor, which will not implement such regulation,” it said.

According to Plucinski, while several countries want to publicly demonstrate their commitment to safe AI development, governments are also “heavily invested” in AI innovation and in getting ahead of competitors in harnessing the technology.

This is particularly true in the military field, where countries wish to maximize AI’s potential for their national defense and foreign policy objectives, which constrains their willingness to implement robust AI regulations that could stymie innovation, she said.

The value of various recent nonbinding international AI pledges — including the Bletchley Declaration, the G7 AI Code of Conduct and the United Nations General Assembly AI resolution — is therefore largely symbolic, observers say.

Another issue is the sheer number of obstacles governments face when trying to regulate AI, a challenge Harjani compares to “a constant game of catch-up,” with the technology evolving far faster than the typically slow-moving regulatory process.

Perhaps more importantly, Harjani added, it would be “unrealistic” to expect countries to impose constraints on themselves when the strategic advantages of AI continue to evolve.

Global AI regulation is also hindered by the lack of a supranational enforcement agency that can ensure countries are implementing safety mechanisms on AI systems.

The world’s top military powers, the U.S. and China, are now neck-and-neck in the race to be the world’s top AI innovator. Plucinski pointed out that the Pentagon has already integrated the technology into several projects, including its Combined Joint All-Domain Command and Control strategy as well as unmanned aerial, naval and ground vehicles and other systems.

But while the U.S. has enjoyed superiority in military technology since the end of the Cold War — and tech companies such as Google, IBM, Microsoft and Meta have leading or strong positions in AI, quantum and computing technologies — this edge is rapidly being eroded by China, which is determined to become a global leader in AI and machine-learning technologies that could revolutionize warfare.

According to a report published late last month by the Australian Strategic Policy Institute, China has extended its lead as the world’s top research nation, ranking first in 57, or nearly 90%, of the 64 categories examined by the think tank, including advanced data analysis, AI algorithms, machine learning and adversarial AI.

While China’s military doctrine is more opaque, Beijing has made clear in its defense papers that it aims to integrate these innovations into the People’s Liberation Army, creating a “world-class” force that offsets U.S. conventional military supremacy in the Indo-Pacific and tilts the balance of power.

Chinese companies such as Tencent Holdings, Alibaba Group Holding and Huawei Technologies are also reported to be among the top 10 firms conducting AI research.

Given the high stakes, experts argue that, despite the slim chances of reaching one, a binding international agreement to regulate military AI would be “profoundly important, if possible.”

“AI, at its core, is still a product of human design — software that can be programmed to adhere to rules and regulations,” Bondar said.

“Unlike humans, AI does not experience emotions like anger, hatred, or fear, which often fuel destructive decisions in conflict,” she said, adding that if nations could agree on a framework, AI could be developed to “minimize civilian casualties and unnecessary destruction, following ethical guidelines embedded into its code.”