Autonomy in weapons systems: playing catch up with technology

For almost eight years now, the international community at the United Nations (UN) has been discussing the various ethical, legal and security-related issues surrounding autonomy in weapons systems. The Convention on Certain Conventional Weapons (CCW) in Geneva is the focal point of this exchange. Since 2017, a CCW Group of Governmental Experts (GGE) has been busy deliberating the structure and content of a possible ‘normative and operational framework’ for regulation.

In this post, Frank Sauer – a long-term observer and participant in the CCW process as well as the wider debate on weapon autonomy – examines the current state of the debate and charts a possible way forward for the discussion within the CCW framework.

In May and June of 2021, a UN Security Council report (S/2021/229) made headlines around the globe. Lethal autonomous weapons systems, it stated, had attacked fighters in a battle in Libya in 2020. It remained somewhat unclear, however, whether the quadcopters described in the report were remotely piloted – that is, whether a human was involved in the decision to apply force – and whether humans were in fact targeted, harmed or killed in the incident. But the peculiar term ‘lethal autonomous weapons systems’ (LAWS), UN parlance for weapons capable of selecting and engaging targets without human intervention, had once again found its way onto the front pages of newspapers and websites around the world.

While experts on the subject matter were not particularly surprised by the event (loitering munitions similar to those described in the report have been present on battlefields for at least a decade), a wider public was prompted to take note of the fact that weapons technology is – and has been for a while – at a point where algorithmically hunting, targeting and killing people is no longer the stuff of dystopian science fiction. But is the diplomatic process examining possible options for regulating weapon autonomy keeping pace?

Regulating autonomy in weapons systems: the state of play in 2021

Every military use of force follows a number of steps, among them the finding, fixing, tracking, selecting and engaging of a target. Delegating the two so-called ‘critical functions’ of the targeting cycle – the selection and engagement of targets – from a human to a machine is not as new as the Libya story above might suggest. Terminal defense systems firing at missiles, rockets or artillery shells have featured this kind of functionality for decades. They demonstrate that weapon autonomy is not necessarily problematic. In fact, used in an anti-materiel manner, as is the case when defending against incoming munitions, it is a useful instrument to protect human life on the battlefield.
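To make the distinction concrete, here is a minimal sketch in Python – purely illustrative, with all names, types and the simple control flow invented for this post – of a targeting cycle in which the critical functions of selection and engagement are gated by a human decision. Removing that gate, as a terminal defense system does when intercepting incoming munitions, is what delegating the critical functions to the machine amounts to.

```python
# Illustrative sketch only: all names and logic are hypothetical, not any real system.
from dataclasses import dataclass

@dataclass
class Track:
    """A candidate target produced by the find/fix/track steps."""
    track_id: int
    is_materiel: bool  # e.g. an incoming munition rather than a person

def find_fix_track(sensor_data):
    """Upstream, non-critical functions: build candidate tracks from sensor data."""
    return [Track(track_id=i, is_materiel=flag) for i, flag in enumerate(sensor_data)]

def human_approves(track: Track) -> bool:
    """Stand-in for the human decision to apply force ('human in the loop')."""
    answer = input(f"Engage track {track.track_id}? [y/N] ")
    return answer.strip().lower() == "y"

def targeting_cycle(sensor_data, autonomous_critical_functions: bool):
    for track in find_fix_track(sensor_data):
        # The two 'critical functions': selection and engagement.
        if autonomous_critical_functions:
            engage = track.is_materiel  # machine selects and engages on its own
        else:
            engage = human_approves(track)  # human retains the decision
        if engage:
            print(f"Engaging track {track.track_id}")

# A terminal defense system would run with autonomous_critical_functions=True;
# the CCW debate concerns where, if anywhere, that delegation is acceptable.
```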

However, for a number of years, and prompted by civil society and concerned scientists, States Parties at the CCW have been discussing the negative implications of unconstrained weapon autonomy in other applications. These include ethical concerns about humans being stripped of their dignity and reduced to data points fed into an algorithmic killing machine, legal questions surrounding accountability gaps and compliance with international humanitarian law (IHL), as well as security-related debates around the acceleration of combat operations to a point where uncontrollable escalation at machine speed looms large.

The CCW discussion progressed at a modest pace even before the current global pandemic. After COVID-19 hit, various multilateral fora managed to proceed formally in virtual formats, but a lack of consensus on procedure forced diplomats and civil society at the CCW to continue their exchange informally online.

A major recent milestone in this discussion was the ICRC’s recommendation that States adopt new, legally binding regulation. This is remarkable for at least three reasons.

One, it is quite rare, though not unprecedented, for the ICRC, the guardian of IHL, to explicitly recommend the creation of new international law. Two, when the ICRC does this, usually because it deems the foreseeable adverse consequences of inaction and the risk of existing principles eroding too great, its word carries a lot of weight. Past arms control and disarmament processes – such as the prohibitions on blinding lasers or anti-personnel landmines – were greatly affected by the ICRC’s positioning, and recent discussions in the GGE on LAWS suggest a similar impact. Three, without predetermining too many specifics, the ICRC’s position points to the possible contents and structure of the ‘normative and operational framework’ the GGE seeks to create. With new momentum gained, where might the GGE process go, especially in view of the next CCW Review Conference (RevCon) due to take place in December?

Fleshing out the ‘normative and operational framework’: a possible way forward

There are three main stumbling blocks impeding progress. The first is the misconstrued relevance of specific enabling technologies. The second is a misunderstanding surrounding the role of a so-called ‘technical definition of LAWS’. The third and final one is dissent regarding possible forms of regulation.

As seen above, terminal defense systems selecting and engaging targets without human intervention have been in use for decades. Clearly then, machine learning (or whatever is currently en vogue in the wide field that is artificial intelligence (AI)) is not a requirement for lending a weapons system autonomy in its critical functions. AI is a new and powerful enabler, of course. Recent innovations, such as advances in computer vision, allow weapon autonomy to be applied on a much larger scale, so that autonomous targeting has left its former niche applications. Nevertheless, the CCW discussion can remain largely agnostic regarding the precise characteristics of the underlying technologies. After all, since only humans can be moral agents, since legal decisions call for human judgment, and since humans are well suited to act as circuit breakers in runaway automated processes, the debate might as well focus on the human element rather than lose precious time debating supposed levels of technological sophistication or needlessly trying to differentiate between ‘autonomy’ and ‘automation’. This is directly connected to the still lingering question of a technical definition.

Definitional struggles no longer plague the CCW process on LAWS as much as they used to. But some stakeholders keep seeking the infamous ‘possible definition of LAWS’, the rationale being that arms control always requires a precise categorization of the object in question before any regulatory action can be taken. In the case of weapon autonomy, however, defining and then regulating a specific class of military hardware is not a workable approach. Almost any current and future weapons system can conceivably be endowed with autonomy in its critical functions, and no one will be able to tell what any given system’s level of dependence on human input is by merely inspecting it from the outside. In addition, autonomous functionality will in many cases be distributed in a system-of-systems architecture, that is, independent of one specific platform, and eventually enabling it will come down to nothing more than clicking a check box in a user interface.

Hence the challenge of regulation is not met by trying to categorically separate ‘LAWS’ from ‘non-LAWS’. Instead, it is met by developing a new norm to adjust human-machine interaction on the battlefield: who or what – human or machine – is supposed to perform which function in the targeting cycle, where and when? Finding context-dependent and differentiated answers to this question will yield the desired regulation on how autonomous weapons technologies can be applied in a manner that is ethically acceptable, compliant with IHL and prudent in terms of preserving international security and stability. Luckily, considerable headway has been made this year, not least due to the recent papers circulated by the Belgian chair of the GGE, which underlined that the issue under discussion is best characterized by asking under what circumstances autonomous target selection and engagement take place within a framework of human command and control.

Convergence is slowly but surely taking place not only in terms of conceptualization, resulting in much less ‘talking past each other’ at the CCW, but also regarding the structure of a possible regulation, which could eventually take shape as the fully fleshed-out ‘normative and operational framework’. A two-pronged approach combining prohibitions and regulations is emerging. First, there are specific applications of autonomy in the critical functions of weapons systems that are not acceptable to many members of the international community and should thus be prohibited. Here, the ICRC as well as the Campaign to Stop Killer Robots and a recently formed group of ten States at the CCW especially highlight the targeting of human beings. In addition, the ICRC as well as many States Parties suggest that uncontrollable autonomous weapons systems should be ruled out as well due to their potentially unforeseeable or indiscriminate effects on the battlefield.

Second, when applying force against target profiles other than those intended to represent humans, such as various military objects, autonomy in the critical functions is acceptable, but it requires certain limits and constraints, that is, positive obligations to curb ethical risks, safeguard IHL compliance and address security and safety concerns. Those limits and constraints can be temporal or spatial and can, generally speaking, be subsumed under the notion that human control – no matter if eventually characterized as ‘meaningful’, ‘substantial’ or ‘effective’ – must be preserved in the design and use of a weapons system, even and especially when it at times performs its critical functions autonomously.

The 2021 CCW RevCon and beyond

Arguably, a soft ‘proto-norm’ on weapon autonomy has emerged and socially taken hold already. After all, in 2021, virtually no one can contemplate and discuss autonomy in a weapon’s critical functions without being pointed to the serious concerns involved, the open letters published by the scientific community, the ongoing UN debates, emerging domestic legislation, and the large bodies of scholarly work in moral philosophy, international law, political science and beyond.

In other words, the debate overall, including the UN deliberations on weapon autonomy in Geneva, has come a long way. The conversation at the CCW in particular has gotten more productive and constructive recently, with convergence increasing.

That said, the regulatory structure sketched above, in particular, is far from universally accepted; nor is the notion that the next step should be to codify it in a legally binding instrument. The most recent GGE meeting in August demonstrated that at least a handful of States Parties are having none of it, thereby signaling their intent to prevent the consensus body from making headway.

At the same time, pressure on the CCW keeps increasing. In light of the upcoming RevCon, States Parties need to produce tangible results. If they fail to do so, calls for moving the process into another forum will most certainly grow louder. The CCW would then – once again – have served only as an incubator.

New, binding international law is needed. While weapon autonomy presents welcome opportunities in the optimization of defenses against munitions – protecting soldiers’ lives – leaving it unchecked and unregulated will make the world a more unsafe, uncertain, unstable, and inhumane place. Technology will not wait. It is time for UN diplomacy to catch up.
