Autonomous weapon systems: what the law says – and does not say – about the human role in the use of force

Intergovernmental discussions on the regulation of emerging technologies in the area of (lethal) autonomous weapon systems (AWS) are back on track in Geneva after more than a year of COVID-19-related disruptions. A critical task facing States is to further clarify how international humanitarian law (IHL) applies: what limits does it place on the development and use of AWS and, perhaps most importantly, what does it require from humans in the use of force?

In this post, Laura Bruun from the Stockholm International Peace Research Institute (SIPRI), reflects on whether IHL provides sufficiently clear guidance as to how humans and machines may interact in use of force decisions. Building on the findings of a recent SIPRI study, she argues that clarification may be warranted and provides concrete suggestions on how States may further identify what IHL compliance requires in the development and use of AWS.

Autonomous weapon systems (AWS) may radically transform the way humans make decisions in armed conflicts. AWS, by most definitions, differ from other weapons in their ability, once activated, to select and engage targets without human intervention. This capability may offer operational and humanitarian benefits, but it also raises fundamental legal, ethical and strategic concerns. While those benefits and risks remain subject to debate, progress in autonomy has led the international community to consider one fundamental question: what is the role of humans in the use of force, and to what extent, if any, may life-and-death decisions be “delegated” to machines?

In search of answers, States are looking to international humanitarian law (IHL) as one of the applicable legal frameworks. SIPRI’s research, however, shows that existing IHL does not provide sufficiently clear guidance about what is required from humans (and permitted from technology) in the use of force. States should therefore seek further clarification. One way to do so would be to elaborate on what respecting and ensuring respect for IHL means across four dimensions: who should do what, when and where?

Regulating AWS and the question of human–machine interaction

The legal, ethical and military challenges posed by AWS – and how these should be addressed – have been subject to intergovernmental discussion for almost a decade. The debate, initiated within the framework of the 1980 Convention on Certain Conventional Weapons (CCW), has been led since 2017 by a group of governmental experts (GGE). The GGE is mandated to adopt consensus recommendations in relation to the clarification, consideration and development of aspects of the normative and operational framework on emerging technologies in the area of lethal autonomous weapon systems (LAWS). The central question in that regard is whether existing rules of IHL provide a sufficiently clear regulatory framework or whether new rules, standards or best practices are needed to address the unique characteristics—and challenges—of AWS.

While States continue to have different views on that matter, the GGE has reached consensus on a number of substantial issues. In particular, it has reaffirmed that IHL applies to AWS and agreed that humans, not machines, remain responsible for the development and use of AWS. The group has further established that a certain quality and extent of human-machine interaction (HMI) is needed to ensure that the development and use of AWS is compliant with international law, IHL in particular. The attention paid to HMI in the GGE reflects the consensus that autonomy in weapon systems cannot be unlimited and that human involvement is needed.

What level of human involvement – or in GGE parlance, what type and degree of HMI – is required by IHL is now a central, if not the most central, issue of the debate. The GGE agrees that there cannot be a one-size-fits-all approach to HMI, as the type and degree needed may vary depending on the weapon and the context of use. The vexing question that remains is: how, and on what basis, would a State identify what quality and extent of HMI is needed in a given situation?

For many States, answers to questions around how humans and machines may lawfully interact could be found in a more systematic exploration of IHL. To support States with such an exercise, in 2021 SIPRI conducted a mapping study, ‘Autonomous Weapon Systems and International Humanitarian Law’, which identified key questions that would need to be addressed in order to clarify what type and degree of HMI is needed to ensure IHL compliance. The following section outlines some of the key findings from that exercise.

What IHL says—and does not say—about the human role in the use of force

The main purpose of IHL is to limit the effects of armed conflicts, setting out rules, restrictions and prohibitions intended to protect the civilian population from the use of force while also sparing combatants superfluous injury and unnecessary suffering. These rules apply to all weapons, AWS included.

However, while IHL is clear about what effects are unlawful, it is less clear about how lawful effects may be produced. This lack of clarity has been brought to the surface by autonomy. AWS raise the question as to whether, and to what extent, IHL obligations—notably those demanded by the principles of distinction, proportionality and precautions in attack—may be implemented through machine processes. SIPRI’s study found that existing rules of IHL do not provide a clear answer to that question. It also found that, as a result, States come to different conclusions in their interpretation of the rules. Some argue that, as long as the effects are lawful, there is no problem (legally) with delegating tasks to machines. Others argue that, to comply with IHL, the entire process of applying force implicitly requires—and needs to reflect—human agency.

Consequently, there is a risk that States come to radically different conclusions about the type and degree of HMI needed for IHL compliance in a given situation. Further clarification of IHL may therefore be warranted.

Who, what, when and where?

Autonomy in weapon systems does not replace human decision-making. Rather, it transforms the way humans make decisions in warfare. The decision-making process in relation to AWS is likely to involve a larger number of people, and there may be greater temporal and spatial distance between decision makers and the application of force. This transformation poses novel challenges to the application of IHL, raising questions of how far in advance, by how many people, and at what distance from the site of force application IHL obligations can be exercised.

In order to clarify such issues, the GGE could usefully address how autonomy affects the exercise of IHL obligations across at least four dimensions. The first dimension relates to who (and how many people) may be responsible for complying with IHL. The second dimension relates to what type and degree of HMI the rules of IHL require, permit or prohibit. The third and fourth dimensions relate to when and in relation to what locations IHL provisions need to be complied with.

Addressing who should do what, when and where in the development and use of AWS will be essential to clarify what type and degree of HMI may be warranted for IHL compliance in a given situation. While each dimension merits attention on its own, it is when these dimensions are considered in combination that limits may be identified. For example, how do the characteristics of the operational environment affect how far in advance a proportionality assessment may be made, and who needs to be involved in the exercise of IHL evaluations? One way to usefully explore these dimensions and their interdependency could be through scenario exercises. By working through different scenarios, dialling the different dimensions up and down, States would be better able to identify where to draw the lines in terms of which tasks may be delegated to machines and which should remain with humans.

Clarifying the human role in the use of force

After years of discussions around the governance of AWS, the GGE is at a crossroads. The group is approaching the Sixth Review Conference of the CCW in December, considered a critical juncture for the international community’s response to the challenges raised by increasing autonomy in weapon systems. Further clarification on what IHL compliance demands from humans—and permits from machines—is essential to determine whether new regulation is needed and in what form. This is particularly needed if the GGE, as indicated in recent sessions, seeks to prohibit some types of AWS and limit the use of others. Structured and deeper discussions around the four-dimensional framework will help States identify the red lines and green zones of the IHL landscape: which types of AWS should be prohibited and which should be regulated.

Efforts to clarify what IHL says about the human role in the use of force are also relevant beyond the case of AWS. Technological developments in other areas, including advances in artificial intelligence, pose similar questions about human agency and decision-making. It is therefore time for the international community to consider not only what the law already says, or does not say, about the human role in the use of force, but also what it should say, regardless of the technology being used.
