Human judgment and lethal decision-making in war
For the fifth year in a row, government delegates meet at the United Nations in Geneva to discuss autonomous weapons. Meanwhile, the technology that enables greater autonomy in weapons races forward. The speed of technological change is a major hurdle in tackling the challenges of autonomous weapons. While advocates for a ban argue that the international community must come together before it is too late, opponents of a ban can point to technological progress to argue that someday machines might outperform humans in warfare.

The pace of change presents more than just political hurdles, though; it is a major problem for any regulation or ban that would be based on the state of technology today. Even the most thoughtful regulations or prohibitions will not be able to foresee all of the ways that autonomous weapons could evolve over time. An alternative approach would be to focus on the unchanging element in war: the human. If we had all the technology in the world, what role would we want humans to play in war, and why? What decisions in war require uniquely human judgment, not because machines cannot make them, but because they shouldn’t?

There has been growing interest in recent years in focusing on the role of the human in war. This concept is expressed in different ways, with various parties using terms like ‘meaningful human control’, ‘appropriate human judgment’, or ‘appropriate human involvement’. While these terms are not yet defined, they suggest broad agreement that there is some irreducible role for humans in lethal force decisions on the battlefield. Setting aside for the moment the specific label, what would be the underlying idea behind a principle of ‘_______ human _______’?

International humanitarian law may help give us some purchase on the problem. The laws of war do not specify what role(s) humans should play in lethal force decisions, but there is one critical way the laws of war treat machines differently from people: machines are not combatants. People fight wars, not robots.

This viewpoint, simple and straightforward, has important implications for defining the role of humans in warfare. The International Committee of the Red Cross (ICRC) articulated this view in its statement at the 2017 Group of Governmental Experts on Lethal Autonomous Weapon Systems:

From the perspective of international humanitarian law, it is clear that the rules on the conduct of hostilities are addressed to those who plan, decide upon, and carry out an attack. These rules, which apply to all attacks regardless of the means or methods employed, give rise to obligations for human combatants, who are responsible for respecting them. These legal obligations, and accountability for them, cannot be transferred to a machine, a computer program, or a weapon system.

The United States outlined a similar position in its Department of Defense Law of War Manual:

The law of war rules on conducting attacks (such as the rules relating to discrimination and proportionality) impose obligations on persons. These rules do not impose obligations on the weapons themselves; of course, an inanimate object could not assume an ‘obligation’ in any event. … The law of war does not require weapons to make legal determinations, even if the weapon (e.g., through computers, software, and sensors) may be characterized as capable of making factual determinations, such as whether to fire the weapon or to select and engage a target. … Rather, it is persons who must comply with the law of war.

It may seem obvious that the laws of war apply to people, not machines, but that position has important implications. It means when using a weapon—even an intelligent one—the person launching that weapon, or ordering the weapon to be launched, has a responsibility under international humanitarian law to ensure that the attack is lawful. The human cannot delegate this obligation to a machine. A human could delegate specific targeting functions to the weapon, but not the determination of whether or not to attack, nor the judgment about the lawfulness of the attack.

This suggests some minimum necessary human involvement in the attack. In order to make a determination about whether the attack complies with the principles of distinction, proportionality, and precautions in attack, the human must have some information about the specific attack. The human must have sufficient information about the target(s), the weapon, the environment and the context for the attack, to determine whether that particular attack is lawful. The attack also must be bounded in time, space, targets and means of attack in order for the determination about the lawfulness of that attack to be meaningful. There would presumably be some conditions (time elapsed, geographic boundaries crossed, circumstances changed) under which the human’s determination about the lawfulness of the attack might no longer be valid. How much information the person needs, and where the bounds on the weapon’s autonomy should lie, are open for debate, but this position would seem to establish some minimum necessary standard for human involvement in attacks.

Of course, a critical question is: What constitutes an ‘attack’? The Geneva Conventions define an ‘attack’ as ‘acts of violence against the adversary, whether in offence or in defence’. The use of the plural ‘acts of violence’ suggests that an attack could consist of many engagements. Thus, a human would not need to approve every single target. An autonomous weapon that searched for, decided to engage and engaged targets would be lawful, provided it was used in compliance with the other rules of international humanitarian law and a human approved the attack. At the same time, an attack is bounded in space and time. It wouldn’t make sense to speak of a single attack going on for months or to call the entirety of a war a single attack. The ICRC made this point explicitly in a 1987 commentary on the definition of an attack, noting an attack ‘is a technical term relating to a specific military operation limited in time and place’.

There is considerable flexibility in how an ‘attack’ might be defined. Nevertheless, international humanitarian law requires some minimum degree of human involvement in lethal force decisions: (1) human judgment about the lawfulness of an attack; (2) sufficient information about the target(s), weapon, environment and context for attack in order to make a determination about lawfulness of that particular attack; and (3) that the weapon’s autonomy be bounded in space, time, possible targets and means of attack.

As nations move into the fifth year of discussions on autonomous weapons, there could be merit in countries defining a common standard for human involvement in lethal force. While an overarching principle along these lines would not tell States which weapons are permitted and which are not, it could be a common starting point for evaluating technology as it evolves. Many principles in the law of war are open to interpretation: unnecessary suffering, proportionality and precautions in attack, for example. These terms do not tell States which weapons cause unnecessary suffering or how much collateral damage is proportionate, but they still have value. Similarly, a broad principle outlining the role of human judgment in war could be a valuable benchmark against which to evaluate future weapons.

***

Paul Scharre is a senior fellow and Director of the Technology and National Security Program at the Center for a New American Security. He is author of the forthcoming book, Army of None: Autonomous Weapons and the Future of War, to be released on April 24, 2018. Follow him on Twitter @paul_scharre.

***

This article is adapted from Army of None: Autonomous Weapons and the Future of War by Paul Scharre. © 2018 by Paul Scharre. Used with permission of the publisher, W.W. Norton & Company, Inc. All rights reserved.
