Autonomous weapons and human control
Concerns about ensuring sufficient human control over autonomous weapon systems (AWS) have been prominent since the earliest days of the international regulatory debate. They have motivated calls for a comprehensive, proactive ban on the development and use of highly autonomous weapons, as well as a range of more nuanced proposals for defining and maintaining what is most commonly described as ‘meaningful’ human control. (‘AWS’ here refers to any weapon system which exhibits a degree of autonomous capability in the critical functions of selecting and attacking targets, including existing weapon systems and those proposed for future development.)

Typically, degrees of human control over an AWS are expressed in terms of whether and in what capacity a human is directly involved in AWS operations, as in the well-known ‘in the loop’ / ‘on the loop’ / ‘out of the loop’ spectrum. That close association between direct human involvement and human control is questionable in the context of AWS. The main problem is that it tends to obscure the fact that autonomous control is itself a form of human control, one which can sometimes be beneficially employed alongside, or instead of, direct human involvement in the operation of a weapon system.

Autonomous control is human control

Autonomous capabilities in weapon systems are achieved by means of software-based control systems, essentially specialised computers in communication with the weapons they control. Those control systems receive information from sensors, the environment, human operators and possibly other autonomous systems; process that information; and issue instructions to the weapons.
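To make that architecture concrete, here is a minimal, purely illustrative sketch of such a sense-process-instruct cycle. Every name in it (`sensors`, `weapon`, `decide`) is a hypothetical placeholder, not a reference to any real system or API:

```python
# Illustrative sketch only: all names are hypothetical placeholders.

def control_loop(sensors, weapon, decide):
    """One controller cycle: gather inputs, apply human-defined logic, act.

    `decide` is a human-written function embodying the decision process
    the system's designers fixed in advance.
    """
    while weapon.is_active():
        observations = [sensor.read() for sensor in sensors]  # sense
        instruction = decide(observations)                    # process
        weapon.execute(instruction)                           # instruct
```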

The software running on a control system computer is, of course, written by humans. It comprises sets of instructions which embody human-defined decision processes relating to the functions subject to autonomous control. That is not to say that the software necessarily expresses decision processes identical to those a human would follow; it means only that the process the AWS uses is ultimately defined by humans.

Those software instructions might specify a static set of steps the AWS is to follow in performing some assigned operation, or they might describe a process by which the AWS is to absorb information from its environment and ‘learn’ how to approach a task. Either way, the behaviour of the AWS is determined by that human-written software: whether and in what circumstances it should initiate an attack, how it should respond to changes in its environment and every other aspect of its behaviour (barring malfunctions). Autonomous control is therefore an exercise of human control, independently of whether any human is in a position to oversee or intervene in the operation in real time.
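The two possibilities can be sketched side by side. This is a purely illustrative sketch; the functions, data and threshold are hypothetical assumptions, not drawn from any actual system. The point it illustrates is that, whether the rule is hand-coded or ‘learned’, humans fix the decision process in advance, either by writing the rule directly or by choosing the model, the training data and the engagement criteria:

```python
# Purely illustrative; all names and values are hypothetical.

# A static, hand-coded decision rule: humans wrote the condition explicitly.
def decide_static(observation):
    return "engage" if observation["matches_target_profile"] else "hold"

# A 'learned' decision rule: humans still chose the model family, the
# training data and the acceptance threshold, so the behaviour remains
# human-defined even though no one wrote the condition out by hand.
def train_model(human_labelled_examples):
    ...  # fit some classifier to human-selected, human-labelled data

def decide_learned(model, observation, threshold=0.99):
    score = model.score(observation)  # model's confidence in a match
    return "engage" if score >= threshold else "hold"
```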

This (perhaps obvious) point is sometimes overlooked in discussions about controlling AWS, where concerns may be expressed about delegating or assigning decisions about the use of force to a machine. It is therefore worth emphasising: the heart of an AWS is just a computer running human-written software. An AWS is not a decision-making entity which takes over control of a weapon from its human operators. The AWS control system is itself the means of exercising human control.

The choice between using an AWS or a weapon without autonomous capabilities in an attack is not a choice between a human decision about the use of force and a machine decision. It is only a choice between a human decision made ‘live’ in the course of conducting the attack and a human-specified decision process defined at an earlier point, encoded in software and executed by a machine (albeit one which may differ from the process a human would follow live).

Similarly, the idea that an AWS would select targets and initiate attacks, while correct on a strictly technical level, is potentially misleading in a discussion about control. The decision to initiate an individual attack is not ceded to an AWS in the sense that it is no longer a human decision, because an AWS, as an inanimate object, cannot make decisions in that sense. Rather, a human makes a decision (perhaps to activate an AWS in the knowledge that this may result in one or more attacks being launched) and relies on a process encoded in AWS control software to execute it. That is, the decision to launch an attack is still a human one, but the character of the human decision may change depending on the scope of operation assigned to the AWS. Operators of an AWS could be making substantially the same attack decisions as when employing manually operated weapons, down to selecting individual targets, or they could be making only broader, more policy-like decisions and relying on previously defined, software-encoded processes to execute those decisions in respect of individual targets.
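That difference in scope might be pictured as follows. This is a hypothetical sketch (none of these functions corresponds to a real interface): in the first mode the human approves each individual engagement; in the second the human makes a single, broader authorisation and the previously encoded process applies it to individual targets:

```python
# Hypothetical sketch of two scopes of human decision; not a real system.

def operate_with_per_target_approval(aws, operator):
    # The human decides each individual attack ('in the loop').
    for candidate in aws.detect_candidates():
        if operator.approves(candidate):
            aws.engage(candidate)

def operate_under_prior_authorisation(aws, mission_constraints):
    # The human decision is broader and made earlier: activation under
    # stated constraints, with individual engagements then executed by
    # the previously defined, human-written decision process.
    aws.activate(constraints=mission_constraints)
```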

It follows that legal and ethical constraints relating to the control and use of weapon systems govern reliance on autonomous target selection and attack capabilities just as they govern the use of any other weapon system capability. International humanitarian law (IHL) rules require (human) attackers to do everything feasible to ensure that only legal targets are attacked; that the means and methods employed are those which minimise expected civilian harm; that attacks are cancelled or suspended if circumstances or information change; and so on. Those obligations may be met by reliance on AWS in some circumstances and may require significant levels of direct human involvement in others.

A requirement for meaningful human control should not be seen simply as a requirement for some level of direct human interaction with the weapon system for its own sake. Rather, exercising meaningful human control means employing whatever measures are necessary, whether human or technical, to ensure that an operation involving an AWS is completed in accordance with a commander’s intent and with all applicable legal, ethical and other constraints. That means ensuring that autonomous systems are employed only to the extent that they can be shown to operate in a way which allows all those constraints to be met, and may or may not require that a human remain in or on the loop.

On the other hand, a range of arguments have been presented for mandating a degree of human involvement in AWS operations outside the context of maintaining control over weapon systems. Perhaps the most frequently cited are: that the principles of humanity and dictates of the public conscience would not permit reliance on highly autonomous weapons; that the dignity of enemy combatants and affected civilians would be eroded by attacks conducted with autonomous weapons; that reducing the political cost of warfare by removing one’s own soldiers from combat roles would lead countries to more readily resort to armed conflict; and so on. If those arguments are found to be persuasive, there may be value in recognising a minimum level of human involvement in AWS operations as a standalone requirement, rather than conflating it with questions about control over AWS.


This blog post arose from the IHL roundtable on ‘Emerging military technologies applied to urban warfare’, held at Melbourne University Law School on 21 and 22 March 2018. The event was co-hosted by the ICRC, the Program on the Regulation of Emerging Military Technologies (PREMT) and the Asia Pacific Centre for Military Law (APCML), and took place in the context of the ICRC’s 2017–18 conference cycle on ‘War in Cities’. The meeting gathered governmental, military and academic experts from disciplines including law, ethics, political science, philosophy, engineering and strategic studies.


DISCLAIMER: Posts and discussion on the Humanitarian Law & Policy blog may not be interpreted as positioning the ICRC in any way, nor does the blog’s content amount to formal policy or doctrine, unless specifically indicated.
