Autonomous control is human control
Autonomous capabilities in weapon systems are achieved by means of software-based control systems, essentially specialised computers, in communication with controlled weapons. Those control systems receive information from sensors, the environment, human operators and possibly other autonomous systems; process that information; and issue instructions to the controlled weapons.
The software running on a control system computer is, of course, written by humans. It comprises sets of instructions which embody human-defined decision processes relating to the functions subjected to autonomous control. That is not to say that the software would necessarily express decision processes identical to those that would be followed by humans. It only means that the process the AWS uses is ultimately defined by humans.
Those software instructions might specify a static set of steps the AWS is to follow in performing some assigned operation, or they might describe a process by which the AWS is to absorb information from its environment and ‘learn’ how to approach a task. Either way, the behaviour of the AWS is determined by that human-written software: whether and in what circumstances it should initiate an attack, how it should respond to changes in its environment and every other aspect of its behaviour (barring malfunctions). Autonomous control is therefore an exercise of human control, independently of whether any human is in a position to oversee or intervene in the operation in real time.
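The distinction between a static set of steps and a 'learned' process can be sketched abstractly. In this hypothetical illustration (all names and conditions are invented for the sketch), both controllers do nothing beyond executing a procedure their human authors specified in advance:

```python
# Hypothetical sketch: two autonomous controllers for a generic system.
# In both variants, the decision process is authored by humans beforehand.

def rule_based_controller(observation: dict) -> str:
    """A static, human-written set of steps: act only when
    human-defined conditions are met."""
    if observation.get("target_confirmed") and not observation.get("bystanders_present"):
        return "act"
    return "hold"

def learned_controller(observation: dict, learned_score) -> str:
    """A 'learning' variant: the score function was fitted to data,
    but the overall procedure and the acceptance threshold
    are still human-written."""
    score = learned_score(observation)  # output of a human-chosen training process
    if score > 0.9:                     # human-defined acceptance threshold
        return "act"
    return "hold"
```

In both variants the machine merely executes a human-specified procedure; in the learned case, humans additionally chose the training data, the model and the threshold.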
This (perhaps obvious) point is sometimes overlooked in discussions about controlling AWS, where concerns may be expressed about delegating or assigning decisions about the use of force to a machine. It is therefore worth emphasising: the heart of an AWS is just a computer running human-written software. An AWS is not a decision-making entity which takes over control of a weapon from its human operators. The AWS control system is itself the means of exercising human control.
The choice between using an AWS or a weapon without autonomous capabilities in an attack is not a choice between a human decision about the use of force and a machine decision. It is only a choice between a human decision made ‘live’ in the course of conducting the attack and a human-specified decision process defined at an earlier point, encoded in software and executed by a machine (albeit possibly using a different process than a human would use).
Similarly, the idea that an AWS would select targets and initiate attacks, while correct on a strictly technical level, is potentially misleading in a discussion about control. The decision to initiate an individual attack is not ceded to an AWS in the sense that it is no longer a human decision (an AWS, as an inanimate object, cannot make decisions in that sense). Rather, a human makes a decision utilising some process encoded in AWS control software: perhaps a decision to execute one or more individual attacks, or perhaps a decision to activate an AWS in the knowledge that doing so may result in an attack being launched. That is, the decision to launch an attack is still a human one, but the character of that decision may change depending on the scope of operation assigned to the AWS. Operators of an AWS could be making substantially the same attack decisions as when employing manually operated weapons, down to selecting individual targets, or they could be making only broader, more policy-like decisions and relying on previously defined software-encoded processes to execute those decisions in respect of individual targets.
It follows that legal and ethical constraints relating to the control and use of weapon systems govern reliance on the use of autonomous target selection and attack capabilities just as they govern the use of any other weapon system capability. IHL rules require (human) attackers to do everything feasible to ensure: that only legal targets are attacked; that the means and methods employed are those which minimise expected civilian harm; that attacks are cancelled or suspended if circumstances or information change; and so on. Those obligations may be met by reliance on AWS in some circumstances, and may require significant levels of direct human involvement in other cases.
A requirement for meaningful human control should not be seen simply as a requirement for some level of direct human interaction with the weapon system for its own sake. Rather, exercising meaningful human control means employing whatever measures are necessary, whether human or technical, to ensure that an operation involving an AWS is completed in accordance with a commander’s intent and with all applicable legal, ethical and other constraints. That means ensuring that autonomous systems are employed only to the extent that they can be shown to operate in a way which allows all those constraints to be met, and may or may not require that a human remain in or on the loop.
On the other hand, a range of arguments have been presented for mandating a degree of human involvement in AWS operations outside the context of maintaining control over weapon systems. Perhaps the most frequently cited are: that the principles of humanity and dictates of the public conscience would not permit reliance on highly autonomous weapons; that the dignity of enemy combatants and affected civilians would be eroded by attacks conducted with autonomous weapons; that reducing the political cost of warfare by removing one’s own soldiers from combat roles would lead countries to more readily resort to armed conflict; and so on. If those arguments are found to be persuasive, there may be value in recognising a minimum level of human involvement in AWS operations as a standalone requirement, rather than conflating it with questions about control over AWS.
This blog post ensued from the IHL roundtable on ‘Emerging military technologies applied to urban warfare’, held at Melbourne University Law School on 21 and 22 March 2018. The event was co-hosted by the ICRC, the Program on the Regulation of Emerging Military Technologies (PREMT) and the Asia Pacific Centre for Military Law (APCML) and took place in the context of the ICRC’s 2017-18 conference cycle on ‘War in Cities’. The multidisciplinary meeting gathered governmental, military and academic experts from various disciplines, including law, ethics, political science, philosophy, engineering and strategic studies.
DISCLAIMER: Posts and discussion on the Humanitarian Law & Policy blog may not be interpreted as positioning the ICRC in any way, nor does the blog’s content amount to formal policy or doctrine, unless specifically indicated.
Excellent article. Personally, I believe that no weapon or technology in the use of force should contradict or affect any norm of IHL, and if that is the case, new IHL legislation should be written.
Very good, succinct article. The thrust of your points is similar to the view of some (albeit a minority, in my view) of the parties at the CCW discussions in Geneva. However, if, as you say, the machine has the capacity to “learn”, I find it difficult to conceive how the humans who programmed the software previously could still be considered to have control. Up to the point where the machine learned, certainly, but I cannot see how humans could anticipate what the machine could possibly learn. Is it suggested that the developers could manage/regulate the learning by the AWS in advance? If that were to be possible, I think it would change the whole debate.
Developers would certainly have to place constraints on the range of behaviours which a weapon system could learn, such that any learned behaviours could be guaranteed to be consistent with applicable legal and ethical requirements. That is, any part of a weapon system’s control software which employs a machine learning algorithm would need to be validated and verified, just like any other program, to ensure that it would not result in any undesired behaviour. In that sense, any learning by an AWS would indeed need to be regulated in advance.
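One common engineering pattern for regulating learning in advance is to wrap any learned component in a fixed, human-written validation layer that rejects outputs falling outside a pre-approved envelope. A minimal sketch of that idea (names and actions hypothetical):

```python
# Hypothetical sketch: a human-written guard around a learned component.
# Whatever the model has learned, only actions inside the pre-approved
# envelope defined here can ever reach the controlled system.

APPROVED_ACTIONS = {"hold", "track", "request_human_review"}

def guarded_action(observation: dict, learned_policy) -> str:
    """Run the learned policy, but enforce human-defined limits on its output."""
    proposed = learned_policy(observation)
    if proposed not in APPROVED_ACTIONS:
        # Learned behaviour outside the validated envelope is never executed.
        return "request_human_review"
    return proposed
```

The guard itself contains no learned elements, so it can be verified and legally reviewed in advance regardless of how the learned policy later changes.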
However, there are questions about learning and weapon systems which have not been fully answered: Would developers need to anticipate every possible change in a machine’s behaviour as a result of its learning ability, or just take steps to ensure that its behaviour would remain within acceptable limits? Would a weapon system require a legal review whenever it learns some (significant) new behaviour, or would the learning process itself be the behaviour which must be reviewed? Answers to those questions could, as you say, change the debate.
Even if an AWS remains a tool for human control, albeit indirectly, what is “meaningful” about this kind of control? AWS can only execute that human control within a limited range of situations and assuming all goes as planned. Additionally, what happens if chip technology develops further (see neuromorphic technology) and allows computers to learn by doing, with limited human input?
It seems to me that an assessment of whether control is ‘meaningful’ must be based on the combined effect of all forms of control applied to the weapon: facilities for ‘live’ inputs by a human operator, instructions encoded in control system software, and any other means employed (although note that the definition of meaningful human control does not appear to be fully settled). An autonomous control system can play a part in applying a meaningful level of control to a weapon, as can a human operator.
Regarding new chip technologies, I’m not sure that further advancements in machine learning capabilities would fundamentally affect the debate. Any new capabilities would still have to be assessed in light of existing ethical and legal constraints, and used only to the extent that they allow those constraints to be satisfied.
This is a very thought-provoking blog entry. In my opinion, it would be helpful to separate the discussion of what human control involves from the discussion of what meaningful human oversight entails. Because programmers create software, the conduct of LAWS is a result of human decision-making. However, the nature of human decision-making differs from the nature of algorithmic decision-making. This calls for a more nuanced inquiry into whether the decision of a programmer to create a particular software architecture maps onto the LAWS’s assessment that a particular object constitutes a military objective. Moreover, it might be useful to develop different concepts of what control entails for different contexts in light of technological progress.