
The (im)possibility of meaningful human control for lethal autonomous weapon systems

This week, the Group of Governmental Experts (GGE) on lethal autonomous weapon systems (LAWS) is holding its third meeting under the UN Convention on Certain Conventional Weapons (CCW) in Geneva. The concept of ‘meaningful human control’ is expected to remain a central theme in the discussions. While there is not yet an internationally agreed definition of what precisely meaningful human control constitutes, advocates and opponents of LAWS clearly converge on the view that some degree of human control over the critical functions of LAWS is vital. For many advocates of a maximally restrictive legal framework on LAWS, this is a step in the right direction, as it would address some of the legal concerns raised in the debates, albeit one that is contingent on the type and degree of control.

However, while a consensus on meaningful human control may help resolve some questions of legal accountability, the ethical dimensions may not be so easily settled. As efforts to incorporate artificial intelligence (AI) into military technologies advance apace, despite repeated attempts by prominent voices in the AI community to raise the alarm against repurposing AI capabilities for LAWS, the ability to retain adequate levels of agency to make moral decisions is called into question. This is an ethical issue of grave concern, and it should not be passed over in favour of consensus on a legal definition.

In what follows, I challenge the presupposition that we can meaningfully be in control of autonomous weapon systems, especially as they become increasingly AI controlled. I argue that their technological features progressively close the spaces required for human moral agency. In particular, I briefly highlight three technological features that limit meaningful human control: 1) the cognitive limitations produced in human-machine interface operations; 2) the epistemological limitations that accompany the large amounts of data on which AI systems rely; and 3) the temporal limitations that are inevitable when LAWS take on identification and targeting functions.

Technology as social power

Much of the public debate on LAWS, and on military technology in general, embraces an instrumentalist position. Such a position presupposes that we, as humans, can retain full agency and direction over the tools we avail ourselves of. In short, it assumes that we use our technologies as a means to reach our intended aims and goals. Take, for example, the telephone. We use it as an instrument to communicate with individuals in a different location, in real time. This seems to be a non-controversial position. However, as technological systems become more complex and ubiquitous, this presupposition is not as straightforward as it seems.

My discussion here starts with a different assumption—one that is in line with many contemporary philosophers of technology—namely that technology is never just a tool we employ at will, but rather carries a social power of its own. In other words, technology is not merely an instrument used in an isolated manner; rather, it has the capacity to shape, influence and produce new outcomes, aims, practices and frames of reference for decision making.

We are thus dealing with webs of relations in which we are embedded as agents—among many others, human and non-human—which all shape our perspectives and actions. These relations, and their impact, may not be immediately evident. Consider, once more, the telephone. The smartphone in your pocket is a very different type of technology from the traditional landline phone. Unlike the landline, the smartphone produces a different attention economy. This attention economy, in turn, shapes expectations and practices in ways that were previously not thought of, possible or relevant.

I thus suggest that it is worthwhile to consider the ways in which technologies might facilitate new, positive human capacities, but also produce new incapacities and limitations on our human agency. This condition is as applicable to emerging technologies in warfare as it is to our daily digital ecologies—perhaps even more so—where the human is embedded within an intricate system of technological elements. The digital webs of relations in which LAWS are embedded favour a distinctly technological logic of information and decision-making, shaping how we think about ethics in general, and how we think about using technologically mediated force in particular.

If we take the social power of digital technology seriously—and I suggest we should—can we then readily assume that we are well equipped to retain meaningful human control over lethal autonomous weapon systems, especially as they draw on AI to conduct their functions? I will briefly sketch three interlinked reasons why the possibility of meaningful human control may be more elusive with advanced AI-supported LAWS than is currently suggested in the debates, and why the concept may not be sufficient to assure moral responsibility in the use of force with LAWS.

Limitations on meaningful human control

The lack of an agreed definition of meaningful human control means that the parameters of what exactly the concept should or can entail remain subject to ongoing debate. Various organisations, including the ICRC and Article 36, have proposed sets of features that serve as a useful general guide to what meaningful human control should take into consideration. These include that the technology is predictable and reliable; that it is transparent; that the user has accurate information about the purpose and process of the system and understands that information in context; that the user has the ability to intervene in a timely manner; and that the use is linked to a certain level of accountability. These are conscientiously crafted elements, and important in furthering the debate.

However, given the nature of the technologies in question, we may not be able to deliver on these dimensions for human control in an actually meaningful manner for the following reasons: our cognitive limitations in human-machine interface operations; the epistemological foundations on which algorithmic decisions rest; and finally, the time horizons within which autonomous intelligent decisions are set to act.

Cognitive limitations

There is an extensive body of scholarship in cognitive psychology attesting to the fact that we experience cognitive limitations when interacting with computational systems. Noel Sharkey outlines this effectively and in more detail in a briefing paper for the GGE meeting in April. In the paper, Sharkey notes that, as humans, we typically make decisions based on two types of reasoning. The first is deliberative reasoning, a process in which we draw on more extensive memory resources, typically required for decisions of considerable weight and impact, such as foreign policy decisions. The second is automatic reasoning, which we use for routine events in life. This type of reasoning is, as the name suggests, automatic and therefore takes place much more quickly. As humans, we tend to choose the path of least resistance, so as to cope with the many demands we face in daily life.

Automatic reasoning is our first response to most events and occurrences. It can be overridden by deliberative reasoning in novel or exceptional situations, but in short, it is the go-to mode for making decisions. In our interactions with machines, we tend to draw on this type of reasoning first, and often predominantly. The faster and more autonomous the operational mode of the system, the less likely it is that deliberative reasoning will play a role in decision making.

When it comes to decisions of a lethal nature, understanding the properties of automatic reasoning is important. Automatic reasoning tends to cut corners: it sidelines ambiguity and doubt, assimilates fragments of information into a familiar, coherent narrative and ignores absent evidence. This is particularly pronounced in our interaction with computational technologies. Studies have consistently shown that humans tend to place uncritical trust in computer-based decision systems (automation bias), ignoring, or not searching for, information that contradicts a computer-generated solution. This applies not only to fully autonomous systems, but also to ‘mixed-mode’ systems where the human is in the loop to review the decisions.

This cognitive limitation and the tendency toward automatic reasoning could potentially be overcome with training, expertise and experience in working with specific technological systems. However, within the parameters of AI, the risk is that the human decision maker is unable to ‘develop an appropriate mental model which is crucial to overcome system failure’. Where automatic reasoning prevails, greater trust and authority are placed in the technology and its decisions, and human moral agency is diminished.

Epistemic limitations

AI systems rely on the existence and availability of large amounts of data in order to perform an evaluative or predictive analysis of any given situation. AI is trained to capture the present through the lens of past data, to identify patterns and to make efficient, future-oriented assessments. This carries a number of ramifications.

First, it prioritises a datafication of the environment upon which the AI is set to work. This means that the context within which the AI system is tasked to make an assessment or decision needs to be labelled and categorized so that it can be read as data. By default, then, everything that is not easily rendered in a numerical data format falls outside of any algorithmically determinable decision, as it remains invisible to any AI system.

Moreover, the quality, origin and quantity of the data available matter. While AI labelling and classification are now quite advanced and reliable for fixed categories (a chair, a cat, a bird, etc.), it is far harder to train an AI to ‘understand’ the relational and more fluid dimensions of life (friendship, enmity, identity, culture, social relations, etc.). Decisions based on incomplete or incorrectly labelled data may lead to biased and unfair outcomes.

If an AI builds a world model based on available data, it is likely to be much more successful in closed systems whose parameters can easily be captured as data. In the context of warfare, where parameters are less fixed and more fluid and dynamic, the AI system may suggest a course of action that rests on an epistemic foundation that is biased, incomplete or otherwise not fully appropriate to the situation.
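To make the point concrete, here is a minimal, purely illustrative sketch in Python (not drawn from any actual weapon system; the feature vectors, labels and numbers are all hypothetical). A toy classifier trained on a fixed set of labelled examples must always answer with one of the labels it knows, and it will attach a confidence score even to an input that resembles none of its training data; anything outside its labelled categories simply does not exist for it.

```python
# Illustrative sketch only: a toy nearest-centroid classifier with a fixed label set.
# All data, labels and numbers are hypothetical.

import math

# Hypothetical training data: each category is defined only by past, labelled examples.
TRAINING_DATA = {
    "chair": [(0.9, 0.1), (0.8, 0.2), (0.85, 0.15)],
    "cat":   [(0.1, 0.9), (0.2, 0.8), (0.15, 0.85)],
    "bird":  [(0.5, 0.5), (0.55, 0.45), (0.45, 0.55)],
}

def centroid(points):
    """Average of the labelled examples for one category."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING_DATA.items()}

def classify(x, y):
    """Return the closest known label and a naive 'confidence' score.

    Note: the classifier has no way to answer 'none of the above';
    every input is forced into one of the fixed categories.
    """
    distances = {label: math.dist((x, y), c) for label, c in CENTROIDS.items()}
    best = min(distances, key=distances.get)
    # Crude confidence: inverse distance, normalised over all known labels.
    inv = {label: 1.0 / (d + 1e-9) for label, d in distances.items()}
    return best, inv[best] / sum(inv.values())

if __name__ == "__main__":
    # An in-distribution input, close to the 'cat' examples.
    print(classify(0.12, 0.88))
    # An out-of-distribution input, unlike anything in the training data:
    # the system still returns a known label with a confidence score.
    print(classify(42.0, -17.0))
```

Real systems are of course vastly more sophisticated, but the structural limitation is the same: the classifier’s world is exhausted by its label set and its training data.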

There is already ample evidence that state-of-the-art AI facial recognition systems, for example, produce biased and potentially erroneous outcomes. Where AI systems operate within a complex and uncertain environment, the results may well be undesirable, if not disastrous, yet difficult to challenge, both during and after an event. One need look no further than the 2010 flash crash, in which high-frequency trading algorithms wiped some 600 points off the Dow Jones Index within a matter of minutes. It took years to get to the heart of what caused the costly event. As China seeks to harness the success of AlphaGo’s impenetrable deep neural network learning for its military strategy, the ability to epistemically understand the AI decision process is further hampered, as the AI may engage in calculations that are not intelligible even to its programmers or engineers. For human decision makers to retain agency over the morally relevant decisions made with AI, they would need clear insight into the AI black box: into the data, its provenance and the logic of its algorithms.

Temporal limitations

The ability to retain control becomes an even less realistic prospect when we consider that the main allure of autonomous systems is speed and efficiency. The promised speed and efficiency of LAWS is an important advantage over the adversary. Where speed is a key factor, time horizons for decision making inevitably shrink. This is already evident with fire-and-forget systems such as Phalanx or SeaRAM, which ‘complete their detection, evaluation and response process within a matter of seconds’. Once activated, it becomes very difficult for a human operator to exercise control over the system. Such collapsed time horizons are likely to be exacerbated by technologies that make calculations within nanoseconds, eclipsing any horizon for timely, meaningful intervention.
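As a rough, purely hypothetical illustration of how these time horizons collapse, the following back-of-the-envelope sketch (in Python, with assumed figures that do not describe any real system) compares a nominal human reaction time and deliberation time with increasingly fast automated decision cycles.

```python
# Purely illustrative back-of-the-envelope comparison.
# All figures are hypothetical assumptions, not measurements of any real system.

HUMAN_REACTION_S = 0.25      # assumed simple reaction time, roughly 250 ms
HUMAN_DELIBERATION_S = 30.0  # assumed time for a considered, deliberative judgement

# Assumed per-cycle durations for increasingly fast automated decision loops.
ASSUMED_CYCLE_TIMES_S = {
    "millisecond-scale targeting loop": 1e-3,
    "microsecond-scale computation":    1e-6,
    "nanosecond-scale computation":     1e-9,
}

for label, cycle_s in ASSUMED_CYCLE_TIMES_S.items():
    before_reaction = HUMAN_REACTION_S / cycle_s
    before_deliberation = HUMAN_DELIBERATION_S / cycle_s
    print(f"{label}:")
    print(f"  ~{before_reaction:,.0f} cycles finish before a human can even react")
    print(f"  ~{before_deliberation:,.0f} cycles finish before a deliberative decision is reached")
```

Even on generous assumptions about human speed, anywhere from hundreds to hundreds of millions of machine cycles can complete before an operator has reacted at all.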

Moreover, where technologies are not merely diagnostic or descriptive, but predictive and prescriptive—which is the aim of many AI systems—they become distinctly future-oriented, privileging optimisation toward action points. In doing so, they produce a sense of urgency, whereby actions must be taken in good faith on the basis of digital memory and information. This condition may affect how the categories of ‘necessity’ and ‘imminence’ are interpreted in justifying lethal force.

***

The question remains: to what degree are we able to act as moral agents in the use of lethal autonomous, intelligent weapons? If we cannot readily understand or predict how AI-supported LAWS might interact with the contingent, dynamic environment of warfare, if we are unable to intervene in a timely manner, and if we are unable to challenge an algorithmic decision on its technological authority, is it possible to retain the level of human control required for a morally meaningful decision? I am doubtful.

The category of meaningful human control can serve as a valuable concept for the advancement of a legislative framework. But we should avoid conflating the legal concept with the possibility of retaining moral agency over life and death decisions with LAWS, particularly when AI enters the mix. As warfare becomes increasingly systematised through digital networks and algorithmic architectures, we should be mindful that these architectures might affect our ethical thinking and acting in ways that move ever further from a humanist framework and edge ever closer to the purely cost-calculative logic of machines, within which our moral agency inevitably atrophies.



DISCLAIMER: Posts and discussion on the Humanitarian Law & Policy blog may not be interpreted as positioning the ICRC in any way, nor does the blog’s content amount to formal policy or doctrine, unless specifically indicated.
