Contemporary armed conflicts are increasingly complex and, through rapid technological development, increasingly remote. This calls into question the capacity of a machine to apply human emotional traits such as empathy and caution, crucial for effective judgement and evaluation in challenging situations. Despite the precision and reliability that might be achieved through the increased automation of military activities such as target identification, from a humanitarian perspective, outsourcing such high-stakes decisions to machines is highly problematic.
In this post, Dr Joanna Wilson, Lecturer in Law at the University of the West of Scotland, calls for the urgent ‘rehumanization’ of military decision-making. Emotions play a key role in this. While sometimes blamed for unpredictable, erratic human behaviour, for which a machine might therefore be viewed as a welcome alternative, emotions are indispensable for effective and flexible moral reasoning, intuition, and self-regulation. The use of artificial intelligence (AI) should thus be limited exclusively to supplementing and facilitating human agency and decision-making: a technological means for strictly human ends.
As stated by the International Committee of the Red Cross (ICRC) and the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), human actors remain the primary (indeed the only) accountable agents in international humanitarian law (IHL).
In the practical, legal sense, accountability is achieved through the existing military command structures and systems of shared responsibility for violations of IHL, allowing responsibility for AI to be attributed, for example, to designers and programmers or the commanding officer ordering the technology’s use.
However, the responsibility at issue here is not only causal. If human judgment is removed, Cavalcante Siebert et al. ask, “how can designers, users, or other human agents be morally responsible for systems that are designed to perform tasks, learn, and adapt without direct human control?” Beyond the issue of causation and command structure, it is a minimal expression of respect for humanity that someone should accept responsibility, or be capable of being held responsible, for the decision to take a life, and that they be able to know and express their reasons for doing so. This is interpreted by the ICRC as the requirement for moral responsibility – perhaps even for the presence of conscience – which is further highlighted by McMahan: “it is important that combatants should always experience deep inhibitions about tackling non-combatants.” Such inhibitions, regardless of how accurate and reliable machines may one day become, can arguably never be replicated algorithmically, and Leveringhaus stresses further that “even if new combat technologies render individuals and communities in armed conflict ever more distant from each other, this should not distract from the basic truth of the cosmopolitan ideal of a common humanity.”
As noted by Beard, “the central legal requirement” in determining accountability for violations of IHL through the use of AI “is a meaningful connection to the effective exercise of human judgment.” Furthermore, the CCW GGE on LAWS has stated that IHL compliance through the use of military AI requires “good-faith human judgement based on the assessment of the information available at the time, and the ability of the operators to make decisions and exert control over the use of force.”
Human Rights Watch (HRW) has noted that, while “machines have long served as instruments of war, … historically humans have always dictated how they are used.” Human involvement must therefore always be present, and the importance of human judgement upheld, in the development and deployment of AI in war.
Indeed, the GGE has recommended that human operators possess full control and decision-making powers relating to acts of violence, retaining ultimate responsibility for compliance with IHL. This links to the concept of meaningful human control, which has been defined by the ICRC as “the type and degree of control that preserves human agency and upholds moral responsibility.”
The meaning of meaningful human control
Most (if not all) wartime activities can only be effectively carried out with an inextricable nexus to human judgement and agency. Purves, Jenkins and Strawser argue that “even a sophisticated robot” is not “capable of replicating human moral judgement.” Congruently, Zawieska cautions that we must be wary of “the difference between what is human and what is only human-like.” Due to the “black box” problem with AI, and associated issues relating to explainability and predictability, fully autonomous technology is inherently unpredictable: however sophisticated and reliable a technology might be, we can rarely truly understand why and how such an ‘intelligent’ system ‘behaves’ as it does, and, more concerningly, how it might develop beyond its programming. This creates a lack of human knowledge about a machine’s functions and their consequences and, as a result, a lack of human control over them. In specific relation to lethal autonomous weapons systems, the ICRC has stressed that “the focus must remain on obligations and responsibilities of humans,” that “human control must be maintained”, and limits on autonomy urgently established, “to ensure compliance with international law and to satisfy ethical concerns.”
Meaningful human control over military AI therefore requires “a sufficiently direct and close connection to be maintained between the human intent of the user” and the “eventual consequences” of the operation. Consonantly, according to Roff, an important nexus exists between knowledge and control in the military doctrine of command responsibility. Adopting Bacon’s ‘knowledge is power’ assumption, control, in this sense, can be understood in terms of the idea that the decision to deploy AI in war must entail a sufficient level of predictability about the technology and its capabilities. From this would emanate a degree of knowledge and understanding of the outcome of the operation, and, thus, genuine, human practical and moral responsibility for that outcome.
The human element in military decision-making is paramount and, where parts of those decisions are being enacted by highly intelligent technology, maintaining human control over that technology at all times is an effective way to ensure that this element is preserved. The ability of a man-made machine to effectively navigate the complex dilemmas generated by the contemporary battlefield is, at best, unknown and, at worst, unlikely. Similarly, the UK Ministry of Defence (MOD) has stressed that human beings alone can improvise when necessary, behave instinctively and demonstrate empathy. Furthermore, Kalmanovitz has highlighted that “with respect to the elasticity of the applicable standards, references to reasonableness in the principles of distinction and proportionality make human judgement indispensable.” As such, the concept of meaningful human control and the importance of human emotions go hand-in-hand in reinforcing the imperative that human beings remain the moral, as well as rational, agents in military decision-making.
The importance of human emotions
Emotions are a key part of the human psyche, indispensable for effective and flexible moral evaluation, reasoning, intuition, empathy, self-regulation, and the ability to navigate multiple reasoning systems at once. Human emotions, note van Diggelen et al., “intrinsically reflect the human’s personal values towards the decision-making problem.” Morkevicius, meanwhile, highlights the importance of decision-makers possessing “a soul, or conscience,” when faced with complex situations, and cautions against the problematic “systematization” of military decisions and actions that would allow for its rules to be programmed into an algorithm-governed machine. Consonantly, Dunlap questions the extent to which an algorithm “could ever substitute for the judgment of the commander,” stating that “the linear, mathematical nature of computer processes may never be able to replicate the nonlinear and often unquantifiable logic of war.” In pursuit of a solution to AI’s potentially problematic ‘mathematisation’ of war and its regulation, Kalpouzos notes:
“if there is to be a meaningful role for law and if we are not to mechanise and outsource our judgement, we need to work towards an irreducible and situated understanding of the law of war, one that entails the appreciation of subjectivity and emotion, a law that cannot be coded.”
Many of the arguments in support of the increased use of AI in war cite the disadvantages of the human condition and the danger of emotions on the battlefield. Emotions can indeed result in aggravated situations, revenge killings or the failure to adhere to legitimate orders, together with the potential for calculated cruelty, leading to the presumption that “overly emotional humans make poor ethical actors.” AI is thus hailed as a welcome alternative to human frailty and unpredictability. Accordingly, it is supposed, ‘negative’ human emotions are replaced with ‘reason,’ making the battlefield more humane by removing human passions and their excessive consequences.
However, as noted earlier in this post, due to the “black box” problem and issues relating to (lack of) explainability, AI outcomes are arguably no more predictable, or reliable, than human behaviour.
The problems of the human condition cannot be solved by reducing human input. Rather, in line with the humanitarian project, whatever little humanity might remain on the battlefield must be protected and enriched as much as possible, to provide the most effective protection possible against human suffering in war. In this pursuit, human emotions are not a source of weakness, but one of strength. They promote instinctive responses such as empathy and caution, crucial for effective judgement, evaluation, and the perception of nuance in challenging situations: an important check on violence (and its humanitarian consequences) in complex contemporary battlefield situations, where, for example, the traditional understanding of the principles of distinction and proportionality may be exceedingly difficult to discern and apply. In the face of such uncertainty, unpredictability and moral complexity, “emotions,” notes Zilincik, “are ubiquitous in the conduct of military strategy.” “Human emotions,” van Diggelen et al. concur, “matter in military decision-making,” while Cosic et al. highlight that “the emotional dimension of any conflict or problem should not be underestimated.”
Morkevicius argues that the ‘messiness’ of war is actually a positive thing that ought to be maintained in order to ensure ethical decision-making and actions, stressing that “emotions play an important and irreplaceable role in our ethical behaviour”:
“The awareness of the fragility of life makes us (at least on our better days) consider the killing of another as a morally serious activity, not only for the other, but for ourselves.”
War is not, and should never be, easy. Military decision-making is, and ought to be, difficult. To this end, the enduring presence of human emotions is vital.
The emphasis on the role of human emotions on the battlefield is not intended as an argument against the military use of AI altogether, but it does reinforce the importance, indeed the necessity, of maintaining humanity in military tasks and decisions (whatever little humanity might be perceived to remain in a state of war). This, in turn, highlights the need to limit military AI to the role of a tool at the service of human actors, for the extension of human agency and the augmentation of human decision-making.
See also:
- Elke Schwarz, The (im)possibility of responsible military AI governance, December 12, 2024
- Pierrick Devidal, Trying to square the circle: the ICRC AI Policy, November 28, 2024
- Erica Harper, Will AI fundamentally alter how wars are initiated, fought and concluded?, September 26, 2024
- Matthias Klaus, Transcending weapon systems: the ethical challenges of AI in military decision support systems, September 24, 2024
- Wen Zhou & Anna Rosalie Greipl, Artificial intelligence in military decision-making: supporting humans, not replacing them, August 29, 2024