Shifting the narrative: not weapons, but technologies of warfare

Debates concerning the regulation of choices made by States in conducting hostilities are often limited to the use of weapons… but our understanding of weapons is outdated. New technologies – especially those with embedded artificial intelligence (AI) algorithms, even if non-weaponized – are significantly transforming contemporary warfare. The indirect influence of these technologies on warfare decisions is consistently underestimated.

In this post, Klaudia Klonowska, a researcher with the Asser Institute’s DILEMA project, calls for a dramatic shift in what we consider to be an important tool of warfare. Not weapons, but all technologies of warfare. She argues that we need to acknowledge that the choice of technologies may influence offensive capabilities just as much as the choice of weapons.

The emergence of artificial intelligence (AI) technologies and their use in conflicts is a topic of rising importance in international debates. However, despite the multitude of different AI applications, international fora – for example, the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS) – focus exclusively on AI applications in weaponized systems. Despite the lack of a shared definition, LAWS are most commonly understood as weapon systems that can select and attack targets without human intervention – a focus on weaponization underscored by the very inclusion of ‘lethal’ in the name of the technology in question. Non-weaponized AI technologies, such as decision aids[1] or military human enhancement technologies[2], are frequently overlooked and lack a suitable forum.

But is this weapons-focused approach justified from a legal perspective?

To answer this question, I point to and examine a key provision of international humanitarian law that reflects the limited freedom of States to select means or methods of warfare: Article 36 of Additional Protocol I (AP I), which obliges State Parties to ensure that the legality of ‘a new weapon, means or method of warfare’ is reviewed with care during development or acquisition and before deployment or use. There is a prevailing tendency among States and scholars to call this the ‘weapons review’, as if non-weaponized items were excluded. This narrative, however, is overly simplistic.

So let’s take a step back and unpack a question that is often glossed over and assumed by international legal scholars and State experts: what specifically are the ‘weapons, means or methods of warfare’ about which these regular debates are held?

In the words of Mary Ellen O’Connell: ‘we need a radical shift about how we think of weapons!’ We need a shift to consider technologies that may be far removed from the battlefield and are not weaponized in the traditional sense, yet nevertheless contribute significantly to the conduct of hostilities. We need to move on from describing Article 36 as strictly requiring a weapons review and acknowledge that the choice of non-weaponized technologies may influence militaries’ offensive and defensive capabilities just as much as the choice of weapons. We need not a review of weapons, but a review of ‘technologies of warfare’, as the ICRC also calls them – a term that is adaptive to the modern battlefield, inclusive of both current and future military developments, and indicative of a broad scope of review in the spirit of Article 36’s rationale.

Terminology of weapons under pressure: fleshing out the scope of Article 36 review

AP I itself does not define the key terms: ‘a new weapon, means or method of warfare’. There is no common definition of these terms, nor does any other provision or the preamble of the Protocol provide context to clarify their meaning. The drafters of the Protocol intentionally left the scope open in order to be sufficiently inclusive of a wide variety of military items, and to prevent States from evading or circumventing the obligation by developing or defining new war-waging tools with distinct capabilities.

At the same time, this choice may have created some inconsistency in the Protocol’s application and a lack of clarity as to which items must be reviewed. For example, in 2017, the Netherlands and Switzerland stated that ‘what else should fall under the category of means apart from, of course, weapons (…), that is what needs to be reviewed, is up for debate’.

Arguably, the term ‘weapons’ has been the easiest of the three terms to define. States’ national definitions include a broad range of weapons – from traditional weapons, such as firearms, to munitions, missiles, non- or less-lethal weapons, and delivery systems. These definitions most commonly encompass all those objects that are intended to directly cause harm to persons or damage to objects.

Greater ambiguity lies in the understanding of the terms ‘means or methods of warfare’. Historically, one of the more common approaches has been to read ‘means and methods’ together as referring directly to weapons: capturing, on the one hand, the design of a weapon (means) and, on the other, how the weapon is expected to be used (methods). In the past two decades, however, a second approach has developed which reads the terms separately, influenced by a significant contribution from Justin McClelland, an officer who performed legal reviews for the British Armed Forces. In a 2003 article, McClelland defined means of warfare as ‘items of equipment which, whilst they do not constitute a weapon as such, nonetheless have a direct impact on the offensive capability of the force to which they belong’. This approach clarified that non-weaponized items of equipment, such as mine clearance vehicles, are included, while ‘methods of warfare’ cover practices prohibited by international law, such as perfidy or starvation.

Notably, certain cyber capabilities are now subject to Article 36 review. Rule 110 of the Tallinn Manual 2.0 notes that States should conduct a legal review of ‘cyber means of warfare’, and many States have already followed suit. This development is significant, as cyber capabilities no longer resemble the typical iron-and-steel appearance of weapons. Where previously only objects with a direct influence on the battlefield were reviewed, this addition opens up the possibility of reviewing systems that indirectly, through a chain of effects, achieve an intended purpose of inflicting harm or damage. This excludes cyber espionage, which is merely the passive collection of information. Instead, it highlights technologies that are capable of ‘transforming the passive collection of information into active disruption’ and are thus considered to fall within the meaning of ‘means of warfare’.

These developments show that, under the pressure of technological change, the meaning of Article 36 has evolved to include a broader array of items than weapons alone, such as items intended to indirectly cause harm or damage. This evolution must continue in order to account for new technologies that are significantly changing the way hostilities are conducted.

Reviewing technologies of warfare: four general criteria

In contemporary armed conflict, new means of warfare are not easily identifiable. They come in different shapes and forms, such as algorithmic codes or nano-bio-info-cognitive technologies that enhance human capacities. Furthermore, these emerging technologies are often of dual use – both civilian and military – which makes it even more difficult to define them or find viable pathways for regulating their use in the military context.

Think of facial recognition software: developed first as an algorithm, it is then ‘militarized’ by its inclusion in software that matches faces captured on a drone camera against military databases of headshots of known or suspected terrorists. Even if this dual-use technology is never automated, it can be used to provide recommendations to battlefield soldiers or drone operators in targeting decisions. Such technologies, although they differ from conventional weapons, can be expected to considerably influence human agency, human judgement, and human intention in warfare decision-making.
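To make the ‘chain of effects’ concrete, consider a minimal, purely illustrative Python sketch of such a decision aid. All names, values, and the threshold are invented for illustration and do not describe any real system; the point is that nothing in this code is weaponized, yet its output feeds directly into a targeting decision.

```python
import numpy as np

# Hypothetical watchlist: person ID -> face embedding (all values invented).
WATCHLIST = {
    "person_001": np.array([0.12, 0.87, 0.45, 0.33]),
    "person_002": np.array([0.91, 0.05, 0.22, 0.64]),
}

def cosine_similarity(a, b):
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_watchlist(frame_embedding, threshold=0.8):
    """Return a recommendation for a human operator -- not a decision.

    The software never touches a weapon; it only transforms raw sensor
    data into actionable intelligence that feeds a targeting decision.
    """
    best_id, best_score = None, 0.0
    for person_id, reference in WATCHLIST.items():
        score = cosine_similarity(frame_embedding, reference)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score >= threshold:
        return {"recommendation": "possible match", "id": best_id, "score": round(best_score, 3)}
    return {"recommendation": "no match", "id": None, "score": round(best_score, 3)}

# The human operator receives the recommendation and retains the final call.
print(match_watchlist(np.array([0.13, 0.85, 0.44, 0.35])))
```

The design choice at issue is visible in the return value: the system outputs a recommendation, not an action, yet it has already decided what counts as a ‘match’ before any human sees the data.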

To determine whether an item falls within the scope of Article 36 and should be subjected to a legal review, I propose the following four general criteria, taking into account the above-mentioned characteristics of ‘weapons, means or methods of warfare’ and the evolving interpretation of Article 36.

1. Compliance with international law

The first criterion considers whether the item puts a State’s compliance with international law at risk. It triggers the following questions, among others: first, is the item or its expected use expressly prohibited by existing legal obligations; and second, if not prohibited per se, could its intended use in an operational environment violate rules or principles of applicable international law? If a system or its use in warfare may lead to a violation of legal rules and principles, then it is in the best interest of State Parties to ensure that such items are reviewed before deployment or use.

2. Integral to military decision-making and operations

Non-weaponized items that are neither prohibited by law nor in explicit violation of any rules or principles of applicable law (including items capable of fully complying with those rules and principles) require further evaluation of their intended use within critical military infrastructure. This includes considering whether the item will form an integral part of critical decision-making (e.g. targeting decisions) and be used in a chain of effects to inflict harm, injury, or damage. This is the case for certain cyber capabilities, and may further include decision aids or human enhancement technologies.

Think of a targeting decision aid (e.g. facial recognition software) with a tendency to misidentify certain female individuals, resulting in a gender bias and a small percentage of misidentified targets. Such an algorithm, if built with machine learning techniques, may be used alongside military operations so that its accuracy improves over time. Nonetheless, it would be highly problematic if it were used as an integral element of the military decision-making infrastructure, in which case the recommendations of this software may lead to unlawful final targeting decisions. The phenomena of de-skilling and automation bias highlight that even where a system is not acting fully autonomously, it may become an integral element of military decision-making, and should thus be reviewed before deployment or use.
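A toy simulation (all rates and group labels invented, not drawn from any real system) can illustrate how a ‘small percentage of misidentified targets’ may conceal a skewed error distribution: the aggregate figure looks acceptable while one group bears most of the misidentifications.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical per-group false-match rates of a recognition aid (invented).
FALSE_MATCH_RATE = {"group_a": 0.01, "group_b": 0.08}

def simulate(n_per_group=10_000):
    errors = {group: 0 for group in FALSE_MATCH_RATE}
    for group, rate in FALSE_MATCH_RATE.items():
        for _ in range(n_per_group):
            if random.random() < rate:  # an innocent person is flagged
                errors[group] += 1
    total = n_per_group * len(FALSE_MATCH_RATE)
    print(f"overall false-match rate: {sum(errors.values()) / total:.1%}")  # looks 'small'
    for group, count in errors.items():
        print(f"  {group}: {count / n_per_group:.1%}")  # but unevenly borne

simulate()
```

Under these invented numbers, the overall rate lands around 4–5%, while one group’s rate is roughly eight times the other’s – the kind of disparity a review conducted before deployment is meant to surface.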

3. Actionable information to military decision-making and operations

If an item forms an integral element of military decision-making and operations, a further evaluation is needed of whether its contribution to the chain of effects is significant. Principally, as with cyber capabilities, it is necessary to consider whether the item interprets, translates, or filters information before it is provided to human decision-makers. Instead of simply displaying information (e.g. a video camera feed), does it process and provide actionable intelligence (e.g. target recognition software) that may significantly affect military decisions? As the SIPRI report notes, even a surveillance system can become a subject of review ‘if it can be established that it collects and processes information used in the targeting process’.
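The distinction this criterion draws – displaying versus interpreting – can be sketched in code. This is a hypothetical illustration (the types, names, and the 0.7 cut-off are all assumptions): a raw feed relay leaves interpretation to the human, whereas a filtering step pre-shapes what the decision-maker ever sees.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "vehicle", "person"
    confidence: float  # model confidence, 0..1

def relay_feed(frame_bytes: bytes) -> bytes:
    """Mere display: the raw frame is passed through untouched,
    and interpretation stays entirely with the human viewer."""
    return frame_bytes

def actionable_intelligence(detections):
    """Interpretation: the system filters and ranks what the human sees.
    Low-confidence detections silently disappear from the picture --
    exactly the kind of information-shaping this criterion asks about."""
    kept = [d for d in detections if d.confidence >= 0.7]  # hypothetical cut-off
    return sorted(kept, key=lambda d: d.confidence, reverse=True)

feed = [Detection("person", 0.93), Detection("vehicle", 0.55), Detection("person", 0.71)]
print(actionable_intelligence(feed))  # the 0.55 detection never reaches the operator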

4. Intended use in the conduct of hostilities

The last criterion of evaluation relates to the specific intended use of the item under review. Not all technologies integral to military decisions, with the capability to provide actionable intelligence, should fall within the meaning of Article 36. Only items that are intended for use in the conduct of hostilities, that is, in support of critical military decisions, should be reviewed to prevent avoidable (and/or unlawful) harm, injury, or damage.

* * * * *

In short, a weapons-focused approach to the regulation of new technologies of warfare cannot be justified. Where an item risks noncompliance with international law, is intended for use in critical military decision-making, and can significantly alter the nature of the information it processes, it should be subjected to a review before deployment or use. Such an item, together with human operators, soldiers, and commanders, participates in the co-production of hostilities, meaning that the same outcome could not be achieved without it.

The criteria above seek to capture items which can influence the conduct of hostilities both directly and indirectly – through a chain of effects. In line with their obligation under AP I, State Parties should therefore ensure that all such items are reviewed with care before deployment.

The intention of the AP I drafters to give Article 36 a sufficiently broad scope can be realized if we adequately capture emerging technological solutions that pose new challenges and threats to compliance with legal rules and principles – not only new weapons, but also new technologies of warfare that indirectly but increasingly affect our decisions and conduct in war.

[1] Decision aids or decision-support systems refer to technologies that use algorithms to process and filter information with the objective of providing recommendations and assisting decision-makers.

[2] Human enhancement technology refers to a broad range of biomedical technologies aimed at improving human physical or mental capabilities. In the military context, human enhancement technologies are being developed to enhance the capabilities of soldiers.
