What do I mean by AI-related techniques and tools?
But first, a word on what I mean by AI-related techniques or tools. My starting point is that there is no generally recognized definition of AI. That said, it might be of value to focus on techniques or tools derived from, or otherwise related to, AI science broadly conceived. My understanding—drawn from the work of such scholars as Barbara J. Grosz—is that AI science pertains in part to the development of computationally based understandings of intelligent behavior, typically through two interrelated steps. One of those steps concerns the determination of cognitive structures and processes and the corresponding design of ways to represent and reason effectively. The other step relates to the development of theories, models, data, equations, algorithms and/or systems that embody that understanding.
So defined, AI systems are typically conceived as incorporating techniques—and leading to the development of tools—that enable systems to ‘reason’ more or less ‘intelligently’ and to ‘act’ more or less ‘autonomously’. The systems might do so by, for example, interpreting natural languages and visual scenes; ‘learning’ (or, perhaps more commonly, training); drawing inferences; and making ‘decisions’ and taking action on those ‘decisions’. The techniques and tools might be rooted in one or more of the following methods: those rooted in logical reasoning broadly conceived, which are sometimes also referred to as ‘symbolic AI’ (as a form of model-based methods); those rooted in probability (also as a form of model-based methods); and/or those rooted in statistical reasoning and data (as a form of data-dependent or data-driven methods).
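To make the distinction between those methods slightly more concrete, here is a deliberately toy sketch in Python (my own illustration, not drawn from the post or from any actual weapon-related system): a model-based, 'symbolic' approach encodes the decision rule by hand, whereas a data-driven approach derives its decision rule from labelled examples.

```python
# Purely illustrative, hypothetical example: contrasting a hand-written
# ("symbolic", model-based) rule with a rule derived from data (data-driven).

def symbolic_classifier(speed_kmh):
    """Model-based: the analyst's knowledge is encoded directly as a rule."""
    return "fast" if speed_kmh >= 100.0 else "slow"

def fit_threshold(samples):
    """Data-driven: derive a decision threshold from labelled examples
    (here, simply the midpoint between the two class means)."""
    fast = [x for x, label in samples if label == "fast"]
    slow = [x for x, label in samples if label == "slow"]
    return (sum(fast) / len(fast) + sum(slow) / len(slow)) / 2.0

if __name__ == "__main__":
    data = [(30.0, "slow"), (50.0, "slow"), (120.0, "fast"), (160.0, "fast")]
    threshold = fit_threshold(data)  # roughly 90.0, inferred from the data
    print(symbolic_classifier(110.0))                # rule-based answer: "fast"
    print("fast" if 110.0 >= threshold else "slow")  # data-driven answer: "fast"
```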
Existing and purportedly new or emerging primary norms
By way of reminder, under international humanitarian law/law of armed conflict (IHL/LOAC), Article 36 of Additional Protocol I of 1977 provides that
[i]n the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.
What is the legal nature of these reviews? A determination of lawfulness, or lack thereof, by a State in respect of those treaty provisions is not—at least according to the Rapporteur of those provisions’ drafting Committee (see O.R. XV, p 269, CDDH/215/Rev.1, para 30)—binding internationally. If we assume that that position is accurate, it would seem that the same contention might hold for the customary law counterparts, if any, of those treaty provisions. Instead, these legal review provisions—whether of a treaty or customary nature—might be seen as boiling down to an expectation that States will make such determinations so that weapons, means or methods of warfare are neither developed nor adopted without at least a careful examination of their legality.
That contention, in turn, raises the question: what are the applicable primary norms? While there is widespread agreement on several primary norms, the possible development and employment of AI-related techniques or tools in respect of weapons, means or methods of warfare might nevertheless encounter several disagreements concerning aspects of the sources and/or content of some primary norms. Some of those disagreements stretch back decades (if not longer). Others are relatively new. Such differential approaches as to what constitutes lawful and unlawful conduct prevent normative uniformity and legal universality and thereby preclude the establishment of a comprehensive set of agreed primary legal norms against which all weapons, means or methods of warfare must be reviewed. Consider three examples.
Indiscriminate attacks
First, while there is, to my mind, no reasonable disagreement among States that, in general, indiscriminate attacks are prohibited under IHL/LOAC, some key aspects of that basic principle are currently contested. Take direct participation in hostilities as an example. In general, under IHL/LOAC civilians shall enjoy protection against the effects of hostilities. Certain aspects of those protections—including the so-called immunity from direct attack—might be withdrawn with respect to civilians who take a direct part in hostilities. There seems to be extensive support for the customary principle upon which Article 51(3) of AP I is based. (That provision, at least as a matter of treaty law, concerns direct participation of civilians in hostilities in respect of international armed conflicts as defined in that instrument.) Yet, in the Law of War Manual (updated December 2016), the Office of the General Counsel of the United States Department of Defense has noted that, at least in its view, that treaty provision, as drafted, does not reflect customary international law in all of its precise aspects.
Applicable legal frameworks
Second, with respect to applicable legal frameworks, there is, to my mind, no reasonable disagreement among States that relevant provisions of at least IHL/LOAC must be taken into account in legal reviews of weapons. Meanwhile, some States are considering whether international human rights law (IHRL) provisions must also be taken into account—and, if so, how and to what extent. The United Kingdom, for example, is apparently actively considering this issue. Such an assessment concerning the applicable framework(s) matters in no small part because the content of relevant IHL/LOAC provisions is at least traditionally perceived as tolerating more—indeed, in certain circumstances much more, though never unlimited—death, destruction and other harm in comparison to IHRL provisions.
A primary norm concerning AI-related techniques or tools?
Third, there currently seems to be a pivotal disagreement among certain States as to whether a new or emerging primary norm concerning AI-related techniques or tools and other relevant technologies can, should and/or must be developed. (According to certain scholars and advocates, such a norm might already be discerned.)
Here is where much of the normative debate currently seems to lie in respect of ‘emerging technologies in the area of lethal autonomous weapons systems’ (to use the term from the title of the relevant Group of Governmental Experts). On one hand, for some States, such a primary norm might be formulated in conceptual terms drawn, for example, from the August 2018 proposal by Austria, Brazil and Chile to establish a mandate for a new binding international instrument. That proposal speaks of ‘ensur[ing] meaningful human control over critical functions in lethal autonomous weapon systems’. On the other hand, certain other States argue that existing IHL/LOAC is sufficient. According to that viewpoint, the ‘modernization’ or ‘adaptation’ of IHL/LOAC in respect of emerging technologies in the area of lethal autonomous weapons systems is not needed.
16 elements or properties of interest or concern
While recognizing the significance of the disagreement on the existence and/or sources—or, at least, on some precise aspects—of certain primary norms identified above, it remains imperative for States to adopt robust legal review regimes. With that in mind, it may be of value to enumerate elements or properties of interest or concern that the people responsible for conducting legal reviews of weapons, means or methods of warfare involving AI-related techniques or tools might consider.
A few caveats first. The listing order here is not meant to imply a hierarchy. Some of the elements or properties might overlap substantively and/or procedurally. Others might stand on their own. Inclusion on the list is not meant to represent a contention that international law does or does not already oblige a State to consider that particular element or property as part of a legal review. Nor is the list meant to exhaustively enumerate all possibly relevant considerations—far from it. With those caveats in view, here are 16 non-exhaustive assessments concerning elements or properties of interest or concern that might be considered as part of a legal review:
- Legal agency: an assessment concerning the preservation of legal agency of humans—as grounded in international law—in respect of an employment of weapons, means or methods of warfare involving AI-related techniques or tools;
- Attributability: an assessment concerning the preservation of the attributability—at least to a State and to an individual, including, as relevant, a commander—of an employment of weapons, means or methods of warfare involving AI-related techniques or tools;
- Explainability: an assessment concerning the preservation of the explainability of an employment of weapons, means or methods of warfare involving AI-related techniques or tools;
- Reconstructability: an assessment concerning the preservation of the reconstructability—in a nutshell, the capacity to sufficiently piece together the inputs, functions, dependencies and outputs of the computational components adopted, and by whom, in relation to each relevant circumstance of use, encompassing all potential legal consequences thereof—of an employment of weapons, means or methods of warfare involving AI-related techniques or tools both during and after employment (a possible guidepost here might be that such an employment is capable of being subject to juridical scrutiny, including by a judicial organ);
- Proxies: an assessment whether the computational components—adopted in respect of an employment of weapons, means or methods of warfare involving AI-related techniques or tools—may or may not be permitted to function, in whole or in part, as proxies for any legally relevant characteristics;
- Human intent and human knowledge: an assessment concerning the preservation of human intent and human knowledge—as they pertain to compliance with international law applicable in relation to armed conflict as regards State responsibility and/or individual (including criminal) responsibility—in respect of an employment of weapons, means or methods of warfare involving AI-related techniques or tools;
- Normative inversion: an assessment concerning the preclusion of normative inversion—that is, preventing the computational components from operating in a manner that, for example, assumes that every person may prima facie be directly attacked, thereby functionally rejecting, and hence inverting, the general presumption of (protected) civilian status—in respect of an employment of weapons, means or methods of warfare involving AI-related techniques or tools;
- Value decisions and normative judgments: an assessment concerning the reservation of IHL/LOAC-related value decisions and normative judgments only to humans in respect of an employment of weapons, means or methods of warfare involving AI-related techniques or tools;
- Ongoing monitoring: an assessment concerning the feasibility or not of the ongoing monitoring of the operation of the computational components adopted in an employment of weapons, means or methods of warfare involving AI-related techniques or tools;
- Deactivation and/or additional review: an assessment concerning the feasibility or not of the establishment of deactivation thresholds and/or additional review thresholds in respect of an employment of weapons, means or methods of warfare involving AI-related techniques or tools;
- Critical safety features: an assessment concerning the prevention of the continued employment of weapons, means or methods of warfare involving AI-related techniques or tools where a critical safety feature has been degraded;
- Improvisation: an assessment concerning the establishment of sufficient limitations and—as warranted—prohibitions on possible forms of ‘improvisation’ in relation to an employment of weapons, means or methods of warfare involving AI-related techniques or tools;
- Representations: an assessment concerning the representations reflected in the computational components—in short, the configurations of the models and their features—adopted in respect of an employment of weapons, means or methods of warfare involving AI-related techniques or tools;
- Biases: an assessment concerning the biases capable of arising in relation to the computational components adopted in respect of an employment of weapons, means or methods of warfare involving AI-related techniques or tools;
- Dependencies: an assessment concerning the dependencies within and between the computational components—and the relationships between those dependencies—adopted in respect of an employment of weapons, means or methods of warfare involving AI-related techniques or tools; and
- Predictive maintenance: an assessment concerning the feasibility or not of the establishment of predictive maintenance—that is, measures aimed at anticipating, forewarning and preventing failures, degradation, or damage with a view to avoiding the need for corrective maintenance—in respect of an employment of weapons, means or methods of warfare involving AI-related techniques or tools.
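Purely by way of illustration, and not as a suggestion about how any State does or should organize its reviews, such a non-exhaustive set of elements could be tracked administratively as a simple checklist per system under review. The following minimal Python sketch is hypothetical; the structure, names and fields are my own and carry no legal significance.

```python
# Hypothetical sketch: recording, per weapon review, which of the elements
# listed above have been assessed and what the reviewer found.

from dataclasses import dataclass, field

REVIEW_ELEMENTS = [
    "legal agency", "attributability", "explainability", "reconstructability",
    "proxies", "human intent and human knowledge", "normative inversion",
    "value decisions and normative judgments", "ongoing monitoring",
    "deactivation and/or additional review", "critical safety features",
    "improvisation", "representations", "biases", "dependencies",
    "predictive maintenance",
]

@dataclass
class ElementAssessment:
    element: str
    assessed: bool = False
    finding: str = ""  # free-text summary of the reviewer's conclusion

@dataclass
class LegalReviewRecord:
    weapon_system: str
    assessments: list = field(
        default_factory=lambda: [ElementAssessment(e) for e in REVIEW_ELEMENTS]
    )

    def outstanding(self):
        """Return the elements not yet assessed in this review."""
        return [a.element for a in self.assessments if not a.assessed]

# Example use (all names are invented):
record = LegalReviewRecord(weapon_system="hypothetical AI-assisted sensor suite")
record.assessments[2].assessed = True
record.assessments[2].finding = "Outputs explainable to the operator in the circumstances of use."
print(record.outstanding())  # the fifteen elements still to be assessed
```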
***
This post is part of the AI blog series, stemming from the December 2018 workshop on Artificial Intelligence at the Frontiers of International Law concerning Armed Conflict held at Harvard Law School, co-sponsored by the Harvard Law School Program on International Law and Armed Conflict, the International Committee of the Red Cross Regional Delegation for the United States and Canada and the Stockton Center for International Law, U.S. Naval War College.
Other blog posts in the series include
- Intro to series and Expert views on the frontiers of artificial intelligence and conflict
- Ashley Deeks, Detaining by algorithm
- Lorna McGregor, The need for clear governance frameworks on predictive algorithms in military settings
- Tess Bridgeman, The viability of data-reliant predictive systems in armed conflict detention
- Suresh Venkatasubramanian, Structural disconnects between algorithmic decision making and the law
- Li Qiang and Xie Dan, Legal regulation of AI weapons under international humanitarian law: A Chinese perspective
- Netta Goussac, Safety net or tangled web: Legal reviews of AI in weapons and war-fighting
See also
- ICRC, Artificial intelligence and machine learning in armed conflict: A human-centred approach, June 6, 2019
DISCLAIMER: Posts and discussion on the Humanitarian Law & Policy blog may not be interpreted as positioning the ICRC in any way, nor does the blog’s content amount to formal policy or doctrine, unless specifically indicated.
Thanks for putting these suggestions forward, Dustin. Though I fear taking the conversation outside the scope of this series, do you think that these same elements are also relevant for the legal reviews of other ‘newer’ means and methods of warfare, such as cyber capabilities?
Dustin, thank you for writing such an interesting post on such an important topic. I completely agree that States must develop robust procedures for their legal review regimes. This is especially the case for new-technology weapons, such as AI-enhanced weapons, that challenge existing IHL norms. Unfortunately, few States do so (19, according to PREMT). Many of your elements or properties of interest or concern fall within a State’s prerogative to determine how it will conduct its internal legal reviews. A State may regard your elements as issues to be considered in the context of the Martens Clause and develop its own position as to whether they represent an existing international law obligation or require a policy position. I think such considerations should be made early in the study and development of an AI weapon and should effectively form a design specification to ensure that the AI weapon is capable of review. As such, States should be investing now in the legal review of such weapons in parallel with their research into AI weapon capabilities. The legal review of AI weapons will require assessing which IHL and international law principles and rules are relevant to the weapon’s use, determining the standard of compliance that the reviewing State will require to pass an Article 36 review, and developing a testing methodology that identifies the weapon’s ability to meet that standard in a range of operating environments. Determining the standard of compliance will be a challenge. For example, if an AI-enhanced weapon makes recommendations to a human operator as to whether certain persons or objects are lawfully targetable (i.e. because they are combatants or military objectives), what standard of certainty will the human operator be able to accept before acting on a recommendation? Will the State’s Article 36 review require a standard equivalent or superior to that of a human who does everything feasible to distinguish? How will the Article 36 review consider AI bias, false assessments and the human cognitive issues associated with information display? Ultimately, a weapon review should recommend that a weapon not be acquired or adopted, or that its use be restricted, if it is unable to satisfy the appropriate review standards. The weapon review obligation will probably need to extend into the operational life of an AI weapon to address changes to its methods of operation or upgrades to its software that render it ‘new’ for the purposes of Article 36. Perhaps legal advisers deployed in compliance with Article 82 of AP I will conduct in-service weapon reviews. Thanks again for generating discussion on this important topic; I think there is a need for further discussion.
Netta, a great article. You highlight many of the challenges that States will need to consider in developing their internal processes for the legal review of AI-enhanced or autonomous weapons. In my view, to address the challenges you raise, States will need to rethink their internal processes for the review of such weapons and develop a raft of policy positions. From a process perspective, I agree that legal reviews should commence at the earliest stages of the study and development of an AI-enhanced or autonomous weapon. This may even extend to study and research conducted through State-sponsored defence industry programmes (see, for example, the Australian Defence Science and Technology Group’s ‘Trusted Autonomous Systems Defence Cooperative Research Centre’). States should identify their legal review requirements at those early stages (e.g. explainable or recordable recommendations/decisions) and include them in the programming and design specifications of a new capability. This will require States to identify which aspects of the proposed system or weapon functionality engage their international law and IHL obligations, and how those obligations can be translated into code and the AI system trained. For example, if an AI system makes ‘distinction’ recommendations about persons or objects on the battlefield for a human weapon operator to decide and act upon, the State will need to determine what standard of certainty the system must achieve to make a recommendation. The State may require a standard equivalent to that of a trained human or seek a higher level of certainty. All functionality that engages a State’s international law and IHL obligations will need to be tested to inform the legal review and allow the reviewer to identify limitations in the system or weapon. Testing may cover the weapon’s normal or expected use in a range of anticipated operating environments designed to identify system limitations. The legal review may ultimately restrict certain weapon functionality in environments or circumstances where it proves unreliable or unable to achieve the standards set by the reviewing State. For an AI-enhanced weapon capable of machine learning, the legal review process will need to extend into the service life of the weapon to address not only self-learned changes and programming updates but also operational factors (rules of engagement, enemy counter-measures that necessitate a change in the weapon’s method of warfare) and the environmental and human factors unique to a particular mission. Even for an AI or autonomous weapon that has passed a legal review, a State will need to determine how the weapon can be certified for operation in a specific mission or theatre. I think there is merit in States developing a set of guiding principles for the legal review of AI-enhanced or autonomous weapons. These could include fundamental points such as that IHL applies to the use of AI-enhanced or autonomous weapons and that legal reviews are to be conducted at least prior to deployment. I look forward to further guidance from the ICRC on the legal review of new-technology weapons, including AI-enhanced and autonomous weapons.
States need to work out how to make policy on AI that is consistent with the treaty obligation to conduct legal reviews of weapons.