
Safety net or tangled web: Legal reviews of AI in weapons and war-fighting
Editor’s note: Readers interested in the topic of legal reviews of weapons are encouraged to read this post by Netta Goussac together with the recent post by Dustin Lewis, in which he enumerates 16 elements that States might consider as part of their legal reviews involving AI-related techniques or tools.

***

Strides in artificial intelligence (AI), especially in the field of machine learning, have made their way into weapon design and war-fighting, as they have into many aspects of our everyday lives. The swiftly expanding list of military applications of AI includes software that controls robotics as well as software that supports decision-making processes related to targeting. The use of AI to undertake tasks previously performed by humans may fundamentally change the way that decisions to kill, injure, destroy or damage are made in war. The main concern is the potential loss of human control over these decisions—and the unpredictability in outcomes that would result—which raises unique legal and ethical concerns.

Technologies like AI may be ‘determining how wars can be fought’, but it is international humanitarian law (IHL) that restricts how wars can be fought. Militaries are turning to AI and autonomy to enhance decision-making and operations. Meanwhile, discussions continue over the legal and ethical acceptability of autonomous weapon systems,[1] and over the need for regulation. What remains beyond question is that all weapons used in war must be used, and be capable of being used, in compliance with IHL. This means that each State that develops or acquires weapons that utilize AI must be satisfied that these weapons can be used in compliance with existing rules of warfare. But how is the legality of an AI weapon to be verified before it is employed?

Legal reviews: A safety net against unlawful weapons

Reviewing the legality of new weapons before they are deployed is an obligation for States party to Additional Protocol I of the Geneva Conventions (Article 36). For other States, such legal reviews are a common-sense measure to help ensure that a State’s armed forces can conduct hostilities in accordance with its international obligations, and to avoid the costly consequences of approving and procuring a weapon whose use is likely to be restricted or prohibited.

Today’s technological advances in how conflicts are fought mean that robust legal reviews are as critical now as they were when Article 36 was conceived, during the Cold War arms race. While Article 36 does not specify the process by which legality should be determined, in the view of the ICRC, the obligation clearly implies a mandatory standing procedure that assesses all weapons, and their normal or expected method of use, against a State’s international obligations, including IHL. According to the ICRC’s Guide to legal reviews, this entails a multi-disciplinary examination of the technical description and actual performance of a weapon, at the earliest possible stages of its research, development or acquisition.

Legal reviews can be a potent safeguard against the development and use of AI weapons that are incapable of being used in compliance with IHL rules regulating the conduct of hostilities, notably the rules of distinction, proportionality and precautions in attack. These rules are addressed to those who plan, decide upon and carry out attacks in armed conflict. It is humans that apply this law and are obliged to respect it. An AI weapon system that is beyond human control would be unlawful by its very nature—a conclusion that would become evident during a legal review.

To be clear, legal reviews by States cannot replace the need for internationally agreed limits on autonomy in the critical functions of weapon systems (the functions of selecting and attacking targets), including those utilizing AI. Nonetheless, the renewed interest in legal reviews as a safety net against the development of unlawful weapons is to be welcomed. Too few States have a standing mechanism for conducting legal reviews of new weapons, and too little is known about how such reviews are carried out. For example, of the more than 100 States that developed or acquired weapons last year, fewer than 20 are known to have formal legal review mechanisms in place. Improving awareness, adherence and transparency creates a virtuous cycle that can help to ensure compliance with IHL.

***

But conducting legal reviews of AI weapons brings its own challenges. For legal reviews to be effective, States that develop or acquire weapons that incorporate AI will need to navigate these complexities.

What should be reviewed?

Weapon systems of all types should be subjected to legal reviews. This extends to existing weapons that a State intends to acquire for the first time, as well as to modifications that alter the functions of a weapon that has previously passed a legal review. Because software is more readily modified than physical systems, the legal review requirement may arise more frequently for weapon systems relying on AI. Systems that learn from their environment and thereby change their functioning after activation present a particular concern: in effect, a legal review conducted before the weapon is introduced would become invalid upon its deployment.
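To make the concern concrete, here is a minimal sketch in Python (entirely hypothetical and not drawn from any actual system): a toy classifier that keeps updating its internal parameters after activation assesses the very same input differently once it has learned from data encountered in the field.

```python
# Toy sketch (hypothetical, illustrative only): a system that keeps learning
# after activation no longer behaves as it did when it was reviewed.

def classify(x, centroids):
    """Return the label of the nearest class centroid (one feature, for simplicity)."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

def online_update(centroids, counts, x, label):
    """Shift the centroid of `label` towards the new observation (running mean)."""
    counts[label] += 1
    centroids[label] += (x - centroids[label]) / counts[label]

# Model state at the time of the pre-deployment legal review.
centroids = {"A": 0.0, "B": 10.0}
counts = {"A": 1, "B": 1}

probe = 4.0  # a fixed test input examined during the review
print("classification at review time:", classify(probe, centroids))           # -> A

# After activation, the fielded system keeps updating itself on new data.
for observation in [6.0, 5.0, 4.5, 4.2]:
    online_update(centroids, counts, observation, "B")

print("classification after in-field learning:", classify(probe, centroids))  # -> B
```

The behaviour a reviewer examined before deployment is no longer the behaviour the fielded system exhibits, which is precisely why post-activation learning sits uneasily with a one-off, pre-deployment review.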

AI components incorporated into the critical functions of a weapon system are not the only applications of AI that require legal review. Some other military applications of AI, for example decision-support systems, will need to be reviewed if they form part of a weapon system (the ‘means’ of warfare) or of the way in which the system is used (the ‘methods’ of warfare), in order to properly assess whether the employment of the weapon would be prohibited in some or all circumstances. Since a weapon cannot be assessed in isolation from the way in which it will be used, it follows that the normal or expected use of the weapon must also be considered in the legal review.

What criteria are to be used in the review?

In determining legality, a reviewer must apply existing international law rules applicable to the State, be they treaty-based or customary. When it comes to IHL, this includes the general rules of IHL (including the rules aimed at protecting civilians from the indiscriminate effects of weapons and combatants from superfluous injury and unnecessary suffering), as well as particular rules prohibiting or restricting the use of specific weapons or methods. Where a weapon is not covered by existing rules of IHL, the reviewing authority should consider whether the proposed weapon contravenes the principles of humanity and the dictates of public conscience.

Any autonomy in the critical functions of a weapon system, whether achieved through AI or by other means, makes it challenging to assess compliance with these rules because of the unpredictability that results from weapons being triggered by their environment. A reviewer will need to be satisfied that the proposed AI design and its method of use will not prevent the operator or commander from exercising the judgments required by IHL. Where this is not the case, the reviewer may not be able to approve the weapon’s use, or may need to impose mitigation measures that ensure sufficient human control over the critical functions of the weapon system, consequently limiting the weapon’s autonomy.

This is because human control in the use of weapon systems is inherently required by the rules of IHL, notably the rules of distinction, proportionality and precautions in attack. Exactly what type and degree of human control over an autonomous weapon system is required for legal compatibility (and ethical acceptability) is something on which the ICRC has been urging States to reach common understandings.

What to do about uncertainty?

AI will inevitably introduce uncertainty into the functioning of a weapon—meaning that the reviewer cannot predict with a reasonable degree of certainty all the outcomes of using the weapon. This unpredictability can arise through the weapon’s design or the interaction between the system and the environment of use. Foreseeing effects may become increasingly difficult as weapon systems become more complex or are given more freedom of action in their tasks, and therefore become less predictable. Uncertainty about how a weapon will perform in the field undermines the ability to carry out a legal review, as it makes it impossible for the reviewer to determine whether the employment of the weapon would in some or all circumstances be prohibited by IHL or other rules of international law.

It may be impossible to eliminate uncertainty from weapons completely, but what minimum level of predictability and reliability is required in the use of AI in weapons? This is a crucial question to which IHL does not provide clear answers.
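IHL does not supply a number, but it is worth seeing how demanding any answer would be in practice. The sketch below applies a standard statistical rule of thumb, the zero-failure binomial bound, which is not specific to weapons law; the reliability and confidence figures in it are illustrative assumptions, not proposed thresholds.

```python
# Illustrative sketch (assumed figures, not proposed legal thresholds): how many
# independent, representative, failure-free trials are needed to demonstrate a
# given reliability at a given statistical confidence (zero-failure binomial bound).
import math

def trials_required(reliability: float, confidence: float) -> int:
    """Smallest n such that n successes in n trials support the claim that the
    per-use success probability is at least `reliability`, at the stated
    one-sided confidence level: n >= ln(1 - confidence) / ln(reliability)."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

for reliability in (0.99, 0.999, 0.9999):
    n = trials_required(reliability, confidence=0.95)
    print(f"reliability >= {reliability:.2%} at 95% confidence: {n:,} failure-free trials")
```

Even under the strong assumption that field conditions can be reproduced in independent, representative trials, modest-sounding reliability requirements translate into thousands of failure-free tests, which hints at why fixing a minimum standard is so difficult.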

What process should be followed?

The ability to carry out a legal review of a weapon system that utilizes AI entails fully understanding the weapon’s capabilities and foreseeing its effects, notably through verification and testing. But the technical performance of an AI weapon system, particularly one using machine learning, can be difficult to evaluate. Traditional testing regimes will be unsuitable for weapon systems that incorporate AI in their critical functions, not only because machine learning introduces unpredictability into the functioning of the system, but also because the system interacts with a dynamic environment that cannot be fully simulated in advance of use. Furthermore, the development and testing of systems that depend on machine learning rely on data, and that data may incorporate biases or assumptions about the foreseen circumstances of use.
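To illustrate why a one-off test campaign may say little about fielded performance, the hypothetical sketch below (synthetic data, deliberately trivial model) scores the same classifier on held-out data drawn from its development conditions and on data from a shifted ‘operational’ environment; only the gap between the two scores matters.

```python
# Hypothetical sketch with synthetic data: performance measured under development
# conditions need not carry over to a shifted operational environment.
import random

random.seed(0)

def sample(mean_a, mean_b, n):
    """Draw n labelled observations per class from two Gaussian 'signatures'."""
    data = [(random.gauss(mean_a, 1.0), "A") for _ in range(n)]
    data += [(random.gauss(mean_b, 1.0), "B") for _ in range(n)]
    return data

def accuracy(threshold, data):
    """Score a simple rule: below the threshold -> class A, otherwise class B."""
    correct = sum(("A" if x < threshold else "B") == label for x, label in data)
    return correct / len(data)

# Development: estimate a decision threshold from well-separated training data.
train = sample(mean_a=0.0, mean_b=4.0, n=500)
mean_a = sum(x for x, label in train if label == "A") / 500
mean_b = sum(x for x, label in train if label == "B") / 500
threshold = (mean_a + mean_b) / 2

print("accuracy on held-out development data:  ",
      round(accuracy(threshold, sample(0.0, 4.0, 500)), 3))   # high
print("accuracy in a shifted operational setting:",
      round(accuracy(threshold, sample(1.5, 2.5, 500)), 3))   # markedly lower
```

Nothing in this toy example is specific to weapons, but it mirrors the point above: test results are only as good as the match between the data and conditions assumed in development and those actually encountered in use.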

The lack of transparency in how AI systems, and particularly machine learning systems, function complicates the task of testing, and therefore of assessing legality, even further. In the absence of an explanation for how a system reaches its output from a given input, it is difficult (if not impossible) to assess the system’s predictability and reliability and to foresee the consequences of its use. Testing regimes will therefore need to be adapted to the unique characteristics of AI. New standards of testing and validation will likely be required to inform the review process.

***

The trend towards autonomy in weapon systems, including through the use of AI, demands a robust and vigilant approach to ensuring legal compliance. Legal reviews are critical in this endeavor: they act as a bulwark against unlawful weapons and as a basis for formulating operational constraints that counter the uncertainty introduced by AI and help ensure legal compliance.

The use of AI, and especially machine learning, in weapon systems raises unique challenges for legal reviews because, unlike the physical elements of a weapon system, how these components function is not fully known. As Dustin A Lewis writes, States need to confront and overcome these challenges in order to conduct effective legal reviews of AI weapons and ensure compliance with IHL.

Footnotes

[1] The ICRC has defined autonomous weapon systems as: ‘Any weapon system with autonomy in its critical functions. That is, a weapon system that can select (i.e., search for or detect, identify, track, select) and attack (i.e., use force against, neutralize, damage or destroy) targets without human intervention.’ Many States and stakeholders participating in the Group of Governmental Experts on lethal autonomous weapon systems have relied on a similar understanding.

***

Editor’s note

This post is part of the AI blog series, stemming from the December 2018 workshop on Artificial Intelligence at the Frontiers of International Law concerning Armed Conflict held at Harvard Law School, co-sponsored by the Harvard Law School Program on International Law and Armed Conflict, the International Committee of the Red Cross Regional Delegation for the United States and Canada and the Stockton Center for International Law, U.S. Naval War College.
