The human nature of international humanitarian law

International humanitarian law (IHL) regulates the use of force in armed conflict. It inherently provides protections to victims of armed conflict while humanizing, at least to some degree, some of man’s most inhumane acts. The IHL principles of distinction, humanity, unnecessary suffering and proportionality thus serve to temper the application of military necessity. In an age of emerging technologies, the international community is deep in discussion about how these principles will apply, particularly to weapon systems that will make autonomous decisions involving life and death through the application of machine learning and the development of artificial intelligence. Such discussions should cause us to reflect on a foundational question about the application of IHL: is the law regulating armed conflict designed to provide the ‘best protections possible’ for victims of armed conflict, or the ‘best protections humanly possible’? After all, the current standards for IHL compliance are often described in terms of human decision-making, i.e., a human commander must make a specific legal determination, as with proportionality, discussed below.

Does this mean that the actual legal standard is tied to human decision-making? If the standard is ‘best humanly possible’, then any emerging technology would have to remain subject to human determinations of IHL application, with the recognition that those decisions will continue to be subject to human oversight and potential human error. Note that the ICRC has made two statements relevant to this question [1].

If, however, the requirement is the ‘best possible’ application of IHL, and we have any belief that autonomous weapons, or weapons using artificial intelligence or machine learning, can in fact apply force in a way that in at least some circumstances results in better protection for humans, then we reach a different result. In that case, the international community should be encouraging the development of autonomous weapons that apply machine learning or artificial intelligence on the battlefield, because they might, and arguably are likely to, be able to apply the legal requirements of IHL in a way that results in greater protections for victims of armed conflict.

It should be noted at this point that every weapon system, including any autonomous weapon that applies machine learning or artificial intelligence, must undergo and meet the requirements of a weapons review. There is no legal possibility of fielding a weapon that does not comply with all the requirements of such a review. The significance of determining the role of a human in a lethal targeting decision is that it provides the foundational rationale for that review. For an autonomous weapon to be fielded, it must be thoroughly tested and shown to be capable of applying IHL correctly on the battlefield.

The important question raised here is the standard for that review. If the standard is that the weapon system must be able to apply the law in a way that provides the best protections humanly possible, then certain types of autonomous capabilities need not be researched and developed. If, however, the standard is to apply IHL in a way that results in the best protections possible for potential victims of armed conflict, a vast array of possible autonomous weapons that use machine learning and artificial intelligence without real-time human involvement may now be capable of development and deployment.

Principle of distinction

Best protection humanly possible

To illustrate the difference between ‘best protections possible’ and ‘best protections humanly possible’, consider the principle of distinction. Under IHL, every individual who engages in an attack has an obligation to apply the principle of distinction. In particular, it is never lawful to target civilians. It is also unlawful to fail to take feasible precautions to protect civilians who might be incidentally injured or killed in an otherwise lawful attack. Failure to comply with these legal requirements is a violation of the law of war. Members of armed forces can be held individually criminally liable for failing to properly apply distinction, and assertions alleging such violations are routinely made [2].

At the same time, few who have been in armed conflict will argue that mistakes never happen and that civilians are never wrongly, though unintentionally, targeted. Often these unintentional deaths result from a misapplication of the principle of distinction, a failure of intelligence or, sometimes, simple human error. In such situations, the ability to quickly gather and analyze all available data on a target will often make the difference for the military commander making the targeting decision.

Best protection possible

Now consider an autonomous weapon system that is tied to a vast array of sensors and designed to incorporate machine learning, allowing it to gather and analyze huge amounts of data much more quickly than the human brain. Such a system might, for example, be better able to discern the difference between a hostile fighter and a non-hostile civilian in a crowd of people, based on sensors spread across the area that provide otherwise unobservable data on the individuals in the crowd. Note that autonomous systems driven by machine learning have already demonstrated the ability to outperform humans in very intricate and complex analyses, such as correctly diagnosing medical conditions and playing complex games.

If such a system could be fielded with a statistically better chance of reaching a correct distinction conclusion, based on its ability to gather and analyze a much larger set of data more quickly, it would likely decrease the chance of innocent deaths. On a view of IHL in which human decision-making is not an integral part of legal compliance, it does not matter that a human was not applying the principle of distinction. Rather, what matters is that the principle was applied correctly more often, or that death and injury to civilians were lower, than under human decision-making.

Principle of proportionality

Best protection humanly possible

Similarly, consider the application of proportionality. Commanders are obliged to refrain from attacks in which death or injury to civilians and/or damage to civilian objects would be excessive in relation to the concrete and direct military advantage anticipated from the attack (Additional Protocol I, Article 51(5)(b)). Perhaps the most ‘human’ aspect of that decision is the balancing of the anticipated military advantage against the potential collateral damage. For those who believe IHL requires the best decision ‘humanly’ possible, the human aspect of that decision is likely very important, even if the outcomes of some proportionality decisions are strongly criticized.

Under this view, where no lethal targeting decision taken without human input can comply with IHL, talk of technological innovation must be tied to creating better ways to support humans in their inherently human decisions. This view does not make AI and machine learning research and development useless, but it does scope such research and development toward supporting the human decision-maker rather than creating an independent decision-maker.

Best protection possible

For those who believe that the ‘best’ application of IHL, including the principle of proportionality, is the one that results in the least collateral damage while still accomplishing the military mission, a decision made autonomously or through machine learning or artificial intelligence may amount to a ‘better’ application of the principle, because it has the potential to produce fewer civilian casualties.

Technology optimists

A technology optimist will believe that autonomous weapons reaching ‘better’ conclusions than humans is entirely possible and, given enough research and development, probable in certain situations. An autonomous weapon system that is not affected by emotions (such as anger, fear and aggression) or subject to physical limitations (such as limited senses, fatigue or an inability to quickly process all the factual data available at the point of decision) is likely to be able to apply these principles in a more legally compliant way. To the extent that this optimistic view of technology is accurate, the international community should be strongly encouraging the research and development of autonomous weapons with these capabilities, in order to enable IHL principles to be applied more accurately. If autonomous weapons that apply machine learning or artificial intelligence could be developed and more civilian lives could be spared, some will even argue that States have an obligation to develop such weapons.

Technology skeptics

In contrast, technology skeptics will argue that such technology does not currently exist and is unlikely ever to be developed; therefore, we should not research and develop these technologies for weapons applications, or at the very least should move forward with great caution. In their view, there is significant uncertainty that such research and development will ever produce machine learning or artificial intelligence able to apply IHL principles in a way that yields ‘better’ results than humans.

Role of human decision-making in IHL

Even though there may be reason for serious caution about the path technology will take with respect to decision-making capability, technology skeptics often do not really address the fundamental issue of the role of human decision-making in IHL. Whether research and development is likely to reach a successful conclusion is not determinative of whether States that take a more optimistic view can or should engage in research and development to that end. Rather, the fundamental question is whether IHL precludes non-human decision-making in the application of lethal force, such that States are precluded from pursuing these technological developments.

And so, as technology continues to develop, the issues surrounding AI and machine learning in autonomous weapon systems come back to the fundamental question of whether IHL requires the best ‘human’ application of the law or simply the best ‘possible’ application of the law. The fact that it may someday be possible to apply IHL in a way that reduces death and injury to civilians through non-human decision-making should encourage us to consider and answer this question now.

Footnotes

[1] ICRC Statement of 18 April 2018 to the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts on Lethal Autonomous Weapons Systems, ‘Towards limits on autonomy in weapon systems’; ICRC Statement of 15 November 2017 to the CCW Group of Governmental Experts on Lethal Autonomous Weapons Systems, ‘Expert Meeting on Lethal Autonomous Weapons Systems’.

[2] ICTY, Prosecutor v Ante Gotovina, Ivan Čermak and Mladen Markač, Judgment, IT-06-90-T, Trial Chamber I, 15 April 2011 (Gotovina Trial Judgment); ICTY, Prosecutor v Ante Gotovina and Mladen Markač, Judgment, IT-06-90-A, Appeals Chamber, 16 November 2012 (Gotovina Appeals Judgment).

***

Related blog posts

➡ Autonomous weapons: Operationalizing meaningful human control Merel Ekelhof

➡ Human judgment and lethal decision-making in war Paul Scharre

➡ Autonomous weapon and human control Tim McFarland

➡ Autonomous weapon systems: An ethical basis for human control? Neil Davison

➡ Autonomous weapon systems: A threat to human dignity? Ariadna Pop

➡ Ethics as a source of law: The Martens clause and autonomous weapons Rob Sparrow

➡ Autonomous weapons mini-series: Distance, weapons technology and humanity in armed conflict Alex Leveringhaus

➡ Introduction to Mini-Series: Autonomous weapon systems and ethics


DISCLAIMER: Posts and discussion on the Humanitarian Law & Policy blog may not be interpreted as positioning the ICRC in any way, nor does the blog’s content amount to formal policy or doctrine, unless specifically indicated.


 
