The need for clear governance frameworks on predictive algorithms in military settings

Editor’s note: In this post, as part of the AI blog series, Lorna McGregor continues the discussion on detention and the potential use of predictive algorithms to assist decision-making in armed conflict settings, adding a human rights perspective.

***

The use of algorithms to predict likely future outcomes, and therefore to support decision-making on issues such as creditworthiness, social security and bail, is increasing. This use is often justified on grounds of increased efficiency, strategic resource allocation and enhancing evidence-based approaches to decision-making. However, as detailed below, the use of predictive algorithms increasingly faces challenges, including questions about its compatibility with international human rights law (IHRL). In this blog series and an accompanying article, Ashley Deeks argues that the use of predictive algorithms to ‘help [militaries] assess which actors are dangerous for purposes of detention and where future attacks are likely to occur for purposes of patrolling and targeting’ has not yet taken root, but may soon. In this post, I suggest that before introducing algorithms into decision-making, militaries need to develop clear governance frameworks for if, when and how algorithms can be used in decision-making, particularly in areas such as detention, in order to avoid conflicting with international law.

Why the challenges arising in other fields should matter to militaries

In suggesting that militaries should develop clear governance frameworks on the use of algorithms, I propose that they should look at the issues arising in other fields, particularly where algorithms have been deployed in decision-making without a clear legal, policy or governance framework in place. Concerns will, of course, be raised about drawing analogies with other fields. Indeed, even in an area such as detention, the law that governs detention in the criminal justice system differs from the law on internment in international armed conflict, which in turn differs from the developing rules on the legality and legitimacy of detention in non-international armed conflict. As Ashley Deeks points out, the social implications of using algorithms in one context may also differ when transferred or repurposed to another, so it is not possible to draw straight comparisons. Rather, each context has to be assessed on its own merits.

At the same time, the risks that I am concerned with, particularly to human rights, tend to run through most uses of algorithms in decision-making. Risks to human rights are important in the armed conflict context given the co-application of IHRL and international humanitarian law. However, when used in a new context, the scale and nature of these risks may vary. Militaries should use experiences in other fields as a starting point and then analyse how these issues might play out in an international or non-international armed conflict. In doing so, they should carry out specific impact assessments ahead of time to determine how the use of predictive algorithms may affect individual and group rights in new circumstances.

How algorithms affect human rights in other fields

Corporate actors as well as State agencies are increasingly integrating predictive algorithms into their decision-making, including in policing and the criminal justice system. As I have argued in a recent report with colleagues from the ESRC Human Rights, Big Data and Technology project, the use of predictive algorithms in policing and criminal justice carries the risk of amplifying or introducing new forms of bias and discrimination into decision-making. For example, in a report released last month on ‘Policing by Machine – Predictive Policing and the Threat to Our Rights’, the UK human rights organisation, Liberty, finds that ‘police algorithms [are] entrenching pre-existing discrimination, directing officers to patrol areas which are already disproportionately over-policed’.
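
To make this feedback dynamic concrete, the toy simulation below (all numbers and names are hypothetical, chosen purely for illustration) shows how allocating patrols in proportion to historically recorded incidents can lock in an initial recording disparity between two areas, even when the underlying rate of offending is identical.

```python
# Toy simulation (hypothetical numbers and names) of the feedback loop described
# above: patrols are allocated in proportion to historically recorded incidents,
# and recorded incidents rise with patrol presence, so a recording disparity
# between two areas persists even though underlying offending is identical.

recorded_incidents = {"area_a": 120.0, "area_b": 80.0}  # area_a already over-policed
underlying_rate = {"area_a": 1.0, "area_b": 1.0}        # identical actual behaviour
detection_per_patrol = 0.5                              # more patrols -> more records

for year in range(5):
    total = sum(recorded_incidents.values())
    # the 'predictive' step: direct 100 patrol units according to past records
    patrols = {area: 100 * n / total for area, n in recorded_incidents.items()}
    for area in recorded_incidents:
        recorded_incidents[area] += underlying_rate[area] * patrols[area] * detection_per_patrol
    print(year, {area: round(n, 1) for area, n in recorded_incidents.items()})
```

In this sketch the disparity in the record never closes, because the system keeps sending officers where the skewed record says to go.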

The risk of entrenching pre-existing discrimination arises because algorithms rely on input data, and this data is often incomplete and discriminatory. The data may not only relate to the individual but may also incorporate assessments of an individual’s family, friends, people with ‘similar’ profiles and the neighbourhood in which the person lives. Beyond input data, the algorithmic process can also operate in a discriminatory manner, depending on how it weighs factors within the system, and can therefore produce discriminatory outcomes. Indeed, algorithms work on the basis of correlation, not causation, and produce outputs at a group or population level that are not determinative for specific individuals. As Benvenisti has recently argued, this can undercut the ‘understanding that the law must treat each individual as being unique’. This is of particular concern in relation to decisions on whether to detain and whether to release people, as both international human rights law and international humanitarian law require such decisions to be individualised; the rights to due process, fair trial and liberty are therefore put at risk. In an armed conflict context, it could also undermine local populations’ confidence and trust in the military.
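
A minimal sketch of this group-level logic, using entirely hypothetical features, weights and individuals, illustrates the point: two people with identical individual records can receive very different risk scores purely because of proxies such as their neighbourhood or flagged associates.

```python
import math

# Minimal sketch (hypothetical features, weights and people) of the point above:
# a correlation-based risk score can differ sharply for two people with identical
# individual records, purely because of group-level proxies such as neighbourhood
# or flagged associates.

def risk_score(features, weights, intercept=-2.0):
    # Simple logistic score: a weighted sum of correlated factors, not a causal
    # or individualised assessment.
    z = intercept + sum(weights[name] * features[name] for name in weights)
    return 1 / (1 + math.exp(-z))

weights = {                       # learned from historical (possibly biased) data
    "prior_incidents": 0.8,       # individual-level factor
    "neighbourhood_rate": 1.5,    # group-level proxy
    "associates_flagged": 1.2,    # group-level proxy
}

person_a = {"prior_incidents": 0, "neighbourhood_rate": 0.9, "associates_flagged": 1}
person_b = {"prior_incidents": 0, "neighbourhood_rate": 0.1, "associates_flagged": 0}

print(round(risk_score(person_a, weights), 2))  # ~0.63: high score despite no individual record
print(round(risk_score(person_b, weights), 2))  # ~0.14: same individual record, lower score
```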

These concerns are often downplayed by the claim that the risk assessments enabled by predictive algorithms are only one piece of evidence and that, ultimately, a human is still ‘in the loop’ and will make the decision. However, the nature and complexity of today’s algorithms, as well as assertions of proprietary interest by companies that own the algorithms and do not want the full code made public, mean that it can be difficult for a human decision-maker to scrutinise the validity and strength of a risk assessment’s conclusions. There is also the risk that the human decision-maker overly defers to, or gives undue weight to, the algorithmic evidence because of its perceived scientific nature. This risk may be particularly acute in ‘higher stakes’ decisions, such as detention, where judges, or competent authorities, may be concerned about the impact of going against an assessment that recommends detention or not releasing a person, in case they are wrong and the person reoffends or, in an armed conflict context, an attack materialises.

Establishing a legal and governance framework before deploying algorithms

The introduction of algorithms as a means to support decision-making has not always been accompanied by a clear legal or governance framework. This may be because algorithms are treated as merely a technological tool, so the impact that their introduction has on the way an organisation functions may not be fully understood. I argue in a recent article in the European Journal of International Law that the use of algorithms in decision-making should be seen as a governance choice. This is because algorithms, even when used in support of a decision, fundamentally change how decisions are made. Their introduction may also mean that they become one of the main ways to gather evidence or assess risk, potentially displacing or diminishing traditional methods.

For example, in relation to police use of algorithms, the question often asked is: in an era of cut-backs and cost-driven efficiencies, what does the integration of predictive algorithms do to the availability of resources and of traditional methods for intelligence collection and building community relations, such as the number of police officers working in the community? What does any such change do to the overall ability of the police to carry out their work effectively? Interestingly, interviews conducted by my HRBDT colleagues, Pete Fussey and Daragh Murray, indicate that the use of predictive algorithms in a policing context may undermine officers’ effectiveness, by impairing their autonomy and obscuring why they are directed to police a particular area. In effect, an algorithm may override officers’ own intuition and experience.

These questions are clearly significant for militaries operating in armed conflicts. I therefore suggest that algorithms should not be introduced into decision-making processes without a full understanding and assessment of how they will affect the operation of a particular organisation. The technology cannot be examined in isolation; rather, it needs to be assessed for how it affects traditional approaches to decision-making and evidence collection, and whether it is the most effective way to approach decision-making.

Developing an oversight and accountability framework

Establishing a clear legal framework and governance structure is also critical to ensure that systems of oversight and accountability are in place. Where they are absent, a range of difficulties is likely to arise downstream, including litigation and public scrutiny of the choices key actors are making. A failure to plan may also frustrate or slow down the effective integration of algorithms into operations, due to a lack of trust and confidence in how the technology is being brought in. Bodies like the military, which are perhaps not yet using algorithms in decisions such as detention, have an opportunity to critically assess if, when and how algorithms can support decisions, and how to avoid and mitigate the risks, before the technology enters the system.

In a forthcoming article in the International and Comparative Law Quarterly, I argue with my co-authors, Daragh Murray and Vivian Ng, that before an algorithm is ever deployed, it should be assessed for its impact on human rights at the conception and design stage. We suggest that if the purpose or effect of the algorithm is to circumvent IHRL, it should not be used unless and until these effects can be removed. In her blog post, Ashley Deeks points to serious limitations in the availability and quality of input data in specific armed conflicts and to the challenges of testing algorithms. This raises questions about whether algorithms will be able to reach reliable conclusions that would be of any use in armed conflicts. Alongside the risk that individualised decisions will not be made, poor data quality and the difficulty of testing algorithms create a high risk of discrimination in such contexts, potentially leading to arbitrary detention and undermining confidence in the military.

For example, she notes that,

although the United States military is attuned to the need for cross-cultural competence and trains its forces accordingly, it is a difficult task. Militaries will need to work hard and carefully to understand what data to use in foreign settings to develop reliable detention algorithms—and will need to ensure that their computer scientists are cross-culturally trained as well. Further, the military will need to train its algorithms on the data of people who constitute ‘threats’ and on those who constitute ‘non-threats’, but that data is less likely to be tested as rigorously as criminal convictions are.

We also suggest that the exclusive use of algorithms to make detention decisions—i.e., without a human decision-maker ‘in the loop’—would be incompatible with IHRL. This is because algorithms reach conclusions through correlation, not causation, and therefore cannot offer individualised decisions. Having a human ‘in the loop’ will not, in itself, be sufficient; rather, the question is whether that individual’s involvement is effective. This will turn on factors such as the decision-maker’s ability to understand the algorithm, which is increasingly difficult given the complexity of modern systems, and the weight given to the algorithm’s outputs. It is critical that governance frameworks are put in place to safeguard against the risk that human decision-makers end up deferring to the conclusion of an algorithm in practice, making the ‘human in the loop’ redundant.

When discussing what an algorithmic risk score might do to the propensity of military actors to release a person in an armed conflict, the argument has been made to me that the military is less likely than a civilian judge to defer to an algorithm, because military actors operate in a context of risk and are therefore likely to be more comfortable making such decisions. Even so, these remain high-risk decisions, and issues will still arise where a military judge is unable to understand the intelligence base for the conclusion reached by an algorithm. Accordingly, from a governance perspective, this deference should remain a risk that is monitored and tested.

Finally, we suggest that algorithms should not be used in decision-making unless adequate monitoring and oversight systems are in place that can identify any potential or actual harm, particularly to human rights, and effectively prevent such harm or bring it to an end. How adequate oversight mechanisms should be designed and established, and whether this requires external as well as internal systems, is currently the subject of significant debate and analysis in related areas. It will be particularly critical for militaries to assess these questions before deploying algorithms in decision-making, including how due process can be ensured and how individuals subject to detention orders can challenge the influence of an algorithm on such a determination.

***

Editor’s note

This post is part of the AI blog series, stemming from the December 2018 workshop on Artificial Intelligence at the Frontiers of International Law concerning Armed Conflict held at Harvard Law School, co-sponsored by the Harvard Law School Program on International Law and Armed Conflict, the International Committee of the Red Cross Regional Delegation for the United States and Canada and the Stockton Center for International Law, U.S. Naval War College.



DISCLAIMER: Posts and discussion on the Humanitarian Law & Policy blog may not be interpreted as positioning the ICRC in any way, nor does the blog’s content amount to formal policy or doctrine, unless specifically indicated.


 

 
