
Structural disconnects between algorithmic decision-making and the law

Artificial Intelligence and Armed Conflict / Conduct of Hostilities / Detention / Law and Conflict / New Technologies

Editor’s note: There are disconnects between how algorithmic decision-making systems work and how law works, according to computer scientist Suresh Venkatasubramanian. Moving forward, he calls on us to look more closely at these disconnects. This post is part of the AI in armed conflict blog series.

 

***

I’d like to reflect more broadly on what I’m beginning to realize is an epistemic disconnect between technology (and machine learning-based modeling in particular) and the law. It’s not a surprise that technological advances have sent shockwaves through the legal system. As our ability to use technology to target and profile people advances, current legal guidelines seem to be struggling to keep up. But I argue that there are much deeper disconnects between the very way a ‘computer science-centric’ viewpoint looks at the world and at human processes, and the way the law looks at them. I will focus on two aspects of this disconnect: the tension between process and outcome, and the challenge of vagueness and contestability. And while I’ll draw my examples from our workshop discussions of AI in war zones, the points I make apply quite generally.

Tension between outcome and process

If we look at the guidelines for the treatment of detained persons in a war zone, we see a very detailed process that a detaining authority must follow. The assumption is that following this process keeps the authority in compliance, and more importantly, that the process embodies the norms that the designers of the guidelines wanted to capture. In effect, the process represents a form of fairness—a procedural fairness.

Algorithmic decision-making systems are not evaluated based on their process. They are evaluated based on an outcome: does the system work or not? And the definition of ‘works’ is based on (in the case of machine learning) agreement with some prespecified examples of scenarios that ‘work’ and scenarios that ‘don’t’. To use a legal analogy, this is like defining a fair decision by deriving a rule from past decisions that someone had judged ‘right’ or ‘wrong’ on the basis of their outcomes. In one sense this is entirely circular: we are deciding what is ‘right’ based on someone deciding what is ‘right’. But in another sense, an outcome-based notion of rightness is the only reasonable way we can manage the complexity of an algorithmic decision system, because the choices that go into the design of such a system are so minute and so numerous that it would be very difficult to formulate any notion of fairness or justice based on the process of training a model. A side note: cryptographers have investigated the possibility of adversarially hiding the details of a piece of code so that all we can discern from it is its outcome. While this is not possible in general for any program, it is possible to take two programs that are functionally equivalent (i.e., have the same outcomes) but involve different processes, and obfuscate them so they cannot be distinguished from each other.
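To make the contrast concrete, here is a minimal, hypothetical sketch (the names and data are mine, not drawn from any real system) of what outcome-based evaluation looks like: a decision function is scored purely on how often its outputs match examples that someone has already labelled, and nothing about its internal process enters the evaluation.

```python
# Hypothetical sketch: 'works' is defined as agreement with prespecified,
# pre-labelled scenarios; the process behind each decision never appears.

def evaluate_by_outcome(decide, labelled_scenarios):
    """Score a decision function purely on its outputs."""
    correct = sum(1 for scenario, expected in labelled_scenarios
                  if decide(scenario) == expected)
    return correct / len(labelled_scenarios)

# Examples that someone, at some point, decided were the 'right' answers.
labelled_scenarios = [
    ({"risk_score": 0.9}, True),
    ({"risk_score": 0.2}, False),
    ({"risk_score": 0.7}, True),
]

# Any rule at all can be dropped in here; only its outcomes are visible.
decide = lambda scenario: scenario["risk_score"] >= 0.5

print(evaluate_by_outcome(decide, labelled_scenarios))  # 1.0
```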

This disconnect comes up again and again, and it illustrates why legal reasoning around algorithmic decision-making that ‘bypasses the black box’ is destined to fail: it is impossible to design a process-based notion of fairness that ignores a huge process sitting in the middle. Indeed, one of the reasons computer scientists have latched on so firmly to topics like discrimination is that there are outcome-based notions, such as the statistical test at the heart of the disparate impact doctrine, that seem amenable to analysis, even as the larger doctrine (and its process elements) goes unheeded.

This is not a design flaw. At least for computer scientists, the focus on outcomes is a central idea, because it is linked to the deeper idea of abstraction: that we can evaluate a system purely on its inputs and outputs and abstract away the implementation. This allows us to put systems together like Lego blocks, without worrying about how each piece is implemented.

But if the expression of norms is in the form of a specific process or implementation, then we need to institute ways to freeze that implementation—or at least continually audit it—in a manner we don’t typically apply to software. Even the unit tests we build for software check inputs and outputs, rather than process.
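As a minimal, hypothetical illustration (the functions and the test are mine, chosen only for brevity): a standard unit test like the one below verifies outputs for given inputs, so two implementations with very different internal processes pass it identically and are indistinguishable to it.

```python
# Hypothetical example: the test below only constrains inputs and outputs,
# so it cannot tell these two processes apart.
import unittest

def sum_iterative(n):
    """Add the integers 1..n one at a time."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    """Use the closed-form formula n * (n + 1) / 2."""
    return n * (n + 1) // 2

class OutcomeOnlyTest(unittest.TestCase):
    def test_outputs_match_specification(self):
        for n in (0, 1, 10, 1000):
            expected = n * (n + 1) // 2
            self.assertEqual(sum_iterative(n), expected)
            self.assertEqual(sum_closed_form(n), expected)

if __name__ == "__main__":
    unittest.main()
```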

Challenge of vagueness and contestability

A second, and more subtle, concern that arises when trying to express legal notions in technological terms is the issue of ‘vagueness’, or ‘constructive ambiguity’. If we look at the principles of distinction, proportionality and precautions under international humanitarian law as guidance for when an attack is considered permissible, we see a lot of judgement framed in terms that to a computer scientist seem imprecise. One might argue that the vagueness in these terms is by design: it allows nuance, context and human expert judgement to play a role in a decision, much like how a judge’s discretion plays a role in determining the severity of a sentence. Another view of this ‘vagueness by design’ is that it allows for future contestability: if commanders are forced to defend a decision later on, they can do so by appealing to their own experience and judgement in interpreting a situation. In the context of international law, an excellent piece by Susan Biniaz illustrates the value of constructive ambiguity in achieving consensus. There is extensive literature in the philosophy of law defending the use of vagueness in legal guidelines, arguing that precision might not serve larger normative goals and might also shift the balance of decision-making power away from where it needs to be.

But what of algorithm-driven targeting? How is a system supposed to learn which targets satisfy the principles of proportionality, distinction and precaution, when to do so it must rely on a precise labeling that, by design, almost cannot exist? Models may be imprecise in a strict probabilistic sense, but they need precision in order to be built. And this precision is at odds with the vagueness baked into legal language.
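A minimal sketch, entirely hypothetical and deliberately abstract, of the precision a supervised learner demands: before any model can be built, every training example has to carry a definite label, even though the model’s eventual outputs are merely probabilities.

```python
# Hypothetical stand-in for a supervised learner: it refuses to proceed
# unless someone has already collapsed a vague judgement into a hard label.

def fit(examples):
    """'Train' on (features, label) pairs; labels must be exactly 0 or 1."""
    for features, label in examples:
        if label not in (0, 1):
            raise ValueError("supervised training needs a precise label per example")
    positive_rate = sum(label for _, label in examples) / len(examples)
    # The fitted 'model' outputs a probability, but it was built from certainties.
    return lambda features: positive_rate

training_data = [
    ({"context": "scenario A"}, 1),  # someone had to commit to 'yes'
    ({"context": "scenario B"}, 0),  # and to 'no', with no room for 'it depends'
]

model = fit(training_data)
print(model({"context": "scenario C"}))  # 0.5: probabilistic output, precise inputs
```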

What I fear is that in order to implement AI-driven systems in such a setting, designers will settle for a kind of illusory precision: a system will be built from arbitrary but precise choices made in part by programmers, yet the resulting black box will be described as having the desirable broader normative properties. The problem is then one of transparency and contestability: the black box can no longer be interrogated to understand the nature of its arbitrary precision, and its interpretations cannot be challenged later on. For more on this, I’d strongly recommend reading Danielle Citron’s work on Technological Due Process.
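A small hypothetical sketch of what such illusory precision can look like in code (the name, threshold and wrapper are invented for illustration): an arbitrary numeric choice is buried inside a function whose name advertises a normative property the code does not actually establish.

```python
# Hypothetical example of illusory precision: an arbitrary constant dressed
# up as a normative judgement.

DECISION_THRESHOLD = 0.7  # why 0.7? a programmer's choice, later hard to interrogate

def meets_normative_standard(model_score: float) -> bool:
    """Presented as checking a broad normative property; in fact it only
    compares a score against an arbitrary constant."""
    return model_score >= DECISION_THRESHOLD

print(meets_normative_standard(0.71))  # True
print(meets_normative_standard(0.69))  # False: the gap carries no normative meaning
```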

***

To reiterate: the disconnect between the algorithmic and legal ‘views of the world’ is deep, and it is difficult to resolve with ‘the right data’ or ‘the right models’. Attempts to deploy AI in settings governed by law will need to reckon with this.

***

Editor’s note

This post is part of the AI in Armed Conflict Blog Series, stemming from the December 2018 workshop on Artificial Intelligence at the Frontiers of International Law concerning Armed Conflict held at Harvard Law School, co-sponsored by the Harvard Law School Program on International Law and Armed Conflict, the International Committee of the Red Cross Regional Delegation for the United States and Canada and the Stockton Center for International Law, U.S. Naval War College.



DISCLAIMER: Posts and discussion on the Humanitarian Law & Policy blog may not be interpreted as positioning the ICRC in any way, nor does the blog’s content amount to formal policy or doctrine, unless specifically indicated.


 
