As weapon systems take over more and more functions in the targeting cycle that were once fulfilled by humans, it becomes increasingly difficult to establish who bears criminal responsibility for an attack. But is the accountability gap really only a concern for the future? Do existing weapon systems already raise this problem?
This so-called “accountability gap” was one of the main concerns among states at the CCW (Convention on Certain Conventional Weapons) Meeting of Experts on Lethal Autonomous Weapon Systems, 11-15 April 2016. According to the ICRC, autonomous weapon systems are those that “select and attack” their targets without human intervention. While not all states at the CCW shared this definition, virtually all participants agreed that accountability for weapon systems has to exist at all times and that this should be a condition guiding the development of future weapons.
It is often forgotten that systems with a large degree of autonomy are already on the market. A good example are the so-called Active Protection Systems (APS) for vehicles, which have already been operationalized and used in combat. These systems automatically detect, intercept, and destroy incoming threats such as rocket-propelled grenades with a shotgun-like blast. Once the driver has activated the system, there is no further human intervention. So who is responsible if something goes wrong? Is there already an accountability gap?
There is no public data available as to how reliable APS are, nor is it the aim of this post to assess their reliability. Let us, however, imagine the following: once activated, an APS “commits” a serious violation of international humanitarian law (IHL). While driving through a cluttered environment, a malfunction of the triggering mechanism causes it to fire a disproportionate blast into a group of civilians. Could the officer who decided to activate the system be criminally responsible for war crimes?
At first sight, the officer’s responsibility seems quite clear-cut. The link between his or her actions and the crime in question might even be easier to prove for the use of an APS than for a human subordinate: the weapon fired because it was activated. Had the officer not activated it, it would not have fired. There is a crystal-clear causal link between the two.
But it is the subjective element (mens rea), which is required for an act to be a crime, that poses problems. In most cases the officer will not have directly intended the harm or have known with certainty that the APS would malfunction. Returning to our example, when switching on the APS, the officer did not know for sure that the outcome would be an attack violating IHL. At most he or she may have been able to foresee such a risk and nevertheless chose to activate the system. In other words, he or she might have acted with dolus eventualis, a form of intent in which the perpetrator is aware of the likely outcome but chooses to pursue the action anyway. It is less than secure knowledge, but more than pure negligence. In most cases the criminal liability of the officer will thus hinge on the notion of dolus eventualis. Did he or she switch on the APS despite the likely outcome of a misfire?
The first problem, however, is that the concept of dolus eventualis does not exist in all legal fora. In fact, the Statute of the International Criminal Court (ICC) – the only permanent international criminal court we have – seems to explicitly discard this concept in Art. 30, which sets the higher threshold of “intent and knowledge”. If the officer in our example were tried before the ICC, a conviction would be unlikely.
The second question is how far we can stretch the concept of dolus eventualis. Many officers will not have in-depth technical knowledge of the algorithms of the APS, and they can hardly know under what circumstances misfiring is likely. Such malfunctions are hard to foresee. One could argue that officers should know; after all, they are in charge. But this would turn dolus eventualis into something it is not: it is a form of intent, not a species of negligence.
Sometimes it seems that we are frantically looking for criminal liability and forgetting what criminal liability is actually about: individual guilt. We may feel that it is fundamentally unjust that nobody can be held accountable for the machine’s failure. But does this increase the individual criminal guilt of the officer? Let us replace the APS with a real soldier and assume this generally reliable soldier decides to directly target a civilian. Few would claim that the officer in charge of the soldier is criminally responsible as a principal because he or she foresaw the likely outcome of the attack. We would rather say that the responsibility rests with the soldier who actually fired. So why should the individual guilt of the officer increase simply because he or she used an APS, which may actually have a lower failure rate than a human soldier? The fact that we cannot hold the machine itself criminally accountable does not automatically mean that the officer had dolus eventualis with regard to the misguided attack. By stretching this concept to cover such cases, we are abusing criminal law for a purpose it should not serve. It should always be about individual guilt, nothing more and nothing less.
To sum up: yes, there is already an accountability gap for existing systems, in the sense that for some serious violations of IHL “committed” by an APS it will be impossible to attribute criminal responsibility to a person (whereas this would be possible had a human pulled the trigger). Stretching dolus eventualis to fill this gap is not the right way forward and would lead to serious incoherence. This leaves two options. Either we accept the reality that no human being will bear criminal responsibility and content ourselves with the responsibility of the State deploying such weapons, which will be accountable under the framework of state responsibility; that framework does not require an element of intent and thus encounters none of the problems mentioned above. Or we create rules allowing for negligence liability in international criminal law, as exist, for example, for negligent homicide in many national legal systems.
- Mind the Gap: The Lack of Accountability for Killer Robots, Human Rights Watch (April 2016).
- Third Meeting of Experts on Lethal Autonomous Weapon Systems, CCW (April 2016).
- Autonomous weapon systems: Technical, military, legal and humanitarian aspects, ICRC (expert meeting report, 26-28 March 2014).