As weapon systems take over more and more functions in the targeting cycle that used to be fulfilled by humans, it becomes increasingly difficult to establish who will bear criminal responsibility for an attack. But is the resulting accountability gap really only a concern of the future? Do existing weapon systems already raise this problem?
This so-called “accountability gap” was one of the main concerns among states at the CCW (Convention on Certain Conventional Weapons) Meeting of Experts on Lethal Autonomous Weapon Systems, 11-15 April 2016. According to the ICRC, autonomous weapon systems are those that “select and attack” their targets without human intervention. While not all states at the CCW shared this definition, virtually all participants agreed that accountability for weapon systems has to exist at all times and that this should be a condition guiding the development of future weapons.
It is often forgotten that there are already systems on the market with a large degree of autonomy. Good examples are so-called Active Protection Systems (APS) for vehicles, which have already been operationalized and used in combat. These systems automatically detect, intercept, and destroy incoming threats such as rocket-propelled grenades with a shotgun-like blast. Once the driver has activated the system, there is no further human intervention. So who is responsible if something goes wrong? Is there already an accountability gap?
There is no public data available as to how reliable APS are, nor is it the object of this post to discuss this. Let us, however, imagine the following: once activated, an APS “commits” a serious violation of international humanitarian law (IHL). While the vehicle is driving through a cluttered environment, a malfunction of the triggering mechanism causes the system to fire a disproportionate blast into a group of civilians. Could the officer who decided to activate the system be criminally responsible for war crimes?
At first sight, the officer’s responsibility seems quite clear-cut. The link between his or her actions and the crime in question might even be easier to prove for the use of an APS than for a human subordinate: the weapon fired because it was activated. Had the officer not activated it, it would not have fired. There is a crystal-clear causal link between the two.
But it is the subjective element (mens rea), which is required for an act to be a crime, that poses problems. In most cases the officer will not have directly intended the harm or have known with certainty that the APS would malfunction. Returning to our example, when switching on the APS, the officer did not know for sure that the outcome would be an attack violating IHL. At most, he or she may have been able to foresee such a risk and nevertheless chose to activate the system. In other words, he or she might have acted with dolus eventualis, a form of intent where the perpetrator is aware of the likely outcome but chooses to pursue the action anyway. It is less than secure knowledge, but more than pure negligence. In most cases the criminal liability of the officer will thus hinge on the notion of dolus eventualis. Did he or she switch on the APS despite the likely outcome of a misfire?
The first problem, however, is that the concept of dolus eventualis does not exist in all legal fora. In fact, the statute of the International Criminal Court (ICC) – the only permanent international criminal court we have – seems to explicitly discard this concept in Art. 30 and sets the higher threshold of “intent and knowledge”. If the officer in our example were tried before the ICC, a conviction would be unlikely.
The second question is how far we can stretch the concept of dolus eventualis. Many officers will not have in-depth technical knowledge of the algorithms of the APS, and they can hardly know under what circumstances misfiring is likely. Such malfunctions are hard to foresee. One could argue that officers should know; after all, they are in charge. But this would turn dolus eventualis into something it is not: it is a form of intent, not a sort of negligence.
Sometimes it seems that we are frantically looking for criminal liability and forget what criminal liability is actually about: individual guilt. We may feel that it is globally unjust that nobody can be held accountable for the machine’s failure. But does this increase the individual criminal guilt of the officer? Let us replace the APS with a real soldier and assume this generally reliable soldier decides to directly target a civilian. Few would claim that the officer in charge of the soldier is criminally responsible as a principal because he or she foresaw the likely outcome of the attack. We would rather say that the responsibility rests with the soldier who actually fired. So why should the individual guilt of the officer increase simply because he or she used an APS, which may actually have a lower failure rate than a human soldier? The fact that we cannot hold the machine itself criminally accountable does not automatically mean that the officer had dolus eventualis with regard to the misguided attack. By stretching this concept to cover such cases, we are abusing criminal law for a purpose it should not serve. It should always be about individual guilt, nothing more and nothing less.
To sum up: yes, there is already an accountability gap for existing systems, in the sense that for some serious violations of IHL “committed” by an APS, it will be impossible to attribute criminal responsibility to a person (whereas this would be possible if a human had pulled the trigger). Stretching dolus eventualis to fill this gap is not the right way forward and will lead to severe incoherences. This leaves two options. Either we accept the reality that no human being will bear criminal responsibility and content ourselves with the responsibility of the State deploying such weapons, which will be accountable under the framework of state responsibility; the latter does not require an element of intent and thus encounters none of the problems mentioned above. Or we create rules that allow for negligence liability in international criminal law, as they exist, for example, for negligent homicide in many national systems.
Read also
- Mind the Gap: The Lack of Accountability for Killer Robots, Human Rights Watch (April 2016).
- Third Meeting of Experts on Lethal Autonomous Weapon Systems, CCW (April 2016).
- Autonomous weapon systems: Technical, military, legal and humanitarian aspects, ICRC (expert meeting report, 26-28 March 2014).
Dear Michael, thank you for your contribution. It is quite interesting, but allow me to make a few remarks.
I agree with your conclusion that there is an accountability gap when it comes to dolus eventualis, but I cannot agree that this problem is specific to autonomous systems.
First, I am not comfortable with your example of the APS discharging a blast against a group of civilians, which you qualify as “an attack violating IHL”. Actually, I cannot see how it is different from an unintended explosion of military ordnance or, even better, a mine killing a civilian. In both cases there is a blast resulting in the death of civilians. However, one may say, the difference lies in the fact that in the case of the APS it is the system that “decides” to discharge the blast, and not pure chance. Even if that is correct, which I personally doubt, a mine can be considered an autonomous device too, as it also “decides” on its own whom to kill. So my point here is that I see some problems in calling APS autonomous weapons in the pure sense. In my opinion, if the cited scenario, unrealistic as it is, does happen, it would be closer to my examples than to the truly notorious case of a machine intentionally killing a human being.
So here arises the problem of a real definition of what autonomous weapons are, and here we can have some nice definitional exercises. I think it requires a little more than just identifying some key parameters of the target, such as approaching the tank at high speed in the case of APS. Otherwise we would end up with the conclusion that any modern weapon system is an autonomous weapon, since it can “willfully” deflect from the trajectory imparted by the muzzle. We could then conclude, for example, that in 2001 a civil airplane was not shot down by mistake by the Ukrainian Air Force, but precisely by the discharged anti-air missile that “decided” to attack the Tu-154.
And here comes my second point, which I have already mentioned.
The real problem with accountability you are pointing to is not really about autonomous weapons, but about dolus eventualis in general. In domestic legal systems, if I want to fire at a bird and somehow injure or kill my friend, I will be prosecuted precisely for culpable negligence. But in international criminal law, if I want to shoot down a military aircraft and launch an anti-aircraft missile while being sure that my target is a military one, but end up shooting down a civilian airplane (see a more recent story about a civilian aircraft and Ukraine), there is a problem, because we all know the IHL mantra that “one cannot commit a war crime by not taking precautions”. And that, I think, is exactly the problem you are pointing at.
If we come back to autonomous weapons, I think, first of all, there is a need for a narrower definition of this kind of weaponry, as the ICRC’s definition cited at the beginning of the article is too broad and can encompass too many things, from mines (which do select, as they do not attack those who do not step on them or do not have enough weight) to APS and almost all kinds of modern missiles. In my opinion, the real autonomous weapons are those that can distinguish at least between combatants and civilians, but this is of course also controversial. And then, once we have more or less agreed on what we are fearing, we can look for solutions regarding accountability and other issues, but that will be another story.
For the moment, I am not comfortable with the ICRC’s definition, because if it really becomes operational it will, in my mind, severely complicate the problem. It mixes two different aspects: that of individual accountability for not taking precautions and that of machines taking their own decisions, which in the case of APS is, I think, still far-fetched.
Dear Artem,
Thank you very much for your comment. I hope the following will adequately address your concerns:
1. On the term “autonomous” weapons
The point of this post was not to get into the definition of “autonomous”. States, experts and civil society have not yet been able to agree on this after long discussions. So when you say APS are not “autonomous weapons in a pure sense”, do you mean that APS are not yet the final stage of autonomy? On this I would certainly agree. But for the purposes of this blog post I see autonomy as something very simple: a weapon system that takes over functions that were formerly carried out by humans. Every time a weapon does something that humans used to do, you will have increased problems with the mens rea. You have to prove that the human “behind the weapon” still knew what the weapon would do, in order to establish accountability. I think this is the essence of the question. And you can call these weapons “automated” or “autonomous” or “semi-autonomous”, but it won’t change the fact that you will run into problems with the mens rea.
2. Specifically on mines
You have brought up the example of mines in that context. Firstly, they are fundamentally different in that they are made to injure people, whereas an APS per se is not. But leaving that aside, you have mentioned that mines, too, have a certain degree of autonomy (according to the broad definition). Here I would agree with you: they take over the function of a human who would otherwise push the detonator. But what sets landmines apart from APS is that they function according to a much simpler pattern, which is easy for each and every soldier to understand. A mine blows up if you step on it. When you lay a mine near a road or village, you know that the likely outcome is that you will sooner or later injure or kill a person. You don’t know whom exactly, but it is likely to harm someone. This is a clear case of intent, and the person behind the mine would incur responsibility. For the APS, however, it is not as clear-cut. When will it injure somebody? And could the soldier be sure of the outcome?
3. Lack of precautions as a war crime
Finally, I agree that not taking precautions is not a war crime, even though it has been argued otherwise for extreme cases (see e.g. Oeter in Fleck, ‘Handbook of Humanitarian Law in Armed Conflicts’, 1995, p. 457). I do not agree, however, that everything is a matter of precautions. Dolus eventualis describes the case where you had a hunch that things might go wrong, but you did not stop. To take your example, if you shoot at the bird and hit your friend, you were probably not aware that you were going to hit your friend (unless your friend is a bird). Whereas when you shoot down the airplane, mistaking it for a military objective, you actually hit the object you intended to hit. It then becomes a question of how sure you were that the airplane was a military objective. If you knew that it was likely not to be one, this would be a case of dolus eventualis; then we are not only talking about precautions, but about directly targeting civilians. If you were sure, well, then we are in the realm of precautions.
Finally, I want to recall that this post is not meant to bash APS; it recognizes that this is generally a defensive system. But it strives to show, by means of an example, that there are existing systems that take over so much of the targeting cycle that it will be difficult to establish accountability. It has been argued that for such systems the officer remains responsible. I cannot share this view; I don’t see how the officer could or should be held accountable for a misfiring of the machine, be it an APS or another system that functions in a similar way.
“Every time a weapon does something that humans used to do, you will have increased problems with the mens rea. You have to prove that the human “behind the weapon” still knew what the weapon would do, in order to establish accountability. I think this is the essence of the question. And you can call these weapons “automated” or “autonomous” or “semi-autonomous”, but it won’t change the fact that you will run into problems with the mens rea.”
You’re absolutely right in saying that this is the essence of the question. And that’s exactly where we disagree.
It is true that with all these types of weapons a prosecutor will have problems with mens rea, but I argue that the nature of these problems will differ depending on the type of weapon.
Seventy years ago, artillerymen used to calculate the elevation angle and deflection themselves, monitor the temperature of projectiles, assess the necessary amount of propellant powder, etc. Today all these processes are automated, and if at any of these stages the computer makes a mistake, the projectile may well hit a civilian village instead of an enemy tank. I have some problems calling this a mistake of an autonomous system. In my opinion, it is exactly the scenario of the APS, and the problem of mens rea will be the same as well: the problem of not taking precautions, the dolus eventualis of the operator (most probably the operator; as to why, see my example of the demolished building below), etc.
And there can be another scenario, which you touched upon in your original article: a scenario similar to that of a commander and their soldier, where the commander may not reasonably be held responsible for the unforeseeable acts of that soldier. To be similar to such a situation, that is, to a soldier, an autonomous weapon should, in my mind, have more “autonomy” than merely being able “to do something that humans used to do”. And here, when this weapon has an artificial intelligence similar to a human mind, we will have a real problem with accountability. Because normally the soldier would be criminally responsible for their acts, but one cannot prosecute a robot. So whom should you prosecute if the robot “took the decision itself”, or most nearly so? The person who activated the robot? The person who designed the robot? The person who produced the robot?
So, for me there is a real difference between these two cases. In the first, the failure of the system results in civilian losses which, had there been direct intent, would qualify as a war crime. In the second, it is not the failure of the system, but rather a DECISION taken by the system, that leads to civilian deaths.
The first scenario is about dolus eventualis and precautions. You say that you “don’t see how the officer could or should be held accountable for a misfiring of the machine”. If we understand misfiring in my sense, i.e. the case where there is a failure of the system (AA missile, artillery calculator, or APS), I honestly do not know the right answer in this particular scenario, but I can suggest some solutions. The first may be the traditional one: to say that there IS indeed an accountability gap, that you cannot commit a war crime by not taking precautions, that losses not anticipated in the ex ante evaluation of the consequences of the attack should not be taken into account, etc. Another approach may be to follow general principles inferred from domestic systems. For example, if you want to blow up an old building, you put the explosives in and set the timer for 4 p.m., but for some unknown reason it blows up at 2 p.m., when there are still workers inside who did not expect the explosion at that time. Here someone (most probably, I think, the person responsible for the demolition operation, but I may be mistaken) will be held accountable under (culpable) negligence for the deaths of those workers, although it was a failure of the system. Personally, I think that the second approach is more intellectually honest: since there is criminal accountability for negligence, culpable or not, in domestic systems, why should it be otherwise at the international level when it comes to situations of armed conflict?
The second scenario is about artificial intelligence, the situation where a weapon has a kind of personality and this system “commits” a war crime. And here not only do I not know the right answer to the question of who should be held accountable and under which modality, but I cannot even suggest any theories that could elucidate the issue. I think it is a real problem for scholars to elaborate on.
So, to sum up, I think there is a world of difference between systems that can simply “do something that humans used to do” and systems that may amount to actual soldiers. The problems of accountability for the failures of these two types of systems are as different as the systems themselves, and until there is an accepted definition of what an autonomous weapon actually is, there is no point in discussing problems of accountability for these systems, because, for me, the accountability problem you are pointing at is not a problem of autonomous systems at all, while for you it is. And if our starting points are different, so will our answers be.
A Global Commitment!
Treating Al-Qaeda, the Taliban, IS and Their Partners as International Criminals
In 1996, Osama bin Laden and his organization declared war on the USA. This declaration has never been recanted by successive leaders of this non-state actor.
During the Nuremberg trials, the Nazi SS and other organizations were declared criminal entities under international law, as seen in the Charter of the International Military Tribunal (the “Nuremberg Charter”), annexed to the Agreement for the Prosecution and Punishment of the Major War Criminals of the European Axis.
Thus, it would be preposterous if non-state actors engaged in armed conflicts and/or acts in breach of IHL were deemed criminal entities under international law but were not punished in accordance with it. Given that members of the Lord’s Resistance Army – a known non-state actor and criminal entity – have been prosecuted, tried, convicted and sentenced by the International Criminal Court for acts contravening IHL, it would be paradoxical if members of Al-Qaeda, the Taliban, IS, etc. could not be brought before an international criminal tribunal and held accountable and criminally responsible for atrocity crimes.