‘Conflict, Rights and the Machine: Addressing the Evolving Methods of Warfare’
Speech by Dr Hugo Slim, Head of Policy and Humanitarian Diplomacy, ICRC at the Raisina Dialogue in New Delhi on 18 January 2018.
The ICRC is delighted to have this opportunity to address the Raisina Dialogue on the pressing and important subject of lethal autonomous weapons systems (LAWS) or autonomous robotic weapons.
We are especially grateful to the Indian Government for its global leadership on this issue, and we deeply value the wisdom and inclusive approach of Ambassador Amandeep Gill in his role as Chair of the Group of Governmental Experts advising the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW).
There is a real urgency in the challenge of autonomous weapons, and the ICRC is convinced that States need to agree clear and lawful limits to their development and use as soon as possible. India’s expertise and diplomatic influence will be essential to making this happen.
The ICRC has long been involved in advising States on weapons in our role as the Guardian of IHL and from our perspective as a neutral humanitarian organisation.
Our concern in the case of these new weapons is no different. As always, we want to ensure that any new weapons are compatible with International Humanitarian Law (IHL) and avoid adverse humanitarian consequences.
This evening, I will set the scene and then concentrate on three aspects of the challenge of autonomous weapons: legality, operational principles, and ethics.
Autonomous Weapons Today
The ICRC is aware of 90 States that already have military drones. At least 28 States have armed drones, and at least 9 States have used them. Improvised commercial drones have also been weaponised and used by non-State Armed Groups (NSAGs).
The diversification, proliferation and increasing autonomy of these weapons is happening fast. This rapid escalation is taking place alongside impressive new progress in “deep thinking” in game-playing machines, which may soon represent a step change in machine autonomy.
There are already robotic weapons – like loitering weapons and sentry weapons – which, once activated, can select and attack targets without human intervention.
So the challenge of the lawful application of autonomous weapons is urgent, and inter-governmental discussion at the UN CCW must move fast.
IHL and Autonomous Weapons
International Humanitarian Law is clear that any new weapons must fit with the law. Article 36 of Additional Protocol I of the Geneva Conventions makes clear that all new weapons and methods of warfare must be capable of complying with IHL.
IHL has clear principles and rules with which these weapons must conform – especially the principles of distinction, proportionality and precaution.
Any use of autonomous weapons must ensure distinction between combatants and non-combatants, and between military and civilian objects. Such use must also be proportionate in attack and defence, and must enable sufficient precautions in the way such weapons are operated.
This is essential, for example, to prevent indiscriminate attack or the deliberate targeting of civilians, and to limit the humanitarian consequences of armed conflict.
IHL requires a direct link between the decision and intention of a commander and the outcome of an attack using these weapon systems, and the kinds of judgements required in law – distinction, proportion and precaution – must involve a minimum level of human control.
It is not machines themselves that should be law-abiding. The law is addressed to human beings, not machines. It is humans who are required to respect and apply the law.
The principle of State obligations and human responsibility must remain central in States’ deliberations and conclusions on autonomous weapons.
This means that the crux of the matter is the relationship between humans and machines, and this relationship needs to be one of human control over the critical functions of the machine. Only in this kind of relationship will responsibility be maintained and autonomy be managed.
The critical functions of a machine are its abilities to select and attack targets.
The ICRC urges States to focus on three key operating principles for autonomous weapons to ensure a minimum of human control of these critical functions: predictability, reliability and human supervision.
A weapons system that is wholly or partly unpredictable or unreliable entails a significant risk that IHL will not be respected.

A weapons system must have a level of human supervision under which a person can take responsibility for the machine and has the ability to intervene after its activation.
This principle is indeed reflected in current practice. Most existing weapons systems with autonomous targeting retain human supervision and the ability to deactivate.
The Ethics of Autonomous Weapons
There is considerable public anxiety about the risks of autonomous weapons.
Our conscience is always an important moral prompt and IHL itself requires States to consider the “principles of humanity” and “dictates of the public conscience” in its famous Martens Clause.
This demands that we take these new weapons very seriously as a significant new departure in our human relationship with the bearing of weapons.
Preparing to talk at Raisina, I revisited the Bhagavad Gita – the great Indian spiritual reflection that begins in a moment of war.
Two incidents of autonomy leapt out at me.
The first moment is an autonomous humanitarian pause when Arjuna brings his chariot between the two armies to look upon his enemy. Looking at all these men – fathers, grandfathers, teachers, brothers and sons – “he was overcome by deep compassion”. His bow “fell from his hand and his mind was reeling”.
How can we rely on these new machines to have an autonomous humanitarian pause?
The second incident comes later in the Mahabharata when the battle is at its height, and Bhishma – the great and inviolable warrior – deploys his unique ability to choose the moment of his own death.
This is a moment of deactivation. Making an autonomous choice in the thick of battle, pierced with many arrows, Bhishma chooses to “turn his mind from war” and die.
How can we rely on these new machines to choose the right moment to stop?
Questions of autonomy in ourselves and in the machines we create are profound.
The ICRC is under no illusions. Humans have had control of their weapons for thousands of years and have often been brutal with them – violating IHL as often as respecting it. Machines may even make us humans better at respecting IHL in some cases.
But, as Arjuna and Bhishma show us, our autonomy is precious. We should use it as the basis of our decisions. We should not give it away too easily.
The ICRC urges States to prioritise the principle of human control in the use of these new weapons and to move fast to consolidate this as High Contracting Parties to the CCW.