Autonomous Weapons Systems: When is the right time to regulate?

Those wishing to control the spread and use of autonomous weapons systems generally favour pre-emptive regulation, either in the form of legally binding restrictions, or an outright prohibition. This blog reaffirms the value of pre-emption, but adds an important qualifier. It argues that the best opportunity to secure meaningful control over autonomous weapons is likely to be at the moment of ‘viability’, when a battle-ready version of the technology is anticipated to be imminent. Imminence will jolt this issue from the abstract into the concrete, while still leaving a window – albeit fleeting – within which to act before the threshold of battlefield use is crossed. In order to clarify the importance of viability, this blog draws upon the example of the 1995 prohibition on blinding laser weapons.

The state of the debate on autonomous weapons

State and non-State actors travelled to Geneva last month to attend the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) meeting. Discussion centred on the challenge posed by, and the potential international response to, autonomous weapons: systems that ‘once activated, can select and engage targets without further intervention by a human operator’.[1]

For a number of States and campaigners, this latest meeting proved frustrating, failing to advance the debate on autonomous weapons in any meaningful way. This was despite growing calls during the meeting itself for a bolder approach.

The European Union delegate warned that technological advances in autonomous warfare had the potential to ‘outpace our ability to uphold international law’. Brazil spoke of a ‘narrow historical window of opportunity’ within which to address the challenges posed by this technology. Chile encouraged those present to ‘act today. Now! In order to set limits [on autonomy]’.[2]

As these statements clarify, those concerned with the moral and legal implications of this technology believe not only that regulation is necessary, but that it should be negotiated and imposed before the weaponry itself is employed in battle. The question, though, is whether they are correct. Even if we concede that this technology must be restricted, is it really necessary to deal with the matter pre-emptively? And if so, is early regulation of this type actually achievable?

The question of timing

In his 1980 book, The Social Control of Technology, David Collingridge explored the difficulty in restraining technological innovation. There exists, he argued, a double-bind problem:

[A]ttempting to control a technology is difficult…because during its early stages, when it can be controlled, not enough can be known about its harmful social consequences to warrant controlling its development; but by the time these consequences are apparent, control has become costly and slow.[3]

Though not invoked directly by participants, the Collingridge Dilemma cast a long shadow over the most recent GGE meeting on autonomous weapons systems. Campaigners believe, with good reason, that meaningful regulation will be harder to achieve, perhaps impossible, if autonomous weapons systems become entrenched. Far better, they argue, to address the issue pre-emptively, before major powers invest even more time, expertise, and financial resources in developing the technology.

There has been decidedly less engagement, however, with the second aspect of the Collingridge Dilemma: the information problem. Extensive work has already been done to highlight the potential risks of autonomous weapons. Despite this, we are still some distance from a definitive assessment of the actual danger this technology poses to civilians and combatants.

Some frame autonomous warfare as a profound and likely inherent challenge to the moral and legal standards of war. At the more extreme fringes are those who warn of a worst-of-all future, a Terminator-like dystopia, in which human judgement, and humanity itself, is extirpated from war.

At the opposite end of this spectrum are those who argue in favour of this technology. The use of autonomous weapons, they suggest, may actually enhance civilian protection on the battlefield, both by increasing the overall precision of attacks, and by mitigating the more obvious human frailties in battle, including a predilection for violent excess.

Compounding this divide is ongoing ambiguity regarding the nature of autonomy itself, and specifically, a lack of consensus over the degree of autonomy necessary to render a weapon system problematic.

How do we navigate this impasse? Do we regulate autonomous warfare pre-emptively and risk creating a guiding framework not fit for purpose if the technology evolves in an unanticipated direction?[4] Or do we wait until military first-movers deploy and benefit from autonomous weapons systems, the consequence of which could very likely be a global arms race?

The argument here is that for all its drawbacks, early regulation – regulation in advance of the battlefield use of autonomous weapons – still offers the best opportunity to control, and if necessary, prohibit this technology. In order to have the highest probability of success, however, it has to be the right kind of early.

The viability window of autonomous weapons

A prohibition on autonomous weapons is most likely to achieve the necessary buy-in from States once the manufacture and use of the technology is perceived to be imminent. During this ‘viability window’, the spectre of a proliferation cascade – a rapid increase in the demand for and acquisition of autonomous weapons – becomes an exploitable resource, one that campaigners can draw upon to generate the urgency necessary to secure support for pre-emptive regulation. Such regulation, even if it failed to secure the support of the United States and Russia, would go a long way toward stigmatising the use of lethal autonomy in war.

In order to recognise the potential of battlefield viability to serve as a catalyst for regulatory change, we need look no further than the 1995 agreement to prohibit blinding laser weapons, one of the only examples of a successful pre-emptive weapons ban in the history of arms control.

For almost a decade leading up to the agreement, there had been efforts to ban the technology on account of its potential tension with the rules against unnecessary suffering or superfluous injury in war. These efforts were unsuccessful, however, with States rejecting proposed regulation on the basis that blinding lasers belonged to the realm of ‘science fiction’.[5]

By the early to mid-1990s, the situation had changed, with anti-personnel blinding lasers advancing to the point at which they were being considered for sale. For understandable reasons, this development troubled those seeking a pre-emptive ban. Rather than signalling the campaign’s demise, however, battlefield viability seems to have aided regulatory efforts, ‘eliminating a certain indifference’ on the part of many States that had hitherto concluded that the challenge, if it emerged at all, lay in the distant future.[6]

The campaign to prohibit blinding laser weapons, it must be recognised, is an imperfect analogy to current efforts to regulate autonomous warfare. Legitimate questions have been raised as to whether autonomous violence can properly distinguish between lawful and unlawful targets. The suggestion that this same violence constitutes unnecessary suffering is a tougher sell. There was a viscerality to permanent blindness that helped set the weapons that dispensed it apart. In contrast, there has been no suggestion, as yet at least, that autonomous weapons will significantly differ from their human-operated counterparts in regard to the type of physical harm they actually dispense.

The example of blinding lasers can enrich the ongoing debate on autonomous warfare. But crucially, it can do so not through ill-fitting technical comparisons, but rather by highlighting the importance of the viability window to the regulatory process.

Where to go from here?

The GGE will next meet in 2020 to consider a normative and operational framework for dealing with the issue of autonomous warfare. In the meantime, campaigners and technical experts must continue to work together to develop a more granular definition of autonomy, one that better identifies which specific technologies are incompatible with the rules of war. A greater understanding of what constitutes problematic autonomy will be the surest guide for future action, not only in terms of what to regulate but also when.

Collaboration between ethical, legal, and technical experts should be expanded to encompass the temporal dimensions of this issue, and specifically, the question of viability. What is the likely technical trajectory of autonomous weapons from this point onward?[7] Will support for a pre-emptive ban on autonomous weapons systems mirror the case of blinding lasers, and be at its most intense when problematic technologies are considered for sale? Or alternatively, will anticipation of battlefield use be the necessary catalyst? When are both of these thresholds likely to be reached?

Definitive answers to these questions will almost certainly prove elusive. But even by clarifying them somewhat we can reduce the risk that autonomous weapons will arrive at, and cross, the viability threshold before regulators can intervene.

The argument here is not that battlefield viability is a regulatory silver bullet, guaranteed to forestall the emergence of a problematic weapon system. The focus, rather, is on opportunity. Those wishing to restrict or prohibit autonomous warfare will likely have their best opportunity to do so when a battle-ready version of the technology is anticipated to be imminent.

Imminence will clarify the urgency of the issue and the consequences of inaction. It will galvanise the will of those already committed to a prohibition and motivate uncommitted States, some at least, to shift to a more actively supportive stance.

It is true that some States will likely maintain their opposition to the regulation of autonomous warfare, however much urgency is generated around the issue. In the end, the military benefits of this technology may prove great enough to overwhelm our capacity to prevent its proliferation. A ban on autonomous weapons may also fall victim to the ongoing erosion of the international arms control consensus, reinforced most recently by the Trump administration’s termination of the INF Treaty.

Like so many other aspects of war, the future of this issue is unclear. What is clear, though, is that when regulating military weapons, timing matters. How much it ultimately matters in the specific case of autonomous warfare, time will tell.

***

Footnotes

[1] The definition of ‘autonomy’ in weapons systems remains highly contested.
[2] Audio recordings of the August 20/21 session are available at https://conf.unog.ch/digitalrecordings/index.html?guid=public/C998D28F-ADCE-46DA-9303-FE47104B848E&position=40.
[3] David Collingridge, The Social Control of Technology (London: Pinter, 1980), p. 19.
[4] This framing would likely receive pushback from those advocating ‘human control’-based regulation; the very advantage of that framework is its technological agnosticism.
[5] Louise Doswald-Beck, “New Protocol on Blinding Laser Weapons,” International Review of the Red Cross 36, no. 312 (1996), p. 273.
[6] Doswald-Beck, “Blinding Laser Weapons,” p. 284.
[7] SIPRI’s ‘Mapping Autonomy’ report offers a valuable overview of weapons systems currently under development. See https://www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf

***

DISCLAIMER: Posts and discussion on the Humanitarian Law & Policy blog may not be interpreted as positioning the ICRC in any way, nor does the blog’s content amount to formal policy or doctrine, unless specifically indicated.
