Efforts to address the ethical challenges posed by autonomous weapon systems are growing. In December 2023, the UN General Assembly passed a resolution requesting the Secretary-General to seek the views of states on addressing, among other issues, the ethical challenges posed by these systems, and in April 2024 Austria is hosting the conference ‘Humanity at the Crossroads: Autonomous Weapon Systems and the Challenge of Regulation’, with the aim of bringing ethical challenges to the fore.
In this post, Alexander Blanchard, Senior Researcher at the Stockholm International Peace Research Institute (SIPRI), argues that while ethics has been a prominent part of international regulatory debate on autonomous weapon systems for years, the promise and potential of ethics for advancing that debate is still far from being realized. He holds that, for progress to happen, new efforts must be coupled with work to clarify the role of ethics for regulation.
Ethics has been prominent in the international debate on the regulation of autonomous weapon systems (AWS). It was ethical concerns that first prompted discussion on the need to regulate AWS and ever since, reference to ethics has been a constant in the policy debate. Yet, after years of discussion, the ethics-based argument remains underdeveloped and is being outpaced by other areas of the regulatory debate, while the role of ethics with respect to legal considerations remains unclear: the promise and potential for ethics to advance the regulation of AWS is yet to be fully realized.
There are three key reasons that our understanding of the ethical challenges and solutions has not progressed: the lack of historical precedent for ethics in arms control discussions; the fact that regulatory debate has been dominated by legal arguments, particularly those grounded in international humanitarian law; and the reality that ethics has been conceived as a set of rules or positions that are sometimes in competition with law.
The success of new multilateral efforts to address the challenges of AWS will depend heavily on whether these issues can first be resolved. In particular, there is a crucial need to clarify the role of ethics in the regulatory debate: because AWS can select and engage targets independently of human intervention, they pose profound questions about the human role in the use of force. These are questions that legal arguments alone cannot answer.
The beginnings of international regulatory efforts: targeted killings, algorithms, and human dignity
The UN has been debating the legal, social, ethical and security implications of AWS for over a decade. The publication of the Report of the UN Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions by Christof Heyns in 2013 drew attention to these issues. Writing in the context of the widespread use of drones for targeted killings, Heyns warned that the use of algorithms to make targeting decisions could have far-reaching consequences. His concerns were primarily legal – such as the issue of compliance with international humanitarian law (IHL) – but not exclusively so. Drawing on work by philosophers and civil society groups, Heyns highlighted that the delegation of life and death decisions to algorithms touched on fundamental moral issues about the act of killing and our relationship to technology. This included the prospect that, even if AWS could be used in a way that met all relevant legal requirements, nevertheless “as a matter of principle [they] should not be granted the power to decide who should live and die.”
Heyns’s report galvanized the international community, and within six months of its publication governments approved the first multilateral meeting on AWS, which took place in 2014 at the UN in Geneva under the framework of the Convention on Certain Conventional Weapons (CCW). Since then, ethics has been a prominent part of the debate on AWS, and many states, international organizations, civil society groups, and prominent figures have raised ethical concerns about AWS. These include the humanitarian impact of AWS on civilians; the dehumanization and violation of the dignity of those targeted; the impact on operators’ capacity for exercising moral judgement; and the risks of algorithmic bias.
Ethics in the international policy debate: an uneasy home
Despite the prominence of ethics in multilateral debate, there has been notably little progress on understanding and addressing the ethical challenges of AWS, especially when compared to progress made on legal aspects. Indeed, ethics in the regulatory debate is currently in a state of limbo: there is widespread recognition that ethics has a role to play in the debate, but deep uncertainty about what that role is. This limbo is well illustrated by the guiding principles adopted in 2019 by the CCW Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (GGE): the relevance of ethics is affirmed in the preamble, but absent from the principles themselves. There are three main reasons for this limbo.
The first reason is the novelty of ethics as a factor in arms control discussions. To be sure, ethical considerations have often preceded and motivated the development of new international legal constraints on means and methods of warfare. But after progressing to formal multilateral fora, these types of concerns are typically subsumed either by core IHL concerns – such as whether a weapon is inherently indiscriminate or causes unnecessary suffering – or by security concerns, such as proliferation risk. What is specific to the international debate on AWS is that ethics has remained a distinct and salient feature of the debate after its progression to multilateral fora. There appears to be little historical precedent for this.
This points to the second reason: despite the recognized importance of ethics, the regulatory debate is dominated by legal-based arguments. Since 2017, the centre of gravity for debate on AWS has been the CCW GGE at the UN in Geneva. Because the CCW is an instrument of IHL, the GGE has not been a particularly sympathetic forum for deepening ethics-based argument about AWS. Indeed, the mandate of the GGE directs it to draw on legal, military, and technological expertise, but not on ethics. To find purchase in debate under the CCW framework, ethics was often hitched to concepts with currency in legal argument, meaning in turn that the health of debate about ethics became dependent on the continued popularity of such concepts.
One example of this is ‘meaningful human control’ (MHC). At the 2014 CCW informal meeting of experts, MHC emerged as a point of common ground for addressing ethical and legal issues associated with AWS, and with the 2016 establishment of the GGE, MHC came to act as a proxy for discussions on ethics. However, once a ‘lightning conductor’ for debate on AWS, MHC has lost much of its salience. If discussion on ethics at the GGE is to progress, it may need a new concept to carry it. But with debate at the GGE progressing towards substantive discussion on risk mitigation measures, safeguards, and the elements of a two-tier regulatory approach, finding a new concept for ethics may become more challenging.
Third is the way ethics has been conceived in the debate. Broadly, ethics is the study of moral phenomena: it investigates what people ought or ought not to do, and what justifications can be given for such claims. Ethical reflection is particularly useful when novel technologies create new conditions for action, when the morally right thing to do may not be immediately clear. On this basis, ethics can help formulate morally good solutions, particularly as a source of reasoning about law: it can motivate us to create new regulations, correct or remove deficient regulations, or provide principles for action over and above existing regulations.
However, in the regulatory debate on AWS, this view of ethics as an open-ended enquiry is sometimes eclipsed by a view of ethics as a set of fixed rules or positions. This reflects the character of multilateral debate: time for deliberation is limited. But this view creates the risk that ethics appears as a parallel body of directives or rules in competition with law, rather than as a source of moral reflection on law and the content of regulation. This can frequently leave parties to the debate treating ethics as an opaque source of additional requirements or obligations to be wary of, rather than as a useful aid to identifying whether legal concepts require elaboration and concretization.
Taking the road less travelled
The view of ethics as separate from law can obscure a more fundamental point about the relationship between the two. Legal concepts don’t interpret and apply themselves; they are interpreted and applied by agents according to what the ancient Greeks called an ‘ethos’ – a set of background beliefs and ideas belonging to a given institutional and historical setting. An ethos often already implies an ethical stance: this might be about the direction of regulatory efforts, or more broadly about the relationship humans ought to have to technology. For instance, and as mentioned above, regulatory efforts on AWS are increasingly conceived (at least at the CCW GGE) as an exercise in risk management. That is an approach that entails a specific normative stance towards a technology: opting in to using it whilst assuming that it can (and should) be improved. The value of ethics here can be to make what is implicit explicit, to clarify whether regulatory efforts are steered towards goals that have been deliberated about and agreed upon collectively.
Efforts to address the challenges of AWS are now expanding into other multilateral fora, providing an opportunity to deepen the ethical argument around AWS in policy debate. A significant milestone was reached in December 2023, when the UN General Assembly passed a resolution requesting the Secretary-General to seek views of states for addressing, amongst other issues, the ethical challenges posed by AWS. But progress will not come about from a change of forum alone, and work should begin to clarify not only the arguments and rationale underpinning ethics-based interventions but also a shared understanding of the role ethics should have in the debate. AWS reconfigure traditional understandings of how, where, when, and by whom decisions to use force are made, and ethical reflection is needed to help formulate good solutions to this reconfiguration. We need ethics because legal compliance alone is insufficient for steering society in the right direction.
See also:
- Ingvild Bode, Falling under the radar: the problem of algorithmic bias and military applications of AI, March 14, 2024
- Ingvild Bode, Tom F.A. Watts, Loitering munitions: flagging an urgent need for legally binding rules for autonomy in weapon systems, June 29, 2023
- Vincent Boulanin, Marta Bo, Three lessons on the regulation of autonomous weapons systems to ensure accountability for violations of IHL, March 2, 2023
- Laura Bruun, Autonomous weapon systems: what the law says – and does not say – about the human role in the use of force, November 11, 2021