Will AI fundamentally alter how wars are initiated, fought and concluded?

In the debate on how artificial intelligence (AI) will impact military strategy and decision-making, a key question is who makes better decisions — humans or machines? Advocates of leveraging AI more fully point to heuristics and human error, arguing that new technologies can reduce civilian suffering through more precise targeting and greater legal compliance. The counterargument is that AI-enabled decision-making can be as bad as, if not worse than, that of humans, and that the scope for mistakes creates disproportionate risks. What these debates overlook is that it may not be possible for machines to replicate all dimensions of human decision-making. Moreover, we may not want them to.

In this post, Erica Harper, Head of Research and Policy at the Geneva Academy of International Humanitarian Law and Human Rights, sets out the possible implications of AI-enabled military decision-making as this relates to the initiation of war, the waging of conflict, and peacebuilding. She highlights that while such use of AI may create positive externalities — including in terms of prevention and harm mitigation — the risks are profound. These include the potential for a new era of opportunistic warfare, a mainstreaming of violence desensitization, and missed opportunities for peace. Such potential needs to be assessed against the current fragility of the multilateral system, and factored into AI policy-making at the regional and international levels.

New technologies are transforming the nature of modern warfare. A topic of particular interest is how AI might be leveraged in the development of military strategy and combat decision-making. This discussion has largely focused on weapons systems and their attendant risks and challenges. A less explored, albeit more complicated, area concerns whether the integration of AI into military decision-making will modify or eliminate the roles played by humans. AI, for example, may offer pathways for overcoming cognitive limitations and mitigating human fallibility. There is no doubt that positive outcomes may accrue, including reduced civilian harm and better compliance with international humanitarian law (IHL). However, such possibilities also create new risks and challenges. Chiefly, insofar as armed conflict is an intrinsically human phenomenon, will AI fundamentally alter how wars are initiated, fought and concluded?

Altering the calculus of warfare

A first scenario to consider is whether AI-enabled decision-support systems might impact a state’s decision to initiate military action against another country or a domestic group. This ‘war calculus’ is generally understood to take into account both legal norms and political considerations. One of the most important principles of international law is the prohibition on the threat or use of force, codified in Article 2(4) of the UN Charter, with Articles 51 and 42 providing exceptions in situations of self-defense and of maintaining or restoring international peace and security respectively. Non-legal factors also influence decision-making, including the potential for reputational damage, public sentiment towards war, the possible reactions of allies and non-allies, and projected losses.

Might AI systems change the way this calculus is carried out and actioned? For instance, could AI systems be leveraged by potential belligerents to assess the likelihood of a successful military outcome, or to prescribe a military strategy that would facilitate a successful outcome?

It could be argued that this might reduce the likelihood of states entering ‘un-winnable’ wars, or those initiated to save face or seek revenge. Iraq’s invasion of Kuwait in 1990, for example, was driven — at least in part — by Kuwait’s refusal to forgive the USD 14 billion debt Iraq had accrued during the Iran-Iraq war, and by Saddam Hussein’s long-standing grievance over the status of the Warbah and Bubiyan Islands (which Iraq believed had been improperly allocated to Kuwait during the United Kingdom’s protectorate from 1899 to 1961). If AI-enabled decision-support systems can deter such conflicts, then the outcomes for civilian populations are undoubtedly positive. However, far more dystopian scenarios can be envisaged. For states with expansionist or imperial aspirations, such use of AI would be a powerful tool. At its worst, AI might usher in a new era of war in which computer systems are used to identify where victories are possible, encouraging encroachments on territory, the wiping out of opposition groups and pre-emptive warfare.

AI and the waging of warfare

A second question is whether AI systems will impact how military decisions are made in combat situations. To the extent that such systems work to distance soldiers from the battlefield, in essence making the decision to take lethal action easier, the likely answer is yes. Indeed, warfare requires that soldiers overcome an evolutionary aversion to killing fellow humans. First documented in WWII, this tendency is so strong that militaries take proactive steps to desensitize soldiers, generally through simulation training, but also through dehumanization narratives and by inflating the threat posed to the in-group. Several advances in drone technology and autonomous weapon systems can be seen as contributing to these aims. Principally, by physically distancing soldiers from the battlefield they eliminate the worst sensory impacts of killing, such as scents and sounds. Features such as pixelated images of targets and smart/soft-touch triggers are further examples. While less studied, to the extent that AI removes the act of decision-making from the soldier altogether, the impacts may be even more acute.

This is not to say that all outcomes would be negative. Advocates of employing AI in military decision-making often point to the fact that, compared with computers, humans can hold and process only very limited amounts of information. They foresee innovations such as AI-enabled digital advisors that can store, analyse and interpret information in volumes and at speeds that would outstrip the most advanced and experienced teams of military experts. From a strategic viewpoint, any acceleration in decision-making, action and response offers a clear operational advantage. Benefits might also extend to legal compliance. Systems able to combine granular battlefield insight with the totality of IHL rules, jurisprudence and scholarship would allow for more precise proportionality assessments and targeting decisions.

Moving from war to peace – some human fallibility might be worth it

A final scenario to consider concerns the role of human versus AI-enabled decision support in the conclusion of wars. While it may appear counterintuitive, the vast majority of conflicts end not through military victory or defeat, but through a negotiated solution. If we unpack these processes, the centrality of human judgement, influence and emotion is inimitable. Examples include the roles played by Nelson Mandela (South Africa), Carlos Filipe Ximenes Belo and José Ramos-Horta (Timor-Leste), and the individuals making up the National Dialogue Quartet (Tunisia). AI systems capable of decision support equivalent to that of these leaders are likely to be a long way off, if possible at all.

Critical junctures and unanticipated events can also influence the conclusion of war. The 30-year conflict between Indonesia and the Free Aceh Movement (GAM) is a case in point. By May 2004, following a year of martial law, the Indonesian armed forces had gained a strong military advantage and many analysts believed they were poised to claim victory. Then, on 26 December 2004, a magnitude 9.1 earthquake struck off the coast of Sumatra, triggering a tsunami that left a quarter of the Acehnese population dead. Rather than using the disaster to consolidate its position, the Indonesian government bucked expectations and opened Aceh to 195 international humanitarian agencies, permitted support from 16 foreign militaries and reassigned half of its 40,000-strong defence force to humanitarian duties. GAM responded by declaring an immediate cessation of hostilities, and within ten months a peace agreement granting Aceh special autonomous status had been signed with Indonesia.

While crediting these events to the tsunami would be an oversimplification, the enormous loss of life, the generosity of member states and the belligerents’ shared religious interpretation of the disaster created powerful incentives that all stakeholders leveraged. It is hard to envisage how an AI decision-support system could outperform humans in such circumstances. Moreover, the scenario underscores that warfare is a very human process, in which algorithmic logic will not necessarily have a role. If AI-enabled military decision-making became the norm, it is possible that opportunities for peace could be lost.

Some final thoughts: why risks and opportunities need to be contextualized

This post has considered the use of AI in military decision-making and its potential impacts on how wars are initiated, fought and resolved, especially if this implies a reduced role for human decision-making. Whether the risks materialize (ushering in a more dangerous and less predictable era of military engagement), or the opportunities are capitalized upon (enabling more clinical approaches that limit civilian harm), largely depends on how militaries envision success. If success is understood as avoiding war, and as better compliance with IHL when wars do take place, then the integration of AI into military decision-making has the potential to produce positive outcomes. However, if success merely denotes faster military decision-making, the potential for risk is heightened.

It is important not to view these scenarios in binary terms, nor as mutually exclusive. Success will mean different things for different states at different times. This said, engaging in discussion about what future trends might look like is imperative. Indeed, if the fast pace of technological advancement has taught us anything, it is that anticipatory planning is critical to crafting the future we want. These discussions should place the current state of the multilateral system at the fore. As noted by UN Secretary-General António Guterres in his 2023 UN General Assembly address, the world appears to be transitioning towards a multipolar order marked by a rise in geopolitical tensions, authoritarianism and impunity. Many scholars anticipate that this will bring about changes within existing accountability structures and a “thinner web” of international law. If correct, it follows that the normative value of IHL compliance will increasingly be trumped by the pursuit of mere military effectiveness. This outlook suggests that a precautionary approach to the development and regulation of military AI applications is warranted, and that strengthening multilateralism is key to limiting the risks posed by digital military technologies.
