Artificial intelligence (AI) has received a great deal of attention in recent months with the growth of chatbots such as ChatGPT. But less attention has been paid to how AI could be used in conflicts and to its impact on civilians. When ICRC director general Robert Mardini visited London earlier this month, we asked him for his thoughts on AI in the military domain and the humanitarian sector.
Article | 24 July 2023 | United Kingdom

What is your message to states and other parties looking to use AI and new technology for military purposes?

Military technology and AI are being developed at incredible speed, in some cases faster than the international community can agree effective governing frameworks, which is concerning. We always want to ensure that advances in technology assist humanitarian action, and benefit civilians and other protected persons in conflict zones, rather than increase the risks they face.

From our perspective, conversations around military uses of AI need to incorporate the principles of international humanitarian law (IHL), as set out in the Geneva Conventions and their Additional Protocols. The international community should take a human-centred approach to how AI is used in places affected by conflict. There need to be guard rails in place to help strengthen existing protections and mitigate remaining risks.

What are the risks around AI that the ICRC is concerned about?

I have no doubt that AI can be a positive force for humanitarian action and the people we seek to help. But there are also many risks that need to be considered.

For example, from our perspective of working in conflict zones, the use of AI in military decision-making, or in autonomous weapon systems, could pose significant risks to both civilians and combatants. We cannot accept the idea that life-and-death decisions are delegated to machines or algorithms. Human control and judgment must be integral to any decision affecting human life or dignity.

When you look more broadly at new technologies, it's fair to say that today's armed conflicts are no longer confined to land, sea and air – they are also being fought in cyberspace. Cyberattacks can have significant repercussions on civilians as the digital systems and tools that they rely on are degraded, destroyed, or disrupted as part of military action.

For example, when a hospital falls victim to a cyberattack, it hampers the delivery of medical care to civilians. When water or energy infrastructure is targeted in the cyber sphere, essential services are disrupted – and it is civilians who pay the price. Cyberattacks that disrupt communications networks leave civilians unable to access accurate and up-to-date information, and can result in them losing touch with loved ones at critical moments.

For years now we have been very clear that the rules of war, the Geneva Conventions, apply in cyberspace. We have also been researching the idea of a digital red cross/red crescent emblem, which would make it easier for those conducting cyber operations during armed conflict to identify and spare protected facilities.

How high up the agenda was AI in your meetings?

It was very high on the agenda across the board. It is clearly an important priority for the UK, as highlighted by the Global AI Safety Summit it will be hosting later this year.

We are very fortunate to have a strong, constructive dialogue with the UK. Such dialogue allows us to raise our humanitarian concerns when it comes to issues around AI and new technology in conflict zones.

Our position with all states when discussing AI in military contexts is that they need to take a human-centred approach, and that any use of AI in weapon systems must be approached with great caution.

As the UK gets ready to stage the summit later this year, we look forward to contributing to these critical discussions so that IHL is part of the conversation.

How much of a concern is the increasing involvement of civilians on the digital battlefield?

It raises a number of concerns. Under IHL, parties to a conflict must distinguish between civilians and combatants, and between civilian objects and military objectives. Such distinctions are usually self-evident in physical conflicts. But digital warfare risks blurring this line.

The digitalization of society and of the battlefield means it has never been easier to involve civilians in military cyber and digital activities. Indeed, we are seeing civilians become more involved in conflicts, in some cases directly contributing to military operations through digital means.

The rules of war are very clear: civilians are protected against attack unless they directly participate in hostilities. Whether civilian participation in digital warfare qualifies as direct participation in hostilities is a complex legal question. But if states encourage civilians to engage in such activities, they potentially expose those civilians to the risk of grave harm.

We thus need to uphold the clear line between civilians and the military. States should refrain from encouraging civilians to directly participate in hostilities, or at the very least ensure that civilians are fully aware of the risks they may be exposed to and how they can protect themselves.