‘Back to basics’ with a digital twist: humanitarian principles and dilemmas in the digital age

The digital transformation has been a key vector of progress for the humanitarian sector. It is also a source of additional pressure on principled humanitarian action, triggering dilemmas and risks that tend to be understated or overlooked.

In this post, ICRC Senior Policy Adviser Pierrick Devidal reflects on some of the challenges and opportunities that digitalization creates for humanitarian organizations’ ability to operate in line with the fundamental principles of humanity, impartiality, neutrality and independence.

The digitalization of humanitarian action has made aid faster and more efficient. Telemedicine brings advanced medical care to war-wounded victims in remote areas. Digital cash transfers enable faster economic assistance and increased autonomy for those who receive it. Facial recognition helps separated families find their loved ones. Social media enhance engagement with populations in need and facilitate the provision of information as aid. Big Data helps improve early warning systems and humanitarian needs assessments. And the list goes on. Digital technologies are exceptional assets for humanitarians striving to alleviate the suffering of populations affected by conflict and disasters.

That’s the bright side.

The digital transformation is also making aid less human and more opaque. The social media that enable marginalized communities to participate in the global debate are also fueling misinformation, disinformation and hate speech and feeding polarization that tears societies apart. Data flows generated by humanitarian services can be repurposed to surveil people and groups, exposing them to further risks of targeting or persecution. And that list goes on, too. The technologies that have empowered humanitarians to do many things faster and on a bigger scale are generating daunting practical and ethical challenges, from digital exclusion to surveillance humanitarianism and algorithmic discrimination.

That’s the dark side.

These digital paradoxes are here to stay. So in the wake of the digital hope and hype, humanitarians need to make increased efforts to bridge the digital divide and to avoid a potential digital hangover. To address these digital dilemmas, it is important to remember that the fundamental principles that underpin humanitarian action – humanity, impartiality, neutrality and independence – have been critical tools for confronting challenges across time and space. They can and should continue to serve that purpose in the ‘digital age’, if humanitarians make a conscious effort to keep them at the centre of their strategies.

The principles provide a useful framework to avoid the pitfalls of the binary frame that too often characterizes the digital debate: enthusiasm depicting digital tools as a panacea for the challenges of the future vs. skepticism portraying them as existential threats. As is often the case, the truth lies somewhere in between, and the fundamental principles can help navigate that middle ground and handle some of the difficult questions humanitarians face. Here are a few examples.

Eroding humanity through automation and datafication?

The principle of humanity compels humanitarians to do as much as they can to reduce the suffering of victims, improve their well-being and ensure respect for their rights and dignity. Digital technologies can help them do that in many ways, and humanitarians have a duty to explore how these tools can help them advance this fundamental objective, while doing no harm – or rather, while minimizing as much as possible the unintended harms they may create.

However, one thing that digital technologies cannot do (yet) is provide the empathy inherent in respect for human dignity. So when, in the course of the digital transformation, we replace a human being with a digital tool, such as an app for medical advice, the display of empathy disappears because the human interaction is minimized. The space for affected populations to talk and be listened to shrinks. Can chatbots really make anyone who went through the traumas of war feel understood (‘for digital empathy, press 1’)? Can the complexity of human suffering and humanitarian settings fit into the binary 0s and 1s of the digital world? And if we reduce empathy, are we being faithful to the principle of humanity?

While mobile apps, chatbots and other digital interfaces can enhance the accessibility of assistance and the autonomy of people in need, it is crucial to ensure that they always support – and do not replace – human interfaces. The management of digital tools requires time and resources, and it is critical to ensure that they do not become a zero-sum game at the cost of human resources, including empathy. It can sometimes be a challenge for contemporary humanitarians to remember that the apparent short-term cost-effectiveness and convenience of digital technologies need to be balanced with the essential human requirements that make humanitarian action what it is. Under the pressure of humanitarian emergencies and the need (and donors’ requests) for better effectiveness in aid delivery, there is always a risk of ‘moving fast and breaking things’.

Productivity – or what is perceived to be productivity according to digital metrics of speed and scale – should not become the overriding humanitarian metric. Human suffering and dignity cannot be digitalized, people cannot be reduced to data points, and empathy cannot be automated. Digital technologies should not jeopardize humanitarians’ ability to understand and attend to the complex human dimensions driving humanitarian needs. And sometimes, they do. It is therefore essential to find effective checks and balances to mitigate this risk. Keeping humanity as the central and overarching objective of humanitarian efforts will help ensure that innovation does not become an end in itself and that vulnerable populations are not reduced to avatars or test users for new forms of digital colonialism. To better manage those risks, the ICRC recently appointed a Special Envoy for Techplomacy and Foresight in a conscious attempt to ensure that our efforts to anticipate the future do not obscure the digital problems of the present.

Is impartiality possible in an algorithmic world?

Impartiality is a vital principle for humanitarian action, which must always be based on an objective assessment of the most urgent needs. However, practitioners know that in conflict-affected contexts, it is often hard to identify and measure those needs – because they can be instrumentalized, hidden or inaccessible, or because we do not have the means to see or understand them.

A few years back, the emergence of Big Data gave hope that needs assessments would be improved by data-based evidence and by systems enhanced with Artificial Intelligence (AI) or Machine Learning (ML). These digital solutions were meant to help reduce inherent biases and blind spots and ensure a more systematic and objective triage and analysis of needs. In parallel, they were promoted as means to improve traceability, accountability to donors and fraud prevention.

Since then, we have learned that Big Data can become Bad Data and replace human biases with algorithmic ones that reproduce the systemic discrimination and inequalities (e.g. race- or gender-based) embedded in the data sets used to train ML systems. We have also seen how, instead of effectively reducing fraud, these systems can turn people in need into presumed fraudsters. To many, despite the ‘wow’ effect that AI tends to generate (e.g. ChatGPT), it remains very ‘A’ but not very ‘I’.

Unlike their human counterparts, algorithmic biases are not easy to identify or understand because AI systems often lack transparency. They also generate mistrust and risks that are more difficult to mitigate, because they operate at a larger scale, are automated and are vulnerable to cyberattacks. Similarly, when humanitarian needs assessments overfocus on data and digital feedback loops, they risk missing some of the essential human aspects of those needs. Will feedback technologies capture body language and tone? Can they capture different accents and languages? Will they capture non-verbal cues and what may be expressed through emotions but left unsaid? And what additional barriers will women, girls, and others on the wrong side of the digital divide encounter in having their needs or feedback assessed via these technologies?

Thus, while data and algorithms can enhance human intelligence and bring useful additional elements to the understanding and measurement of humanitarian needs, they can also jeopardize the ability of humanitarians to abide by the principle of impartiality. They can reduce our understanding and control of the different criteria that come into play in the assessment of needs, threatening the transparency of humanitarian action. They can defeat humanitarians’ efforts to ensure that the experience and specific situations of marginalized and ‘invisible’ categories of populations in need are better considered and accounted for in the design of humanitarian programmes.

Keeping transparency and human intelligence (despite its flaws) at the centre of humanitarian action is key to preserving our ability to respond to needs impartially. We must bear in mind that the prism of an undecipherable mathematical code designed in a computer lab may warp how we understand these needs. Humans should remain in the loop and at the centre of humanitarian decision-making processes so that impartiality is not abandoned to algorithmic ‘black boxes’. The international community has a duty to better regulate the use of AI in support of humanitarian programmes – solutions are technical, but also legal and diplomatic. Conflict settings, like all societies, are fraught with inequality and discrimination. They require tailored regulatory action to ensure that AI helps preserve impartiality, instead of jeopardizing it.

Can humanitarian neutrality survive the splinternet?

In conflict situations, neutrality is often the safest and most effective way to provide impartial humanitarian assistance on all sides of the frontline. The problem with using digital technologies in those polarized settings is that they can affect how the neutrality of the organizations that use them is perceived. Digital technologies are not neutral, because they align with the political values and objectives of those who create and promote them. Increasingly, the digital tools humanitarians choose to carry out their activities are likely to be seen as a political choice.

Partnering with tech companies that engage in security and defence activities can trigger serious perception issues and affect the trust of affected populations. As we can see in the context of the armed conflict between Russia and Ukraine, the companies that develop and sell these technologies are often not neutral. Microsoft and Google are firmly engaged on the side of the Ukrainian government, providing digital support and cyber capabilities. In parallel, they have restricted or stopped providing products and services in Russia, affecting civilian populations’ access to online information or basic cybersecurity tools, in a context of ongoing cyberattacks.

Meanwhile, key humanitarian donors such as the USA and the European Union are resorting to sanctions that specifically target digital services and products from other countries. In response, targeted countries are accelerating their efforts to strengthen their own ‘digital sovereignty’ and ‘national internets’. These developments further fragment the global digital space into ‘splinternets’ and accentuate digital divides. The politicization and polarization that accompany the digital transformation are deepening, leaving little space for a neutral approach to digital technologies.

The knock-on effect is significant for humanitarian organizations and their ability to be perceived as neutral. Most of them rely heavily on commercial solutions from companies (also used by government agencies and parties to conflict) that increasingly engage in politics and conflict-related issues – without necessarily having an alternative. There is growing unease about using these products and services, out of fear that adopting certain types or brands of technology can affect the perception that populations and parties to conflict have of humanitarians’ neutrality – a critical enabler for trust, access and security.

In the face of increasing digital fragmentation and polarization, it is essential to ensure that humanitarian action is not instrumentalized in the name of digital politics or strategic competition. Techplomacy can help reach out to key players of the digital transformation and convince them to carve out a neutral space for conversations around the responsible use of digital technologies in humanitarian settings. The ICRC’s research efforts to assess the possibility of a digital emblem to protect humanitarian organizations in the digital space are an illustration of such efforts, which should be supported by States. Humanitarian data and digital systems must be preserved from political and commercial instrumentalization. There is a need for new policy instruments to ensure that humanitarian neutrality does not get lost in digital translation.

How to preserve independence from increasing digital dependencies?

The principle of independence requires humanitarian organizations to remain detached from political, military, economic or religious powers and from the strategies associated with them. In reality, however, in a globalized world of interconnectedness and interdependencies, operating in humanitarian settings is usually about effectively managing different dependencies – access depends on parties to conflict, funding depends on donors, acceptance depends on populations, etc. – so as to preserve a certain level of operational autonomy. This operational independence is a visible aspect of neutrality, and key to respecting impartiality.

While digitalization can improve efficiency, it can also make humanitarian operations more dependent and vulnerable. Expanding humanitarian actors’ digital perimeter allows more diverse and accessible humanitarian services, but it also creates a larger attack surface and a risk of getting caught in the cyber crossfire. Providing digital services at scale often requires moving data into the cloud, which implies significant financial investments and an increased risk of being locked into proprietary commercial relationships with large and powerful for-profit digital suppliers. Investing in digital solutions sometimes requires disinvesting from analog ones that can be vital (e.g. Very High Frequency radios) in contexts where increasing cybersecurity risks or internet shutdowns can quickly paralyze humanitarian operations that rely too heavily on digital means.

Even if true independence increasingly appears chimeric in a digitally interconnected world, it is important for humanitarian organizations to think twice, and carefully, about the relevance of digital solutions vis-à-vis their ability to maintain operational autonomy and continuity in fragile and fragmented digital environments. Digital technologies come with trade-offs that need careful management. Investing in digital R&D and preparedness has become essential to reduce exposure to risks. With this in mind, the ICRC recently set up a delegation for cyberspace to establish a dedicated institutional capability to safely explore, among other things, whether and how Free and Open-Source Software can be a safe and sustainable alternative to commercial solutions, and how cybersecurity and data protection can effectively mitigate digital risks. This is an important endeavour that can help better manage digital trade-offs and reduce unnecessary digital dependencies.

A principled and digital way forward

There are no perfect solutions to these conundrums. It is important that overcaution does not become an excuse to prevent the development and use of innovative humanitarian solutions, some of which can make a tremendous and positive difference in the lives of people affected by conflict, violence and disasters. It is, however, equally critical that humanitarians proactively manage the fascination and confirmation biases that often characterize our relationship to digital technologies.

The fundamental principles are a useful compass to guide a professional and responsible approach to innovation and digital strategies, centred on protecting the rights and dignity of people, while helping humanitarian organizations preserve the core elements of the humanitarian mantra that are so essential to the delivery of their mission. Humanitarian actors can and should do more to ensure that the promises of the digital transformation deliver positive outcomes for populations affected by conflict and disasters. States, donors and tech companies must support their efforts and respect and protect their commitment to humanity, impartiality, neutrality and independence.
