The future is now: artificial intelligence and anticipatory humanitarian action

As the world faces simultaneous disasters and burgeoning risks, humanitarian actors need to develop more efficient ways of delivering aid to vulnerable populations. One current trend involves the use of Artificial Intelligence (AI) and Machine Learning (ML) to process large amounts of data quickly to inform – and even autonomously undertake – decision-making processes. While these processes have the potential to facilitate faster and better anticipatory humanitarian action, they can pose unforeseen challenges if left unregulated and unchecked.

In this post, Christopher Chen, Associate Research Fellow at the Centre for Non-Traditional Security Studies, explores the promise and perils of using artificial intelligence and machine learning in the context of anticipatory humanitarian action. Building on insights gleaned from a data governance and protection workshop co-hosted by the S. Rajaratnam School of International Studies and the ICRC, he highlights some of the implications of the use of new technologies in humanitarian action and how the principle of ‘do no harm’ can be applied in a digital age.

Speaking on the need for a more proactive humanitarian system, Sir Mark Lowcock, Under-Secretary-General for Humanitarian Affairs and Emergency Relief Coordinator, stated in 2019 that ‘[t]he best way to [address gaps] is to change our current system from one that reacts, to one that anticipates’.

Heeding this call, aid workers and organizations have been trying to use new and emerging technologies to facilitate earlier, faster, and potentially more effective humanitarian action. Part of this technological shift involves the use of artificial intelligence and machine learning to improve the efficiency of humanitarian responses and field operations.

This promise, however, comes with potential pitfalls. Inadequate data governance and data protection might cause unintended harm to vulnerable populations, while poorly implemented artificial intelligence and machine learning can exacerbate existing biases and inequalities. These risks and challenges must therefore be identified and mitigated to ensure that the use of new technology does no harm and protects the life and dignity of those it is intended to serve.

Future-proofing the aid sector?

Embracing innovation is part and parcel of future-proofing the humanitarian sector. Future-proofing refers to the process of anticipating future shocks and stresses and developing methods to minimize their adverse effects. AI/ML-based interventions contribute to this by automating tasks and informing decision-making across many aspects of humanitarian work.

The ICRC defines Artificial Intelligence (AI) systems as ‘computer programs that carry out tasks – often associated with human intelligence – that require cognition, planning, reasoning or learning’. It also defines Machine Learning (ML) systems as ‘AI systems that are “trained” on and “learn” from data, which ultimately define the way they function’. An example of this is IBM’s ML system that analyzes drivers of migration and uses the data to forecast cross-border movements and refugee flows. As a UN OCHA report states, these systems can facilitate analysis and interpretation of large and complex humanitarian datasets to improve projections and decision-making in humanitarian settings.

AI may be harnessed to improve workflows and optimize the disbursement of aid. For example, in July 2020, predictive analytics frameworks implemented by the UN and other partner organizations forecasted severe flooding along the Jamuna River in Bangladesh. In response, UN OCHA’s Central Emergency Response Fund (CERF) allocated and released funding – roughly $5.2 million – to several humanitarian agencies, which enabled them to provide humanitarian assistance to vulnerable populations before flooding reached critical levels. This was CERF’s fastest-ever disbursement of funds in a crisis.

This illustrates how developments in AI/ML and predictive analytics make it possible to anticipate when disasters are about to strike, facilitating a more proactive, anticipatory approach to humanitarian action and enabling humanitarians to deliver more timely assistance to affected populations.
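To illustrate the logic behind such forecast-based releases, here is a minimal sketch of a threshold trigger in Python. All names, thresholds, probabilities and amounts are hypothetical and greatly simplified; real anticipatory action frameworks combine multiple forecast sources, pre-agreed triggers and pre-positioned funding arrangements.

```python
# Minimal sketch of a forecast-based anticipatory action trigger.
# All thresholds, probabilities and amounts are hypothetical.
from dataclasses import dataclass

@dataclass
class FloodForecast:
    river: str
    lead_time_days: int       # how far ahead the forecast looks
    probability: float        # forecast probability of exceeding the danger level
    expected_level_m: float   # forecast peak water level in metres

def should_trigger(forecast: FloodForecast,
                   danger_level_m: float = 19.5,
                   min_probability: float = 0.5) -> bool:
    """Return True if the pre-agreed trigger conditions are met."""
    return (forecast.expected_level_m >= danger_level_m
            and forecast.probability >= min_probability)

def release_funds(forecast: FloodForecast, allocation_usd: float) -> None:
    # In a real system this would notify partner agencies and start
    # pre-agreed early actions (cash transfers, evacuation support, etc.).
    print(f"Trigger met for {forecast.river}: releasing "
          f"${allocation_usd:,.0f} {forecast.lead_time_days} days ahead of the peak.")

forecast = FloodForecast(river="Jamuna", lead_time_days=10,
                         probability=0.6, expected_level_m=20.1)
if should_trigger(forecast):
    release_funds(forecast, allocation_usd=5_200_000)
```

The key design point is that the trigger conditions are agreed in advance, so that funding can be released automatically and quickly once a credible forecast crosses the threshold, rather than after damage has already occurred.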

Caveat emptor: risks to vulnerable populations

While aid agencies might benefit from the use of AI/ML, these technologies can inadvertently bring risks to vulnerable populations. Without the right safeguards, AI/ML could exacerbate inequalities and further marginalize vulnerable groups.

Consider a scenario in which AI/ML processes are used to identify suitable target populations for a particular humanitarian programme. What happens if the algorithm, for some reason, decides that certain people – who would usually be entitled to participate in the programme – should be excluded? Learned bias in AI/ML can lead to further discrimination against vulnerable populations. This is far from hypothetical: over the past few years, there have been many high-profile cases of ML systems demonstrating racial and gender biases. A 2019 study of 189 face recognition algorithms found that they are least accurate on women of colour. It is easy to see how this can be problematic in a humanitarian setting, where vulnerable populations might be subjected to such biases and their corresponding effects, such as discriminatory disbursement of aid and false positives in the identification of missing people.
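As a simple illustration of how such disparities can be surfaced, the sketch below compares a model's accuracy across demographic groups. It is a toy example with made-up data, not the methodology of the 2019 study; real audits cover many algorithms, metrics (such as false match rates) and intersectional groups.

```python
# Toy illustration of a per-group accuracy audit; all data are made up.
from collections import defaultdict

# (group, prediction_was_correct) pairs for a hypothetical evaluation set
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", False),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, correct in results:
    counts[group][0] += int(correct)
    counts[group][1] += 1

for group, (correct, total) in counts.items():
    print(f"{group}: accuracy {correct / total:.0%} ({correct}/{total})")

# A large gap between groups signals that the system may disadvantage
# some populations, e.g. through wrongly denied aid or false matches.
```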

AI also needs substantial amounts of quality data to be trained effectively. However, humanitarians often work in places where accurate data on populations are inaccessible. Political contexts might also prevent the collection of sensitive datasets. This makes the training of AI/ML processes particularly difficult. Without access to quality data sets, AI/ML systems cannot be trained and used in a way that avoids amplifying the risks identified above.

These challenges underscore the importance of maintaining a healthy scepticism when engaging with AI/ML processes. The reality is that it is difficult to code for values; fairness and justice cannot be automated.

Ghosts in the machine

We live in an age of rapid technological progress. While technologies like AI/ML can make humanitarian work better and more effective, they might also unexpectedly evolve past their originally intended purposes, thereby threatening the very populations that humanitarians are trying to protect. As such, humanitarians need to realistically assess the capabilities and limitations of AI/ML.

The implementation of AI/ML projects in humanitarian settings should be carried out equitably; partnerships and programmes should not be decided solely by stakeholders from Geneva and New York. This requires active engagement with practitioners on the ground and recipients of aid to identify capacity and systemic gaps. Humanitarian organizations using AI/ML in their work must build feedback loops into their processes for monitoring and evaluation purposes.

At a recent workshop co-hosted by the S. Rajaratnam School of International Studies and the ICRC, discussions centred on the opportunities and risks of using new technologies in humanitarian action, as well as the importance of data governance protocols. In the Artificial Intelligence and Machine Learning breakout group, participants acknowledged the need for humanitarians to adopt a human-centric approach when using AI and ML.

Especially in situations involving vulnerable and at-risk populations, human control and judgement in applications of AI/ML should be prioritized. AI and ML systems should only be used to augment analytical processes; they should not replace the human element in decision-making. This helps to preserve a level of ethical accountability and ensures that digital transformation in the sector takes place in a fair and ethical manner.
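One way to operationalize this principle is to treat model outputs as recommendations that a named human reviewer must confirm or override before any action is taken. The sketch below is a hypothetical, simplified illustration of such a human-in-the-loop step, not a prescribed design; all names and fields are invented for illustration.

```python
# Hypothetical sketch of a human-in-the-loop review step:
# the model only recommends; a named human reviewer decides and is recorded.
from dataclasses import dataclass

@dataclass
class Recommendation:
    household_id: str
    eligible: bool      # the model's suggestion
    confidence: float   # the model's confidence in that suggestion

@dataclass
class Decision:
    household_id: str
    include: bool
    reviewer: str
    reason: str

def review(rec: Recommendation, reviewer: str) -> Decision:
    """A human confirms or overrides every model recommendation."""
    answer = input(f"Include household {rec.household_id}? "
                   f"Model suggests {'yes' if rec.eligible else 'no'} "
                   f"(confidence {rec.confidence:.0%}) [y/n]: ")
    include = answer.strip().lower().startswith("y")
    reason = ("confirmed model suggestion" if include == rec.eligible
              else "overridden by reviewer")
    return Decision(rec.household_id, include, reviewer, reason)

# Example: nothing is acted on until a human has signed off,
# and the decision record keeps the reviewer accountable.
decision = review(Recommendation("HH-0042", eligible=False, confidence=0.71),
                  reviewer="field officer A")
print(decision)
```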

The most important takeaway is that, while guidelines or legislative frameworks are important elements of an ethical and safe AI/ML ecosystem, they need to be underpinned by a human-centred approach.
