Drones and distrust in humanitarian aid

Drones are a useful tool for humanitarians. How can we mitigate the distrust that comes with them?

Drones are an increasingly common tool in humanitarian aid, but issues of public perception and trust continue to slow their global rollout during disasters. New technologies that make it easier to tell drones apart in flight can help humanitarians build justified trust in the technology; they will also benefit from more research into why people distrust drones, and into how the data that drones collect is used in the communities they serve.

In this post, drone technology researcher Faine Greenwood describes how the international aid community and private industry can address the long-standing problem of drone distrust with a combination of improved technology, expanded research into public opinion, and a better grasp of the risks that drone technology may present to the public.


Humanitarians are using drones more widely than ever. The benefits are apparent: drones are an inexpensive and relatively easy-to-use way to collect valuable aerial data during disasters, allowing aid workers to expand their view of potentially dangerous situations while reducing physical risk to themselves. While small drones are no substitute for direct human contact (and do not appear to be used that way), aid workers can use them to gather real-time data on complex situations quickly – a force multiplier that allows smaller teams of aid workers to collect more decision-supporting information than was possible even in the recent past.

But as popular as drones now are in the humanitarian world, one theme has consistently complicated their wider adoption: trust. In the aid world, most practitioners operate under the not-unreasonable assumption that drones – even small consumer drones that bear no resemblance to an armed Predator UAS – are likely to frighten and intimidate the people they are attempting to help.

This concern has driven the aid world’s cautious approach to the technology, especially in responses to conflict. The use of humanitarian drones in conflict environments has long been something of a red line, as articulated by Daniel Gilman and Matthew Easton in a 2014 OCHA report. There are good reasons for this: it’s extremely hard for people on the ground to identify a drone or to determine what it is doing. The data that drones collect can, if it falls into the wrong hands, be used to target and harm people, a risk that is heightened in conflict settings. Compounding the problem, humanitarian aid workers, militaries, and other armed groups often fly identical, widely available consumer drones, making it all too easy for a humanitarian drone to be confused with one flown by a non-neutral actor or organization.

In today’s world, natural disasters and political conflict are not always clearly divided. Humanitarian drones are increasingly used in complex environments, such as refugee camps close to border regions, like the Cox’s Bazar camps in Bangladesh, near the border with Myanmar. As drones become an ever more regular feature of humanitarian aid efforts, we will need to work harder than ever – and work smarter – to build and maintain public trust in the technology.

Humanitarians, perhaps better than anyone else, recognize that public perception of what we do and why we do it matters just as much as (if not more than) our actions themselves. The public’s perception of our neutrality could be grievously damaged if a drone that looks a lot like our drones is used to drop a bomb, or to collect data that is then used to target vulnerable people.

The humanitarian world has worked to balance the value of drone technology against the equally important need to protect the privacy and safety of people affected by disaster. In 2014, the UAViators Code of Conduct, drafted by a group of humanitarian practitioners, introduced the first set of best practices for drone use geared specifically towards aid; the document is being revised as of 2021, with a strong focus on community engagement. The ICRC’s Handbook on Data Protection in Humanitarian Action offers valuable guidance on how drone data can be collected safely and ethically. These efforts are commendable and should continue. But there is always more we can do to build public trust in the technology that we use. Here are some ideas.

How to build trust in drone technology

First, new technical developments can help us build trust in drone technology. Around the world, governments are starting to roll out remote identification systems for drones as part of a larger global push towards UTM, or unmanned traffic management. These systems make small drones identifiable in airspace (via digital or analog means), in a way that is similar to how manned aircraft identify themselves. In the near future, they could, at least in theory, give all actors in a given disaster area a more sophisticated means of telling drones apart. Some versions could even allow anyone on the ground to pull up a smartphone application that identifies a drone spotted overhead and gives some basic information about who is flying it and what its purpose is. This could reduce uncertainty during tense or dangerous situations, and make it harder for drones to be confused with one another.
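
To make the idea concrete, here is a minimal sketch in Python of what decoding one of these broadcast remote ID messages might look like. It is loosely modelled on the ‘Basic ID’ message from the ASTM F3411 broadcast standard that several national remote ID rules draw on, but the byte layout below is an illustrative assumption, not a byte-accurate implementation of the spec.

```python
# A minimal sketch of decoding a broadcast remote ID "Basic ID" message.
# Loosely modelled on the ASTM F3411 broadcast standard; the byte layout
# below is an illustrative assumption, not a byte-accurate implementation.
from dataclasses import dataclass

@dataclass
class BasicID:
    id_type: int   # what kind of identifier this is (e.g. serial number)
    ua_type: int   # airframe category (e.g. multirotor, fixed wing)
    uas_id: str    # the identifier a bystander's app could look up

def decode_basic_id(msg: bytes) -> BasicID:
    """Decode a single 25-byte broadcast message (illustrative layout)."""
    if len(msg) != 25:
        raise ValueError("expected a 25-byte broadcast message")
    # Byte 0: message type in the high nibble, protocol version in the low
    if msg[0] >> 4 != 0x0:  # 0x0 = Basic ID in this sketch
        raise ValueError("not a Basic ID message")
    # Byte 1: ID type in the high nibble, UA type in the low nibble
    id_type, ua_type = msg[1] >> 4, msg[1] & 0x0F
    # Bytes 2-21: the UAS identifier, padded with NUL bytes
    uas_id = msg[2:22].rstrip(b"\x00").decode("ascii", errors="replace")
    return BasicID(id_type, ua_type, uas_id)
```

A smartphone application would pair an identifier like this with a registry lookup to show who is flying the drone and why – exactly the information a worried bystander lacks today.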

However, these systems are inherently limited: they will likely require both an operational UTM system (not a given during a disaster) and mobile phone access. Setting up temporary remote ID systems is possible, but remains a very novel idea.

Second, drone manufacturers have a major part to play when it comes to humanitarian drones and trust. We as aid workers must be able to trust that the consumer drones we buy are well secured and can’t easily be hacked or compromised; the responsibility for this rests in large part (if not entirely) with the manufacturer. Drone manufacturers also need to help humanitarians protect their neutrality.

Consumer drone companies market their products to a very broad set of actors: police, militaries (as a supplement to military-designed drones), the general public, and humanitarian aid workers. It benefits these companies to be able to point out that humanitarian aid workers use their products. At the same time, these companies often highlight their police and military customers in their advertising materials as well. In the absence of any clear mechanism for telling humanitarian drones from non-humanitarian drones (such as those flown by police and militaries), humanitarians may have to stop using consumer drone products simply to protect their neutrality. If consumer drone companies want to continue enjoying their association with humanitarian organizations, they will need to work closely with aid workers to develop better techniques, tools, and technologies for differentiating humanitarian drones from the drones flown by everybody else.

Knowing why people distrust drones is key

Third, while we need to develop better technical methods for building trust in drone technology, we also need more research and more information on why people distrust drones, and on how people across the world feel about the technology. Instead of assuming how people might feel about drones, we need to work harder to ask them ourselves. Only limited research exists on public perceptions of civilian drones, and most of it comes from Western countries.

The studies we do have are important reading.

For example, a 2018 study on East African perceptions of drone technology found that some respondents were less concerned about a drone taking photos of them, and more concerned about the embarrassing prospect of drone photos revealing ‘rubbish’ on their roofs and in their backyards to their neighbours. That is a valuable finding, and one that differs markedly from the popular idea that people are most likely to associate drones with bombings and violence.

We have evidence that some people are disturbed by the inscrutability or unintelligibility of drones, as suggested by recent research from the University of Southern Denmark, findings that support a push for better systems for identifying drones to people on the ground, allowing them to determine what a drone’s purpose is. And purpose matters: a 2020 study conducted in Singapore found that people were considerably more likely to support drone uses that ‘benefit society at large’ (like disaster response) and less likely to support uses that benefit or harm only individuals (like general photography or issuing speeding tickets).

Existing research indicates that race, gender, and political affiliation all play major roles in how a given person might feel about drone use. A 2017 US study found that African Americans were less likely to support police UAV use than White Americans were, while a 2019 study, also conducted in the US, found that participants were more likely to support police drone use over majority African-American neighbourhoods than over majority White neighbourhoods. In the humanitarian world, we’ll need to better understand demographic differences like these – in every place that we operate – if we want to maintain public trust in the technologies we use.

Discussions about humanitarian drones and trust today often centre on community engagement, and for good reason: it’s unacceptable for drone pilots to act like digital colonizers, parachuting into a disaster, collecting data without explaining why, and then failing to clarify to the community how that data might benefit them. But community engagement becomes a rather hollow term if it fails to consider the context in which the drones fly – and that’s why we need more research and more analysis of people’s perceptions of drone technology.

Different populations – local communities, government authorities, humanitarian actors – are likely to have very different ideas about when the risk of using drones outweighs the benefits (as exemplified by the US research showing more support for police drone use over Black communities). With that in mind, we’ll need to consider local power dynamics as we develop best practices for community engagement: who gets to sign off on using drones, and why? Are they really speaking for the best interests of the community?

Fourth, we also need more research into how well our efforts to share drone data with communities are working. As one example, a 2016 FHI 360 research project conducted in Tanzania found that community awareness of the existence of open-source drone data was generally quite low. People won’t be able to assert their rights over the data we collect if they don’t know the data exists in the first place. People are also skeptical of drone use if they can’t connect drone data to benefits for themselves and their communities. And measuring that benefit is a challenge: too often, drone pilots deliver data to the ultimate end user and fail to follow up with the community on what happens to that data, or how it is used, after the flights are over.

What are the risks of drone data?

Fifth and finally, we need a better understanding of what the risks connected to drone data actually are. Although most drone users have a general idea of which drone operations are riskier than others, there is little concrete evidence or research that validates these perceptions. In the drone world, we assume that lower-altitude drone flights are more likely to capture detailed imagery that can be used to identify a person: do we know how to quantify the risk to that person? Are there techniques or best practices we could be using to redact or modify drone data to ensure that it contains no personally identifiable information (PII) – and if we use these techniques, where might they fit into the legal frameworks we operate under? Do we have concrete case studies or examples of incidents where drone data was used to harm people? The better we understand the risks that drone data presents both to people affected by disaster and to aid workers, the better we can address those risks – and earn the trust of the people we work with.
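
As one illustration of what such a redaction step might look like, here is a minimal Python sketch that blurs detected faces in a drone photo before it is shared, using the open-source OpenCV library. The caveat in the comments matters: off-the-shelf face detectors are trained on ground-level photos and can miss people in oblique or high-altitude aerial imagery, so this is a starting point for discussion, not a guarantee of anonymization.

```python
# A minimal sketch of one possible redaction step: blurring detected faces
# in a drone photo before sharing it. Assumes the open-source OpenCV
# library (cv2) is installed. Off-the-shelf face detectors are trained on
# ground-level photos and can miss people in aerial imagery, so treat
# this as illustrative, not a guarantee of anonymization.
import cv2

def blur_faces(in_path: str, out_path: str) -> int:
    """Blur detected faces; return how many regions were redacted."""
    image = cv2.imread(in_path)
    if image is None:
        raise FileNotFoundError(in_path)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Replace each detected face region with a heavily blurred version
        image[y:y + h, x:x + w] = cv2.GaussianBlur(
            image[y:y + h, x:x + w], (51, 51), 0
        )
    cv2.imwrite(out_path, image)
    return len(faces)
```

Even a simple pipeline like this raises the legal questions posed above: whether blurred imagery still counts as personal data depends on the data protection framework the operation falls under.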

If the history of technology is any guide, drones are in a transition phase, shifting from the new to the mundane. We can’t brute-force our way to public acceptance of a new technology during that transition. Now, we have an opportunity to demonstrate to the public that they can trust humanitarians to use drones responsibly, in ways that take cultural and contextual differences into account. What happens next depends on us.
