You can’t handle the truth: misinformation and humanitarian action
Misinformation is on the rise, and the humanitarian sector has not escaped the consequences. The misinformation environment not only sustains itself in a vicious cycle, but also compounds data protection challenges and disrupts humanitarian protection and assistance work. In this post, Rachel Xu from the Yale Jackson Institute for Global Affairs outlines key characteristics of a fertile misinformation environment, the challenges these pose for data protection, and the implications of misinformation for humanitarian operations.

Misinformation has become an unwelcome fixture of digital life. While propaganda and false stories have a long history, digital communication has enabled an unprecedented proliferation of misinformation, a situation only exacerbated by the COVID-19 pandemic. Although much concern fixates on how misinformation affects online discourse and social well-being, the ways in which it translates into physical, real-world harm remain poorly understood, especially during health and humanitarian crises. Understanding how false information can be used to harm humanitarian operations and vulnerable populations is a critical first step.

The digital information environment is primed for misinformation…

Discussions surrounding misinformation[1] often scrutinize malicious actors or automated bots – after all, digital automation has allowed misinformation to spread at a scale and speed never seen before. However, while bots and malicious actors do seed and spread misinformation, much of the problem lies with the information ecosystem itself: a network of people shaped by how they share and react to information. False information in isolation is relatively harmless; it becomes harmful when people absorb it, interpret it through their own biases and prejudices, and act on it. A number of qualities make the current digital information environment fertile ground for misinformation.

To begin with, false information spreads faster than true information. In a large-scale study of false news, researchers at MIT found that false stories travel, on average, six times faster than true ones. Notably, this was not the work of bots, which shared true and false stories at approximately the same rate; it was humans who were more likely to share false information. The study identified two qualities that give false information an advantage over true information: novelty and anger. False news is more likely to be novel (precisely because it is false) and charged with anger (as it often seeks to provoke a reaction). Without critical engagement and broader digital literacy on the part of the reader, the novelty catches their attention and the anger compels them to pass it on.

Furthermore, there is simply too much information. Traditionally, false information could be weeded out through verification by trusted sources, independent fact-checking, and individual common sense. In the current information environment, however, the sheer volume of information is overwhelming; the noise makes any story appear potentially plausible or potentially suspect. Flooding the information environment is a common disinformation tactic used to destabilize public discourse. Moreover, social media has no journalistic or reporting standards: everyone gets a platform and, in theory, every voice is equally valid. To find a way through the chaos, users are more likely to fall back on their own biases and seek out information that aligns with and confirms their world view.

As such, misinformation drives a self-sustaining vicious cycle. Rumours thrive on fear and bias. People are more likely to accept rumours if they are afraid, uncertain about their future, and feel vulnerable or lacking in agency. As they search for shortcuts through the information overload, they rely on their existing biases as an anchor. New information that validates those biases is therefore more easily accepted than information that contradicts them, which in turn breeds more bias and fear, and thus a higher likelihood of accepting and passing on biased or false information in the future. The fact that humans tend to gather in like-minded groups only further entrenches these echo chambers.

This environment of fear and bias peaks during health crises, and has been particularly pronounced since the outbreak of COVID-19, during which rumours, stigma, and conspiracy theories have resulted in increased discrimination and violence against specific groups. Health crises also tend to drive greater use of digital and data-based tools to contain the spread of disease and allocate resources. With heightened misinformation and bias on the one hand, and increased data collection and use on the other, data protection in times of crisis is paramount.

However, a crisis environment can actually make data protection more challenging. For example, in an overloaded information environment where false information has a natural advantage, it becomes difficult for people to distinguish between secure, legitimate data processing efforts and unlawful data collection or scams. This is particularly concerning during times of crisis, when people are more willing to give up personal information, rights, and freedoms in an attempt to secure safety. Moreover, personal data can be used for microtargeting, which creates personalized online environments for data subjects. This proliferation of individualized online realities entrenches social echo chambers, which are then more likely to accept and disseminate misinformation.

… which makes disinformation tactics more effective

A misinformation-prone environment makes disinformation campaigns more powerful. While there have been few systematic studies of disinformation campaigns and misinformation proliferation in the humanitarian sector to date, the number of documented case studies is on the rise.

Within the digital information environment outlined above, disinformation campaigns can be highly effective when deployed against humanitarian operations. As humanitarian operations become increasingly digital, strategies need to expand to consider not only cybersecurity (discussed further in the Hacking Humanitarians blog series), but also the state of the broader information environment and the propensity for false information to propagate and disrupt operations.

Disinformation campaigns can be deployed by State, non-State, or private actors, and can be conducted across multiple media channels, often via multiple networks and targeting multiple audiences at once. In the humanitarian sector, disinformation campaigns typically affect activity in three ways.

First, they can help create new crises and/or exacerbate existing ones. Malicious actors can exploit the information environment to produce kinetic effects, such as forcibly displacing populations or inciting violence against them. This tactic works by building on prejudice and bias to compel violent action or behaviour.

Second, malicious actors can exploit the information environment to disrupt or derail humanitarian activities by mounting defamation campaigns against humanitarian organizations, tarnishing their image and eroding people's trust. Humanitarian operations are contingent upon trust from key stakeholders, including but not limited to vulnerable populations, State actors, and non-State armed actors. The humanitarian principle of neutrality allows humanitarians to operate safely in dangerous and politically fraught contexts, providing aid without fear of attack. But where there is no trust, there is no safe access, which places humanitarian operations and humanitarians themselves at risk.

Third, disinformation campaigns can diminish political will. They often target the general public and can shape opinions about fraught international crises.[2] By destabilizing public opinion in countries positioned to provide humanitarian support, malicious actors can reduce the political pressure and will to support humanitarian activities in countries experiencing conflict or other situations of violence.

The more we learn about misinformation, the clearer it becomes that there is no perfect inoculation against it. There are already rich discussions around media literacy and the regulation of private media platforms. But for humanitarians, a crucial first step is investing in more systematic research on misinformation and disinformation in the humanitarian sector. There are enough examples to indicate that disinformation is having a pronounced impact on humanitarian activity, and the stakes are high. Just as digital humanitarians must first understand and define the scope of their cyber operations in order to design coherent legal, technical, and operational cyber strategies, humanitarian organizations must first understand the trends and build an adequate typology of attacks in order to combat them. Moreover, systematic documentation is crucial for making the harm clear to other actors, such as private companies, and compelling them to act.

In a complex digital society that aims to replicate real life in online spaces, the ways people communicate will continue to evolve, and solutions can only hope to evolve with them. Understanding the problem and recognizing vulnerabilities in the status quo as they emerge is only the first of many steps toward a safer digital information environment for all.

[1] Misinformation is false or incorrect information, spread regardless of any intent to deceive. It is often shared by people who believe it to be true; it may originate in a genuine misunderstanding, but it may also result from a targeted disinformation campaign. Disinformation is false information that is spread, often covertly, with the deliberate intention to deceive or otherwise exert influence. Disinformation can seed and proliferate misinformation in the information environment, above and beyond the misinformation that arises from genuine misunderstandings.

[2] Watts, Clint. Messing with the Enemy: Surviving in a Social Media World of Hackers, Terrorists, Russians, and Fake News. Harper Business, 2018.
