The price of virtual proximity: How humanitarian organizations’ digital trails can put people at risk
Your text messages, phone calls, social media posts and cash transfers can say a lot about you, even when their content is encrypted. If this creates risk in times of peace, imagine the risks in times of conflict. This was the starting premise for the ICRC and Privacy International’s joint report, The humanitarian metadata problem: ‘Doing no harm’ in the digital era. Published today, the report examines the risks associated with the humanitarian sector’s use of certain technologies, with a particular focus on one thing: metadata.

What are metadata?

Metadata (literally, ‘data about data’) are information about a communication rather than its content. For example, metadata about a text message would include the time it was sent, the location (e.g., based on the cell towers used to transmit the message) and the devices involved (including their operating systems), but not what the message actually said. Similarly, any financial transaction made with a bank card generates metadata about the card, the payment network (or ‘circuit’) used, where the money is being spent and, possibly, what it is being spent on.
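To make that list concrete, here is a minimal sketch, in Python and with purely hypothetical field names (not drawn from the report or from any real operator’s schema), of the kind of record a single text message might leave behind. Note that the message body appears nowhere in it.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SmsMetadataRecord:
    """Hypothetical metadata generated by a single text message.

    The content of the message is deliberately absent: everything below
    can exist even if the body itself is end-to-end encrypted.
    """
    sender_number: str       # who sent the message
    recipient_number: str    # who received it
    sent_at: datetime        # when it was sent
    cell_tower_id: str       # approximate location of the sender
    device_model: str        # the handset involved
    operating_system: str    # the phone's operating system and version

# Illustrative record: no message text, yet it still shows who contacted
# whom, when, roughly where, and on what device.
record = SmsMetadataRecord(
    sender_number="+0000000001",
    recipient_number="+0000000002",
    sent_at=datetime(2018, 10, 1, 23, 47),
    cell_tower_id="tower-1842",
    device_model="generic-handset",
    operating_system="generic-os 8.1",
)
```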

Our telecommunications networks rely on metadata to get the right messages or payments to the right place. However, this usually means that metadata are less protected and easier for third parties to access, compile, interpret, sell or use against individuals. For a helpful analogy, see our two-minute explainer, ‘What are metadata?’

 

Caption: Find this helpful? We’ll be sharing two more videos that illustrate why metadata matters to the humanitarian sector. Find them using #MetadataMatters.

How much can metadata really say about a person?

Think of it this way: the time at which you send your communications can reveal your sleeping pattern; your location, your daily commute or travel itineraries; and the recipients, the people you contact most often and who might therefore be closest to you. All of this information can be cross-referenced or triangulated across different platforms: social media, messaging apps, financial records and so on. As such, metadata, especially metadata collected over long periods of time, can be used to profile an individual in great depth and with presumed accuracy.
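As a rough illustration of that triangulation, the sketch below runs two trivial queries over a handful of invented metadata records (timestamps, recipients and cell-tower identifiers only) to show how quickly a routine and a social graph start to emerge. The data and identifiers are entirely hypothetical.

```python
from collections import Counter
from datetime import datetime

# Hypothetical metadata records: (timestamp, recipient, cell_tower_id).
# No message content is needed for any of the inferences below.
records = [
    (datetime(2018, 10, 1, 23, 50), "+0000000002", "tower-1842"),
    (datetime(2018, 10, 2, 7, 5),   "+0000000003", "tower-1842"),
    (datetime(2018, 10, 2, 8, 40),  "+0000000002", "tower-0077"),
    (datetime(2018, 10, 2, 23, 55), "+0000000002", "tower-1842"),
]

# Hours with no activity hint at a sleeping pattern.
active_hours = {ts.hour for ts, _, _ in records}
quiet_hours = sorted(set(range(24)) - active_hours)

# The most frequently contacted number suggests the closest contact.
closest_contact, _ = Counter(r for _, r, _ in records).most_common(1)[0]

# A tower seen late at night is probably 'home'; a different daytime
# tower points to a commute or workplace.
night_towers = {tower for ts, _, tower in records if ts.hour >= 22}

print(quiet_hours, closest_contact, night_towers)
```

Scaled up from four records to months of traffic, and joined with social media or payment metadata, the same trivial logic is what makes this kind of profiling so cheap.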

This triangulation raises concerns about the use of metadata for surveillance. You no longer have to trace people’s movements or bug their phones to gather intimate details about them; those methods could, in any case, be detected and countered with concealment tactics. This ‘new’ kind of surveillance is cheap and accessible, including to private actors with a vested interest in targeted advertising and customer profiling (banks, large retail companies and the like).

What’s the link with the humanitarian sector?

The humanitarian sector is increasingly relying on certain technologies to engage with and serve people in need (e.g., social media to engage with people remotely, or Smartcards and mobile money for cash transfer programmes). However, this means that the sector is taking part in, and in certain areas even driving, the generation of information records that could place the very people it is trying to help at risk.

For example, imagine a humanitarian organisation that uses text messages to communicate with people registered in a cash transfer programme. The phone records created by those text messages could be used to profile those individuals as risky borrowers, simply because they once received financial aid.

Metadata can also be used to infer people’s political affiliations, gender identity, involvement in criminal activity or affiliation with an opposition group. We have heard intelligence agency directors state: ‘We kill people based on metadata’.[1] If metadata generated by the humanitarian sector are used to profile people in need, this can have serious implications for the ‘do no harm’ principle of our work. Those same metadata can also be used more broadly, for example to assist in military operations, undermining the neutrality, impartiality and independence of humanitarian action.

But who has access to these kinds of metadata records?

It depends. Here’s one example: metadata about a text message are accessible to the telecommunications service provider. Those metadata might also be accessible to certain government agencies. The issue becomes more complicated if the telecommunications service provider is part of a global conglomerate, as this can involve other jurisdictions and governments. Metadata can also be intercepted while travelling across networks.

Another example would be metadata about a cash transfer that a bank has flagged as ‘suspicious activity’. This could make those metadata accessible to every other financial institution in the same banking group or jurisdiction, to national intelligence agencies, or to foreign intelligence agencies that have information-sharing agreements with them…

Obviously, there are legal safeguards that protect individual data in certain countries or regions, and the push for such safeguards has been further fuelled by the recent Cambridge Analytica controversy. Still, legislation on metadata and data protection is not uniform worldwide, and the places where humanitarian organisations operate tend to be under-legislated in this area, or to lack the rule of law and enforcement capacity needed to apply the rules that do exist.

Based on all of the above, what is the report’s single greatest recommendation on how to move forward?

There needs to be a better mapping of who has what kind of access to which kind of metadata, and for how long. Without this mapping, we will never be able to anticipate—based on different stakeholders’ interests—what risks might arise from the generation or collection of metadata from humanitarian staff, volunteers and the people they serve. In the meantime, the report also promotes increased digital literacy within and outside of humanitarian organisations.
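One simple way to picture such a mapping, offered here purely as an illustration rather than as a format proposed by the report, is a table that records, for each kind of metadata, which actors can access it and for roughly how long it is retained. The actors and retention periods below are hypothetical placeholders.

```python
# Hypothetical access map: metadata type -> who can access it, and for how long.
# Every entry is an illustrative placeholder, not a finding from the report.
access_map = {
    "sms_metadata": {
        "accessible_to": ["telecom operator", "parent conglomerate", "national authorities"],
        "retention": "e.g. months to years, depending on local data-retention rules",
    },
    "cash_transfer_metadata": {
        "accessible_to": ["issuing bank", "wider banking group", "financial intelligence unit"],
        "retention": "e.g. several years under anti-money-laundering rules",
    },
}

for kind, entry in access_map.items():
    print(f"{kind}: {', '.join(entry['accessible_to'])} | {entry['retention']}")
```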

***

The report’s findings will form part of the discussion at the ICRC’s Symposium on Digital Risks in Situations of Armed Conflict, taking place December 11 and 12 in London, UK. This symposium will be attended by close to 200 participants from government, the United Nations, private tech companies (including social media / software companies), academia and the humanitarian sector.

***  


Footnotes

[1] Video: The end of privacy: Re-evaluating the NSA, Johns Hopkins University symposium, 2014
