What is it about biometrics?
Biometric data are personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a person, which allow or confirm the unique identification of that person. As the use of biometrics has become more widespread, the applications and equipment that support them have become cheaper and easier to deploy. This is making them increasingly attractive for organisations that need to validate people’s identities on a regular basis. This includes humanitarian organisations, which have developed and implemented biometric ID systems in increasing numbers due to the efficacy and accountability they are perceived to bring to their operations.
These dynamics have created a tangible impetus for the humanitarian sector to use biometrics for beneficiary registration and aid distribution. Staff in the field see other organisations using biometrics and reporting various benefits, and it is logical that they wish to use the same tools. Management want their operations to be as efficient and ‘agile’ as possible. And there is implicit pressure from donors, which are increasingly demanding “end-to-end auditability” and making more and more humanitarian funding contingent on demonstrable anti-fraud and accountability processes. Though donors do not explicitly require the use of biometrics, such systems appear to offer – and are certainly marketed as – the most attractive means of satisfying multiple humanitarian programming requirements. Biometrics are also playing a central role in the scaling up of cash-transfer programming across the sector, with many financial service providers viewing them as a correspondingly simple way to meet their KYC (Know Your Customer) and other legal ‘due diligence’ requirements.
Quite apart from aid distributions, the ICRC also has an interest in biometric technology such as DNA profiling to assist in the identification of human remains and in determining the fate of the missing. It is also exploring the potential of using facial matching technology in its Restoring Family Links programme to locate persons sought by family members following separation due to humanitarian emergencies. With facial recognition technology frequently in the news, and even vendors calling for regulation, the need for an overarching Biometrics Policy which minimises risks to beneficiaries, and with them risks to the ICRC’s reputation, is palpable.
Biometrics and data protection
Biometric data are recognised as sensitive in both law and practice. The EU’s General Data Protection Regulation (GDPR) – which has galvanised jurisdictions across the world into adopting, revising, proposing or considering their own data protection laws – designates biometrics as “special category” data, bringing higher thresholds for both collection and protection of the data. The African Union Convention on Cybersecurity and Personal Data Protection, the “modernised” Council of Europe Convention on the Automatic Processing of Personal Data, and many other national privacy laws also contain special rules restricting the processing of biometric data. Although these rules do not apply to the ICRC, which has adopted Rules on Personal Data Protection that reflect its status as an international organisation, the core principles upon which they are based are the same.
The International Conference of Data Protection and Privacy Commissioners (ICDPPC) first raised concerns about biometrics in a 2005 Resolution warning that “the widespread use of biometrics will have a far-reaching impact on the global society” and “should therefore be subject to an open worldwide debate”. In 2012, guidance from EU data protection supervisory bodies cautioned that “The use of biometric technologies is also gradually spreading from their original sphere of application: identification and authentication to behaviour analysis, surveillance and fraud prevention”. Concerns about the use of biometrics in various humanitarian settings were set out in Privacy International’s 2013 report ‘Aiding Surveillance’, which argued that there had been a “systematic failure to critically contemplate the potential ill effects of deploying technologies in development and humanitarian initiatives, and in turn, to consider the legal and technical safeguards required in order to uphold the rights of individuals living in the developing world” (p.7). In 2015, the ICDPPC adopted a Resolution on Privacy and International Humanitarian Action which also stressed the risks, inter alia, of ID systems and biometrics.
Biometrics and the protection of humanitarian beneficiaries
Regardless of applicable legal regimes, biometric data is particularly ‘sensitive’ in humanitarian emergencies because, if retained after collection, it creates a permanently identifiable record of an individual. For the beneficiaries of humanitarian assistance this can be problematic because they may not want to be identifiable forever, particularly if there is a risk that data may be leaked or subject to unauthorised access by third parties. Where biometric identity management systems are concerned there is also a significant risk of ‘function creep’ over time, opening up the possibility that the data will ultimately be used in ways that beneficiaries do not want, understand or consent to.
While data protection policies and robust data security can mitigate these risks, the use of biometric technologies by humanitarian organisations also raises important ethical issues. In various large-scale humanitarian contexts, affected populations have expressed serious concerns about the use of biometrics and potential access to the data by non-humanitarian organisations, particularly for security and migration control [1]. Because biometric data is attractive for these purposes, humanitarian organisations have already come under pressure from States to disclose it. They are also vulnerable to cyber-operations by State and non-State actors seeking unauthorised access.
For the ICRC, the protection of personal data whose disclosure could put its beneficiaries at risk, or which could otherwise be used for purposes other than those for which it was collected, is an integral means of preserving its neutrality, impartiality, and independence, as well as the exclusively humanitarian nature of its work.
Ascertaining legitimacy and purpose
The ICRC Biometrics Policy, adopted by the ICRC Assembly in August 2019, was elaborated over an 18-month period and is the result of extensive research, analysis, consultation and reflection. This process included a review of all situations and scenarios in which the ICRC is processing or considering the use of biometrics, an evaluation of the “legitimate basis” and specific purposes for the processing, and the identification of organisational, technical and legal safeguards.
Establishing a legitimate basis is straightforward where the ICRC processes biometric data in accordance with specific objectives associated with its mandate – for example to identify individuals in its work on Restoring Family Links and determining the fate or whereabouts of the missing – and where particular objectives cannot be realised without using biometrics. In this case the ICRC processes the biometric data as a matter of “public interest” (in the implementation of the ICRC’s mandate).
The issue is much more challenging when it comes to using biometrics for beneficiary management and aid distribution, where the processing of the data may not be viewed as an integral part of an ICRC mandate-based activity requiring the identification of individuals. Because the purpose here is primarily linked to efficiency, and insofar as aid can be (and long has been) distributed without the need for biometrics, the ICRC would have to establish that the “legitimate interest” it has in establishing any biometric identity management system is not outweighed by the potential impact on the rights and freedoms of the individuals concerned. This balancing test is standard in data protection law whenever a data controller relies on its own interests as a basis for processing.
In its analysis the ICRC found, however, that a balance could be struck that would still allow the institution to leverage the advantages that biometric authentication offers in terms of efficiency, effectiveness and end-to-end accountability in its aid distributions, while minimising the risks to which its beneficiaries would be exposed.
This balance rests on limiting the processing to a token-based system wherever operations wish to use biometric data in the registration and verification of beneficiaries. In practice this means that beneficiaries may be issued with, for example, a card on which their biometric data is securely stored, but that the ICRC will not collect, retain or further process their biometric data (and will not therefore establish a biometric database).
The token/card may be used to verify beneficiaries during aid distributions to ensure that the aid reaches those individuals for whom it has been earmarked, but no other use of the biometric data will be possible. If beneficiaries want to withdraw or delete their biometric data, all they need do is return or destroy the card. And should authorities in a particular country seek to compel humanitarian organisations to hand over the biometric data of beneficiaries, the ICRC will not face such pressure because it will not in fact hold this type of data.
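To make the token-based approach more concrete, the sketch below illustrates the flow in Python under simplified assumptions: a biometric template is written to the beneficiary’s card at registration, verification at a distribution point compares a live capture against the template read back from the card, and nothing is retained in any central database. All names (BeneficiaryCard, enrol, verify_at_distribution) and the byte-comparison matcher are hypothetical; a real deployment would rely on secure card hardware, standardised template formats and a certified biometric matching algorithm.

```python
# A minimal, purely illustrative sketch of a token-based verification flow.
# All names are hypothetical and the matcher is a placeholder: a real system
# would use secure card hardware and a certified matching algorithm rather
# than this simplified comparison.

from dataclasses import dataclass

MATCH_THRESHOLD = 0.8  # assumed similarity threshold for accepting a match


@dataclass
class BeneficiaryCard:
    """Token issued at registration; the template lives only on the card."""
    card_id: str
    template: bytes  # biometric template, never copied to a central database


def enrol(card_id: str, captured_template: bytes) -> BeneficiaryCard:
    """Write the freshly captured template to the card and hand it over.

    The issuing system retains nothing once the card is given to the beneficiary.
    """
    return BeneficiaryCard(card_id=card_id, template=captured_template)


def similarity(a: bytes, b: bytes) -> float:
    """Placeholder matcher: fraction of identical bytes in equal-length inputs."""
    if not a or len(a) != len(b):
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / len(a)


def verify_at_distribution(card: BeneficiaryCard, live_capture: bytes) -> bool:
    """Compare a live capture against the template read from the card.

    The comparison is local and transient; neither template is stored afterwards.
    """
    return similarity(card.template, live_capture) >= MATCH_THRESHOLD


if __name__ == "__main__":
    card = enrol("card-001", b"example-template")
    print(verify_at_distribution(card, b"example-template"))  # True
    print(verify_at_distribution(card, b"another-capture!"))  # False
```

In this model, withdrawing from the system amounts to returning or destroying the card, since the card is the only place the template exists.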
What about consent?
While the ICRC is committed to rendering its data processing as transparent as possible to its beneficiaries and affected populations, it does not believe that consent provides a legally valid basis for data processing in many emergency situations.
This is because consent to data processing cannot be regarded as valid if the individual has no real choice: for example, where the provision of aid is effectively dependent on the provision of personal information, and consent is therefore unlikely to be “freely given”. In addition, the power imbalance and the situation of the beneficiary mean that there is no real “choice”, and the individual is induced to accept what is proposed by a humanitarian organisation. Moreover, where biometrics are concerned, it is extremely difficult to ensure that consent is genuinely “informed”, since in many situations the affected population may not be able to fully comprehend the technology, information flows, risks or benefits that underpin the processing of their biometric data.
The Biometrics Policy, in line with the ICRC Rules on Personal Data Protection, requires the ICRC to explain the basis and purpose of data processing to its beneficiaries, including any data-sharing arrangements, regardless of the basis for the processing. The ICRC also seeks to ensure that beneficiaries have the opportunity to ask questions and object to data processing if they so wish, particularly where data may be shared with third parties. If people do not want to provide their biometric or other personal data, or to see their information shared for humanitarian purposes with partners, the ICRC will respect their wishes.
What the Policy means for the ICRC
In the first instance, the ICRC can be confident that it has now identified a “legitimate basis” for using biometrics in its operations and programmes and that, by implementing the organisational, technical and legal safeguards contained in the Policy, it can protect its staff and beneficiaries from the risks that arise.
Programme managers wishing to use biometrics will need to ensure that the purpose and modalities for processing biometrics comply with the Policy. Any new use cases will require a Data Protection Impact Assessment and the development of appropriate safeguards for beneficiaries. The ICRC Biometrics Policy also provides for risk assessment and consultation with affected populations prior to the deployment of any biometric solution.
The Policy also makes it clear that the ICRC will only use biometric data where it enhances the capacity of the organisation to implement its humanitarian mandate and will under no circumstances share biometric data with third parties, including authorities, that may use them for non-humanitarian purposes. Even where exclusively humanitarian grounds for sharing biometric data can be identified, strict conditions must still be satisfied before the data can be transferred by the ICRC.
Finally, the Policy also commits the ICRC to the transparent use of biometrics. Publishing it on the ICRC website and explaining it here is part of that effort.
Looking to the future
The ICRC is committed to working at the cutting edge of technology in a way that enables the organisation to realise demonstrable benefits for its operations and beneficiaries where possible. This means that the ICRC will keep developments regarding the availability, security, cost, effectiveness and impact of biometric technology under review. It will also review its own Biometrics Policy at least every three years to consider new use cases and technologies where appropriate.
The purpose of the review is to keep abreast of technological developments that could mean the use of a specific biometric poses significantly more or less risk in the future than it has been judged to pose now. This means that the decision not to establish a biometric database for the purposes of identity management at the present time will effectively be kept under review.
The ICRC will also try to assess beneficiary perception of how biometrics are used. The Policy may then be amended if necessary, potentially to widen the scope for using biometrics, or to introduce new safeguards.
***
[1] See for example ‘Yemen’s Houthis and WFP dispute aid control as millions starve’ (Reuters, 4 June 2019), ‘Rohingya Refugees Protest, Strike Against Smart ID Cards Issued in Bangladesh Camps’ (Radio Free Asia, 26 October 2018) and ‘Over 2,500 Burundi Refugees in Congo Seek Shelter in Rwanda’ (Voice of Africa News, 8 March 2018).
***
Other blogs on this topic
Digital risks for populations in armed conflict: Five key gaps the humanitarian sector should address, Delphine van Solinge, 12 June 2019
The price of virtual proximity: How humanitarian organizations’ digital trails can put people at risk, Tina Bouffet & Massimo Marelli, 7 December 2018
Protecting the digital beneficiary, Gus Hosein, 12 June 2018
Humanitarian experimentation, Katja Lindskov Jacobsen, Kristin Bergtora Sandvik & Sean Martin McDonald, 28 November 2017
The data divide: Overcoming an increasing practitioner-academic gap, Larissa Fast, 2 November 2017