Different actors, different perspectives
Data in crises are amassed by and for different actors, with different aims. Humanitarians gather data about the flows and conditions of refugees or internally displaced persons, the numbers of children who die before their fifth birthday, the symptoms and treatments of cholera or Ebola patients, the biometric data of those receiving assistance, or community feedback about programmes. These data, often collected using digital technologies such as mobile phones, tablets and apps, are then used to inform or improve operational activities—whether the provision of shelter, water and sanitation, food, medical interventions, cash, or other assistance. Peacekeepers and security managers gather data about threats and incidents of violence between civilians or between armed actors, with the aim of informing decisions about the deployment of peacekeepers or the security measures employed to protect staff and programmes. Humanitarians, peacekeepers, and peacebuilders all generally work in situations of violence. Researchers, by contrast, typically work on conflict or crisis, seeking to understand the causes and consequences of violence, also through collecting and using data. For example, the Uppsala Conflict Data Program (UCDP) systematically tracks battle-related deaths and deliberate violence against civilians reported in public sources to create an annually updated dataset of armed conflict and organized violence. Unfortunately, the data collected and used by those working in and on conflict and crisis rarely intersect.
Data collection by practitioners
Consider this: those working in conflict gather data that they use to inform their programmatic and operational decisions. We know from experience that data collected by and for field operations contain inaccuracies. This reflects the many challenges of amassing data in difficult environments—whether a situation of armed conflict or the height of a deadly epidemic such as Ebola. For example, the ‘cleaned’ data produced after the Ebola crisis ended showed that the actual number of new cases in all three of the most-affected countries was substantially lower than the number reported in the midst of the crisis, by between one-third and one-half (see Table 2 here). Over time, it was possible to remove instances where patients were double-counted and to match individual cases with their lab results. These and other revisions lowered the number of new cases per week at the height of the epidemic.
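To make the kind of revision described above concrete, the sketch below shows how double-counted patient records might be removed and individual cases matched with their lab results. It is a minimal illustration only: the column names, case identifiers, and figures are all invented, and this is not the actual procedure or dataset used during the Ebola response.

```python
import pandas as pd

# Case records as reported during the crisis (invented data).
# Note that case C002 appears twice: a double-counted patient.
reported = pd.DataFrame({
    "case_id":  ["C001", "C002", "C002", "C003"],
    "district": ["Kailahun", "Kenema", "Kenema", "Kailahun"],
    "week":     [1, 1, 1, 2],
})

# Lab results arriving later (invented data).
labs = pd.DataFrame({
    "case_id": ["C001", "C002", "C003"],
    "result":  ["positive", "negative", "positive"],
})

# Step 1: remove double-counted patients.
deduplicated = reported.drop_duplicates(subset="case_id")

# Step 2: match each remaining case with its lab result and keep
# only lab-confirmed cases.
confirmed = (
    deduplicated
    .merge(labs, on="case_id", how="left")
    .query("result == 'positive'")
)

# The revised weekly counts are lower than those originally reported.
print(reported.groupby("week").size())   # counts as reported: week 1 -> 3
print(confirmed.groupby("week").size())  # counts after cleaning: week 1 -> 1
```

Trivial as these steps look, each depends on reliable unique identifiers and timely lab reporting, neither of which can be taken for granted at the height of an epidemic.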
During a crisis, however, these cleaned and verified data are virtually impossible to obtain. Instead, practitioners make choices based on the available data, recognizing that these data are often incomplete and flawed. This does not excuse deficient data or unsound data collection practices, but rather points to the exigencies of circumstance, to practitioners’ urgent or short-term time frames, and to the reasons why they collect and use data.
Data collection by researchers
On the other side are those carrying out research on crisis situations. Academics emphasize sound methods to produce qualitative and quantitative data (since data are not composed only of numbers). They seek clean, detailed and reliable data to test assumptions or explain the phenomena under study. Academic goals typically diverge from those of the practitioner in that scholars seek to create or add to theory, to compare, cumulate or replicate findings, and to contribute to a body of knowledge that builds over time. Replicability necessitates transparency about methodological choices and limitations. Cumulation requires clear definitions or standards to enable comparison of like with like, and over time.
All this entails debates about methodological choices and theoretical contributions, usually through peer review. This process takes time, often years from a project’s conception to the publication of findings. For example, over several decades the UCDP’s data collection efforts have generated a rich and robust body of theoretical knowledge about the causes, dynamics, and consequences of armed conflict.
Diverging realities
Each of these groups operates in its own world. Academics typically write for other academics, presenting their findings at academic conferences and referencing scholarly concepts and debates. They seek, and are rewarded for, novelty or originality in their research and writing, not for its real-world applicability or relevance. Researchers are often retrospective (and sometimes predictive), while practitioners are reactive, responding to the latest crisis. These differences limit the possibility that researchers can contribute insights that are relevant and useful for practitioners, or that practitioners can inform the theoretical questions and contributions of scholars.
The collection and use of data amplify the scholar-practitioner gap in new ways. The Ebola example above illustrates this divide: operational actors made decisions based upon the data available at the time, while researchers later analyzed the situation based upon the cleaned and verified data. Each perspective is valid in its own way, yet each presents a different version of reality. Because data are employed to inform research and practice, they increasingly constitute the frame within which we interpret and seek to understand the world around us. If practitioners and scholars are using different data sets, it stands to reason that their understandings of reality likewise differ, much as the proverbial blind men gain different understandings of an elephant based upon their respective vantage points.
Bridging the divide
Setting aside debates about the promise and pitfalls of data and technology in humanitarian settings, which are many (for example, see here and here), where does this leave us? I propose two primary takeaways.
First, this suggests the need for more, not less, conversation between academics and practitioners about the data we all collect and use. These data can be complementary, but making this possible requires discussion and preparation before a crisis. Data standards—one element of the call for humanitarian reform (see here and here)—could forge a path toward achieving complementarity. Convening scholars and practitioners who collect and use data about conflict to discuss definitions and desired data points could contribute both to the development of the Humanitarian Exchange Language (HXL) and to the dissemination of data standards across the humanitarian community. These datasets, in turn, could be made available to researchers wanting to understand the causes and dynamics of conflict or to increase the efficacy and value of humanitarian response. Of course, this requires attention to ethics and privacy issues surrounding data in humanitarian settings. The new Humanitarian Data Centre could be one venue for convening these discussions.
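To give a sense of what such a standard involves: HXL works by adding a row of standard hashtags beneath the column headers of an ordinary spreadsheet or CSV file, so that datasets structured differently by different agencies can still be matched and merged. The snippet below is a minimal, invented illustration of this tagging pattern; the organisations, places, and figures are hypothetical, and real usage should follow the published HXL tag dictionary (hxlstandard.org).

```python
import io
import pandas as pd

# An HXL-tagged table: the second row carries standard hashtags that
# identify what each column means, independent of the free-form
# human-readable headers above it (all values invented).
hxl_csv = """\
Organisation,District,People affected,Date reported
#org,#adm1+name,#affected,#date+reported
Relief Org A,District X,1200,2018-11-01
Relief Org B,District Y,850,2018-11-02
"""

# Skip the free-form header row and read the hashtag row as the schema.
# Because the hashtags are standardized, tables from different agencies
# can be concatenated or compared without manual column mapping.
df = pd.read_csv(io.StringIO(hxl_csv), skiprows=1)
print(df.groupby("#adm1+name")["#affected"].sum())
```

The design choice matters: because the tags live inside the data file itself, agencies can adopt the standard incrementally, without changing their existing spreadsheets or software.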
Second, I suggest we need more involvement of those directly affected by conflict and violence and of those who collect their data. What data do they need or want? What are they reporting on, and why? What would humanitarian data and information look like if they started from the needs of local responders and affected communities? Many of the data systems underpinning humanitarian efforts are built to support coordination and the international response, to monitor project progress, and to populate reports to donors. These data move up, and not necessarily back down to those collecting the data or directly affected by it. For example, project data are collected to serve monitoring and evaluation plans, which are meant to demonstrate impact and account for funds provided by donors. End-line evaluations are retrospective and occur too late to improve programming or its impact on those whom it affects.
What would it mean to reverse this flow, to have a push factor that defines data from the bottom and moves it up and then back down in a feedback loop? Better yet, a loop that functions in near real-time? Many of these issues plague academic research as well, where data of various kinds are extracted from or about communities and settings to support research published in articles behind paywalls (my recent article on this topic admittedly being one of them). Several sources have pointed to the ethical issues of data collection and evidence in humanitarian action (e.g., Evidence Aid’s recent series).
As we approach Humanitarian Evidence Week, these issues deserve more attention and discussion, not least because the data divide undermines our collective ability to respond effectively to conflict and crisis.
***
Larissa Fast, Ph.D., is a scholar and practitioner, working at the intersection of research, policy and practice related to humanitarianism, conflict and peacebuilding. She is Senior Research Fellow at the Humanitarian Policy Group/ODI in London (UK), and a former Fulbright-Schuman scholar. She has published extensively in both scholarly and policy-focused venues, including for the International Review of the Red Cross.
NOTE: Posts and discussion on the Humanitarian Law & Policy blog may not be interpreted as positioning the ICRC in any way, nor does the blog’s content amount to formal policy or doctrine, unless specifically indicated.