Do ICTs Make Evaluation More Inclusive Or More Extractive?

By Linda Raftree on January 4, 2016


ICTs can help make evaluation more inclusive, yet they also bring new challenges and new kinds of inequities and exclusion that we need to be aware of and solve for. Evaluations that address equity issues or that aim to serve as a tool for promoting social justice and inclusion face the same “real world” challenges that any evaluation faces in terms of data quality, analysis and dissemination.

In addition, there are specific challenges related to the integration of ICTs in equity-focused evaluation, including key ethical questions that need careful consideration. Another potential challenge is that some evaluators may become so fascinated with the speed and reach of the new technologies (“collecting responses from thousands of people in real time – surely the findings must be valid”) that they overlook basic evaluation good practice, such as guarding against sample bias.

Has the use of ICTs changed the nature of exclusion?

We normally think that ICTs, especially mobile phones, will allow evaluators to reach and contact previously remote or unreachable populations, making mobiles and ICTs interesting tools for inclusion and equity-focused evaluations. This was a recent focus of an SSIR article, “Community Voices for Social Good.” But exclusion may still be an issue, just now at a more micro level, due to community- or household-level disparities and to differing levels of access to technology, which vary across gender, economic status, age, tribe/ethnicity, etc.

There is also the danger that some agencies may see the new data collection tools as a way to strengthen an extractive approach, where it becomes easier to collect data for analysis by the “experts” with even less need to interact with vulnerable groups. When designing an evaluation and planning for data collection using ICTs, evaluators need to consider intersectionality as it relates to digital exclusion – in other words, how multiple forms of exclusion combine in ways that compound vulnerability.

Is the use of ICTs for data collection increasing risk for marginalized individuals or groups?

Using ICTs for data gathering can become a serious issue if evaluators and NGOs are not aware of potential privacy risks and are not familiar with digital data security protocols. Organizations might be using outdated research ethics and consent policies that do not consider digital dimensions. Remote surveying through a mobile platform might put a respondent’s physical and emotional safety at risk because it is difficult for a researcher to know in what context a person is answering a survey (Alone? In public? With a spouse standing there?).

Other potential risks arise when data is ‘opened’ by an organization, social enterprise or donor without sufficient consideration for local vulnerabilities and safety, or because of the possibility of de-anonymization of that data in the not-so-distant future. In addition, when surveys are done via mobile phone, it can be difficult to explain consent and/or to guarantee that a respondent has properly understood the potential future uses of his or her data. Some data specialists also question whether privacy can ever be guaranteed, particularly when multiple sources of data are being collected. Different sources of “anonymous” data can be triangulated, often making it possible to identify individuals.
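The triangulation risk can be illustrated with a toy sketch (every name, field and value below is invented for illustration): two separately released “anonymous” datasets, joined on shared quasi-identifiers such as age, district and gender, can single out an individual and reveal a sensitive attribute.

```python
# Hypothetical illustration of de-anonymization by triangulation:
# a "de-identified" survey release is linked to a public record
# via shared quasi-identifiers (age, district, gender).

health_survey = [  # released without names
    {"age": 34, "district": "North", "gender": "F", "hiv_status": "positive"},
    {"age": 34, "district": "North", "gender": "M", "hiv_status": "negative"},
]

voter_roll = [  # public record that includes names
    {"name": "A. Mensah", "age": 34, "district": "North", "gender": "F"},
    {"name": "K. Owusu", "age": 51, "district": "South", "gender": "M"},
]

def link_records(release, public):
    """Join the two datasets on quasi-identifiers; a unique match
    re-identifies an individual in the 'anonymous' release."""
    matches = []
    for r in release:
        candidates = [p for p in public
                      if (p["age"], p["district"], p["gender"]) ==
                         (r["age"], r["district"], r["gender"])]
        if len(candidates) == 1:  # unique combination -> person identified
            matches.append((candidates[0]["name"], r["hiv_status"]))
    return matches

print(link_records(health_survey, voter_roll))
```

In this sketch one respondent is uniquely identified because her combination of quasi-identifiers appears only once in the public record; the same mechanism operates, at scale, whenever multiple real-world data sources overlap.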

Has the use of ICTs introduced new forms of bias?

Another potential concern when using ICTs in evaluation is new forms of bias that ICTs might bring. Evaluators are beginning to experiment with the incorporation of ‘big data’ into their processes, and need to be very careful that the large amounts of data they can now collect more quickly and cheaply are not skewing results towards the more digitally visible parts of a population. This is especially important when ‘harvesting’ data from social media sites that may attract a particular type of audience that fits a certain profile.

Another potential risk is that an increase in remotely collected data and/or automatically generated big data may lead to more situations where data is ‘harvested’ and then interpreted by ‘experts’ from afar who have a limited understanding of the local context. Additionally, the ease of harvesting data from digital platforms may create an overall bias towards quantitative data and mono-method data collection. Lastly, though more and more data is available, evaluators and researchers also need to pay close attention to the quality of the data at the source. If they do not, people sitting at headquarters or at a donor agency, with very little contextual awareness of a program, may end up using poor-quality data to make critical decisions about program strategy and/or funding.

Big data introduces new kinds of challenges. In contrast to conventional survey data, where the definitions and data collection processes are well understood, much big data is collected through complex algorithms to which researchers do not normally have access. These algorithms are also often updated over time, making it difficult to know how comparable the data is from one period to the next (Google Flu Trends is a frequently cited example of this problem). The quality and precise definition of the data are also often not fully understood. Patrick Meier (2015) also warns about the dangers of false big data. Finally, response rates to many kinds of ICT-based data collection can be extremely low (sometimes less than 10 percent).

Some organizations are digging in to find ways to address these challenges. VOTO Mobile and Dimagi, for example, shared their experiences on a panel, “Do ICTs make us more inclusive or more extractive?”, held in October 2015 at the MERL Tech Conference in Washington, D.C. Both organizations showed that it is possible to achieve a high level of methodological rigor through carefully designed ICT-enabled studies, but that achieving rigor may be technically complex, expensive and time-consuming.

Following the presentations by VOTO Mobile, Michael Bamberger explored some of the potential challenges and solutions for greater methodological rigor when using ICTs for evaluation, as outlined below:

  • Large samples do not automatically reduce sample selection bias. Two kinds of bias must be addressed: respondents are usually not representative of the sample population [Type A bias], and owners of mobile phones differ in important ways from non-owners/users [Type B bias]. Current efforts to reduce bias in mobile phone surveying normally address Type A bias but not Type B bias.
  • Carefully designed, expensive strategies might be needed to increase response rates and reduce sample bias. This might even require a similar amount of human intervention as a conventional evaluation process; for example, community meetings and preparatory or post-interview visits to sample households might all be essential to increase response rates. Many logistical challenges also need to be addressed, and these might add expense to the original evaluation or data gathering plan. The case studies that VOTO Mobile shared illustrate the extremely careful and expensive sampling methods that must be used to ensure a representative sample. The Dimagi case showed that, in selecting a representative sample, it is important to recognize that different channels – SMS, mobile apps, interactive voice response (IVR) – may reach and/or appeal to different populations, so several channels may have to be combined.
  • Valid evaluation data will normally require mixed method approaches, combining ICT-enabled approaches with human-centered ones. There’s also a real need to understand social context and the mechanisms of social control in equity-based evaluation, and that does not go away just because ICTs are added. ICTs are not a quick-fix to improve evaluation. Rather, incorporating ICTs into M&E requires long-term commitment and evaluation capacity development. As part of this, more focus on theory is recommended [i.e. theory of change] to provide a framework for designing an evaluation and interpreting findings early on in the program design and planning stages.
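The sample bias point above can be made concrete with a small simulation (all numbers are hypothetical): even with tens of thousands of responses, a mobile survey that can only reach phone owners (Type B bias), and among them draws disproportionately on higher-income responders (Type A bias), overestimates the population average.

```python
import random

random.seed(42)

# Hypothetical population of 100,000 people; the outcome of interest
# (an income score) is lower on average among non-phone-owners.
population = []
for _ in range(100_000):
    owns_phone = random.random() < 0.6            # 60% own a phone (Type B gap)
    income = random.gauss(50 if owns_phone else 30, 10)
    population.append((owns_phone, income))

true_mean = sum(inc for _, inc in population) / len(population)

# A mobile survey reaches only phone owners (Type B bias), and only a
# subset respond -- here response probability rises with income (Type A bias).
owners = [inc for owns, inc in population if owns]
responders = [inc for inc in owners if random.random() < min(1.0, inc / 80)]

survey_mean = sum(responders) / len(responders)

print(f"true population mean:   {true_mean:.1f}")
print(f"mobile survey estimate: {survey_mean:.1f} (n = {len(responders):,})")
# Large n, yet the estimate stays biased upward.
```

The sample here runs to tens of thousands, but because neither bias is corrected, the estimate remains well above the true mean; weighting or complementary offline data collection, not sheer volume, is what reduces the gap.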

Summing up

In equity-based evaluation, the incorporation of new technology tools should be focused on reaching previously marginalized groups and opening space for voice and participation rather than on making top-down planning and data extraction easier for institutions and evaluators.

The potential unintended consequences of introducing technology-based M&E also need serious thought. The risks and challenges associated with the introduction of technology tools vary and depend on the target population or group and factors like age, poverty levels, and gender. It is important to consider how ICTs contribute to and detract from inclusiveness and how evaluators can mitigate bias, risk and exclusion in tech-enabled evaluation design and roll-out.

Thus, while ICTs have tremendous potential to complement and, in some cases, renovate current approaches to program evaluation, a number of issues need to be addressed as evaluators move forward with the use of ICTs in their practice.

This post is co-authored with Michael Bamberger and Veronica Olazabal. This panel was made possible through a grant supported by The Rockefeller Foundation’s Office of Evaluation.


Written by
Linda Raftree has worked at the intersection of community development, participatory media, rights-based approaches and new information and communication technologies (ICTs) for 20 years. She blogs at Wait... What?

One Comment to “Do ICTs Make Evaluation More Inclusive Or More Extractive?”

  1. Thanks for this excellent and timely piece, Linda. I think you’re absolutely right that ICTs aren’t a panacea for inclusiveness – even though they are often marketed as such. Reaching more people, as you say, doesn’t necessarily ensure that all voices are heard equally.

    That’s why our mission at VOTO is to amplify the voice of the underheard. We were born out of a belief that the voices of marginalized populations are underrepresented in development. Traditional techniques to reach citizens overlooked systemic challenges such as language, literacy, and distance when collecting citizen feedback.

    The reason tools like VOTO are so powerful is that we can now reach those people at greater frequency – and, particularly relevant for the donor community – at a much lower cost (in fact, at 5% of the cost, in some cases: http://www.alliancemagazine.org/feature/civic-solutions-a-new-era-for-citizen-feedback/). But we will only reach the most marginalized and amplify the voices of the underheard if the same methodological rigor that is applied to traditional evaluation is applied to evaluations using ICTs. ICT is the tool that enables us to do things 10 times faster and 100 times cheaper, but it doesn’t on its own lead to inclusivity. That’s still up to us.