
Lessons Learned Measuring, Evaluating, and Learning with Big Data

By Wayan Vota on July 9, 2020


The world today is more connected, interdependent, and data-rich than at any time in human history. Yet we increasingly see populations divided into those who benefit from the policies, products, and services driven by advances in data science, and those who are left behind or actively harmed.

At its best, the global development sector creates tangible improvements in people’s lives, with evaluation among the sector’s most critical tools for knowing what is and is not working. By taking advantage of new thinking emerging from the field of data science, development sector practitioners and professionals can make their toolkit even more powerful.

The benefits – and challenges – of big data are now spreading rapidly throughout the world, increasingly reaching the poorest and most inaccessible areas and, in turn, revolutionizing the way global challenges can be solved. This includes:

  • Providing new opportunities to understand and track problems in society,
  • Designing and scaling new solutions to address them,
  • Enabling evaluators to move more rapidly in measuring and assessing the impact that development programs have on poor and vulnerable people.

These themes are explored in the new Rockefeller Foundation report, Measuring Results and Impact in the Age of Big Data: the Nexus of Evaluation, Analytics, and Digital Technology. Within this report are two sections, highlighted below: lessons learned and recommendations for future actions.

1. Big Data MERL Lessons Learned

The new, complex, and unfamiliar big data ecosystem

Many policy makers, development agencies, and evaluators are still unfamiliar with the complex nature of the big data ecosystem. The many ways in which data are generated, transformed, marketed, used, and regulated are completely different from those of the much simpler and more familiar evaluation ecosystem.

The high profile of big data in the media – as an almost magical way to solve world problems but also as a source of threats, fraud, and invasion of privacy – further complicates obtaining an objective understanding.

From the perspective of the evaluator, big data can seem unprofessional, because it does not conform to conventional evaluation practice, and also threatening, because this exciting new tool may begin to compete for evaluation budgets and for authority in the assessment of development programs. This lack of understanding makes it more difficult to develop a dialog on how to promote the integration of data science and evaluation.

The benefits of integrating data science and evaluation

Continued integration of data science and evaluation offers many benefits for both professions as well as for the promotion of social good. From the perspective of evaluators, access to new sources of data and to the new analytical approaches helps resolve many of the challenges discussed in Section 6.1.

Many of these benefits relate to the economical and rapid access to a wide new range of data sources, and also to an escape from the constraints imposed by the small sample sizes with which evaluators frequently have to work. With convergence, it becomes possible to incorporate contextual variables, access longitudinal data sets, provide more robust estimates of baseline conditions when using retrospective evaluations, and measure processes and behavioral change.

It also becomes easier to identify and include vulnerable and difficult-to-reach groups in evaluations – thus reducing an important cause of selection bias. Data visualization also makes it easier to disseminate findings to a wider audience and in a user-friendly manner.

Once evaluators become more familiar with new tools of data analytics, it will become possible to conduct more sophisticated analyses by working with many more variables and conducting dynamic analysis over time. These new tools have great potential for the evaluation of complex programs, which are particularly difficult to model and evaluate with conventional evaluation tools.

Once the two professions begin to work more closely, it will also become possible to integrate predictive analytics with experimental designs and mixed methods, so as to strengthen and broaden the understanding of causality.
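The report stops at the level of principle here, but a minimal sketch can suggest what such an integration might look like in practice. The example below is purely illustrative and is not a method prescribed by the report: the data, the variable names (mobile_activity, night_lights), and the model choice are all invented for the example. The idea is that a predictive model trained on "big data" baseline covariates supplies an adjustment variable for a randomized experiment, so the causal claim still rests on randomization while the prediction tightens the estimate.

```python
# Illustrative sketch only: synthetic data, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000

# Simulated household data: baseline covariates from routine "big data" sources
# plus a randomly assigned treatment indicator.
df = pd.DataFrame({
    "mobile_activity": rng.gamma(2.0, 1.5, n),
    "night_lights": rng.normal(0.0, 1.0, n),
    "treated": rng.integers(0, 2, n),
})
df["outcome"] = (0.8 * df["mobile_activity"]
                 + 0.4 * df["night_lights"]
                 + 0.5 * df["treated"]          # true effect used to simulate
                 + rng.normal(0.0, 1.0, n))

# Step 1: predictive model of the outcome, fitted on control units only so the
# treatment effect does not leak into the prediction.
features = ["mobile_activity", "night_lights"]
controls = df[df["treated"] == 0]
model = GradientBoostingRegressor().fit(controls[features], controls["outcome"])
df["predicted_baseline"] = model.predict(df[features])

# Step 2: the randomized design carries the causal claim; the prediction is an
# adjustment covariate that reduces the variance of the estimate.
fit = smf.ols("outcome ~ treated + predicted_baseline", data=df).fit()
print(fit.params["treated"], fit.bse["treated"])
```

The design choice worth noting is the division of labor: the data-science component does the prediction, while the experimental design, and the evaluator's theory of change, continues to carry the burden of causal interpretation.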

From the perspective of the data scientist, closer cooperation with evaluators can help address a number of the perceived weaknesses of many widely used approaches that were originally developed for the assessment of commercial activities such as online marketing. Theory-based evaluation can potentially address some of the limitations of data mining by providing clearer guidelines on how to define key evaluation questions and a framework for interpreting findings.

Evaluators also have detailed procedures for assessing the quality of data and for assessing construct validity. Poor data quality is often a major issue for data science, and it is often not fully addressed by data scientists, whom evaluators perceive as having a mantra that all data is bad and biased. Mixed methods and the complementary strategies of triangulation have the potential to broaden and deepen the interpretation of findings.

On a more philosophical or ideological level, many evaluators are concerned with issues such as social justice, empowerment, and equity, while many – but certainly not all – data scientists do not perceive the need for, or value of, incorporating values into their analyses.

Many development evaluators assume that the design and implementation of programs often include intended or unintended biases against the poor and minorities – an assumption which indicates the potential benefit that could come from a value orientation. A related assumption is that many of the evaluation datasets may exclude important sectors of the population, usually the poorest.

Consequently, many evaluators will seek to assess the adequacy and inclusiveness of the data they are using. In contrast, many data scientists are not skeptical about their data, or they believe that machine learning can teach the computer to identify and correct for these kinds of limitations.

Some evaluators will argue that this will only happen if the researchers’ experience in the field makes them aware of this potential limitation and of the political, organizational, and economic reasons why these gaps may occur and how they may be concealed. This skeptical approach has already proven useful in assessing some of the social exclusion biases in some of the widely used algorithms mentioned in Section 2.4.
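One concrete way an evaluator can act on this skepticism is a simple coverage audit: compare the composition of a "big data" source against an external population benchmark such as a census, and flag the groups the source under-represents. The sketch below is a minimal, hypothetical illustration, not a procedure from the report; the group names and shares are invented for the example.

```python
# Illustrative coverage audit: hypothetical group names and shares.
import pandas as pd

# Share of each group in the data source (e.g. mobile-phone records).
records = pd.Series({"urban_nonpoor": 0.55, "urban_poor": 0.20,
                     "rural_nonpoor": 0.18, "rural_poor": 0.07})

# Share of each group in the reference population (e.g. census or household survey).
census = pd.Series({"urban_nonpoor": 0.30, "urban_poor": 0.20,
                    "rural_nonpoor": 0.25, "rural_poor": 0.25})

audit = pd.DataFrame({"data_share": records, "population_share": census})
audit["coverage_ratio"] = audit["data_share"] / audit["population_share"]

# Groups with a low coverage ratio are under-represented and need either extra
# data collection or explicit caveats before the source is used in an evaluation.
print(audit.sort_values("coverage_ratio"))
print(audit[audit["coverage_ratio"] < 0.5].index.tolist())
```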

The requirements for integration

Unfortunately, integration of the two disciplines does not occur automatically. In fact, experience has shown there are many situations in which it has not occurred, and a number of factors can militate against integration. Several conditions are necessary for integration to occur, including a conducive policy environment, data access, access to computing facilities and technical expertise, and appropriate organizational structures.

  • Conducive policy environment. Policy and regulatory requirements may be necessary to permit or promote convergence. These may include regulations concerning access to public data, privacy and confidentiality regulations, and rules concerning issues such as algorithmic transparency. In many countries that have large stocks of survey data stored in sectoral or geographic silos, a major investment may be required to create accessible data platforms so that data from different sources can be combined and accessed. In some countries, governments consider their data confidential, or are reluctant to share it with civil society and commercial agencies. In these cases, a fundamental change in attitudes towards the purpose and use of publicly generated data may be required.
  • Data access. Even assuming a more conducive policy environment, access to many kinds of data can be expensive or difficult. In many cases, only a few large and influential government, international, academic, or commercial institutions may have access to important data sources such as social media, ATM and phone records, or satellite images. There are also proprietary, reputational, privacy, and sensitivity issues affecting data access.
  • Access to computing facilities and technical analysis expertise. While some kinds of data, such as social media data from Facebook and Twitter, may have been processed to make them easily accessible to the general public, other kinds of data, such as satellite images or phone records, may require access to large computing facilities. Many kinds of data analytics may also require access to large computing facilities or to specialized analytical expertise. These are all considerations that may significantly limit access and use.
  • Organizational structures that promote integration. In many development agencies, there are few links between the data center and the evaluation office, and supporting evaluation may not be part of the data center's mandate. Similarly, the evaluation office may not be familiar with the work of the data center. Effective coordination between the two offices is essential to the integration of data science and evaluation. It is essential for the collaboration to be institutionalized, with regular meetings, sharing of plans, and perhaps joint budgets and joint training for some activities. Management should also identify pilot programs where the two offices can assess the value of collaboration.

Challenges and concerns

Facilitating convergence will require a number of challenges and concerns to be addressed. The following lists some of the most important steps to take.

  • Determine who controls big data, how it is used, and who has access, and identify the barriers to greater access
  • Recognize that big data has the potential to empower poor and vulnerable groups, and to be used to hold government and other powerful groups to account. There is already extensive evidence that big data can be used by governments, donor agencies, and other powerful public and private groups to strengthen top-down control. Big data can be used “extractively” to obtain information on and about different groups and to make decisions on what services they need, without having to “waste time and money” going to affected communities to consult with them. However, citizens are often not even aware that this information is being collected about them.
  • Address privacy and security. Privacy and security are increasingly recognized as important and complex issues which the public and many development agencies do not fully understand and are not able to fully address.
  • Avoid economic and ethnic bias. Economic and ethnic biases are built into many widely used algorithms, and the incomplete coverage of many big data information sets often excludes the poorest and most vulnerable groups. While the low cost and speed with which data can be collected makes it possible to overcome many of the conventional evaluation challenges to including remote or difficult-to-reach groups, other biases related to the nature of big data have to be addressed.

2. How to Move MERL Tech Forward

We have argued that big data and data analytics have a demonstrated potential in the design, analysis, and use of development evaluations. A wide range of tools, in use in the private sector for at least a decade, is already being used by some development agencies for planning, research, and emergency programs.

However, to date, most of these methods have not been widely used in program evaluation. We also discussed some of the reasons why integration has been slower among evaluators. The following looks at some possible steps or actions that have potential to promote and facilitate the integration of big data and development evaluation.

Building bridges

  • Strengthen organizational structures. Large agencies that have both an evaluation office and a data development office should strengthen the linkages between the two. Support to evaluation activities should be included in the mandate of existing data centers, and mechanisms for cooperation should be clearly defined. These might include: attending each other’s management or operational meetings; involving the data center in the planning of evaluations; and involving the evaluation office in the discussions of the data center work program and the kinds of databases they will generate or integrate.
  • Identify opportunities for pilot collaborative activities. Collaboration on selected evaluation programs should be considered following a careful assessment of the value added, or of the case for expanding collaboration. Evaluation staff could reciprocate by using their expertise in assessing data quality to help strengthen the quality of the data centers' data.
  • Provide analytical support to selected evaluations. Opportunities should be identified to apply data analytical techniques to the analysis of selected evaluations.
  • Collaborate on the creation of integrated databases. Many potentially useful databases, available within an agency or organization, or from its country partners, are not utilized for evaluations because they have never been linked and integrated. The tools for creating these integrated databases are well understood and tested, and could be a practical way to strengthen evaluation capacity (a minimal sketch of this kind of linking follows this list). Integrating databases could also be considered for a few carefully selected larger-scale operations in countries that have large volumes of under-utilized survey data from different sectors, which could be combined into an extremely useful resource for many evaluations and many different research agencies. India is often cited as one example of a country with huge under-utilized data potential, and Box 4's example of the program to combat human trafficking in the Philippines illustrates how previously untapped data could be integrated into a single data platform and used effectively. The case study of the Broward County Youth Protection Program in Florida (Box 5) illustrates that similar opportunities exist in countries such as the United States.
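As a rough illustration of the kind of linking described in the last point, the sketch below joins three hypothetical sectoral datasets on a shared district identifier. All file contents, keys, and figures are invented for the example; in practice this step sits on top of much harder work on common identifiers, data-sharing agreements, and privacy protections.

```python
# Minimal sketch of linking siloed survey datasets into one analysis table.
# District IDs, columns, and values are hypothetical.
import pandas as pd

# Datasets held by different sectoral agencies, each keyed by district.
health = pd.DataFrame({"district_id": [101, 102, 103],
                       "u5_mortality": [48, 62, 39]})
education = pd.DataFrame({"district_id": [101, 102, 104],
                          "primary_completion": [0.81, 0.64, 0.90]})
agriculture = pd.DataFrame({"district_id": [101, 103, 104],
                            "maize_yield_t_ha": [1.9, 2.4, 1.2]})

# Outer joins keep every district that appears in any source, which makes the
# gaps between silos visible instead of silently dropping them.
merged = (health
          .merge(education, on="district_id", how="outer")
          .merge(agriculture, on="district_id", how="outer"))

print(merged)
print(merged.isna().sum())  # where the silos fail to overlap
```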

Integrated evaluation and research capacity development

Developing a common set of tools and approaches to the evaluation of development programs is essential for building the base for collaboration. At present, common understanding is largely lacking, as many evaluators are not familiar with the sources of big data or the analytic approaches, and, similarly, many data scientists do not use, and often are not familiar with, evaluation tools and approaches. Promoting the base of common understanding requires the incorporation of big data approaches into the training curriculum of evaluators and vice versa. This, in turn, requires collaboration to develop the common curriculum through:

  • Setting up workshops, conferences, and other forms of exchange to identify common approaches, areas of perceived disagreement, and practical tools that can be applied in the field
  • Inviting data scientists to contribute to evaluation journals (particularly on-line journals) and conferences, and vice versa for data science journals and conferences
  • Drawing lessons from pilot collaboration to assess what works and identify any barriers
  • Developing capacity through training for new professionals and for staff at all levels of experience
  • Organizing exchanges and on-the-job training
  • Including a data scientist in the evaluation team – if the organization is sufficiently large – either as regular staff or as a consultant.

Critical need for landscaping research

Systematic documentation of how widely big data is used by evaluators is currently lacking, along with an understanding of what has worked well and what the challenges are. At present, the few studies that have been conducted, together with mainly anecdotal evidence, suggest a low level of data science utilization by evaluators, who also voice significant skepticism about big data approaches.

To expand from this level of anecdotal evidence, there is an urgent need to conduct basic landscaping research which calls for:

  • Documenting how effectively evaluation and data development centers coordinate or work together in different organizations and sectors
  • Filling in knowledge gaps on the levels of consensus, differences, and tensions in different sectors and organizations
  • Producing case studies on examples of cooperation.

Critical roles for funding and grant-making agencies

Given the great potential for convergence, combined with the slow rate of progress on the ground, funding agencies can play a critical role in creating the space for dialog and collaboration and in providing seed funding in critical areas. Funding can provide the critical impetus in all of the steps for moving forward: bridge building, capacity development, landscaping research, and pilot initiatives to implement convergence on the ground.

Filed Under: Management

Written by
Wayan Vota co-founded ICTworks. He also co-founded Technology Salon, MERL Tech, ICTforAg, ICT4Djobs, ICT4Drinks, JadedAid, Kurante, OLPC News and a few other things. Opinions expressed here are his own and do not reflect the position of his employer, any of its entities, or any ICTWorks sponsor.

2 Comments to “Lessons Learned Measuring, Evaluating, and Learning with Big Data”

  1. Ehud Gelb says:

    There is a danger that conclusions – right or wrong – based on big data will neutralize useful traditions, relevant experience and enforce “over confidence”. Considering this comment will enrich BD benefits.

  2. Jitimu says:

    Embracing and effectively utilizing Big Data technology can have a positive impact on food security in Africa if used well.