
Does Technology Make Monitoring and Evaluation More or Less Efficient?

By Guest Writer on June 29, 2015


“The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.”

This quote, attributed to Bill Gates, speaks to how technology complements good systems rather than replacing them. During break-out sessions at the recent ICTforAg Conference, we discussed why this is also true of the relationship between information and communication technologies (ICT) and monitoring and evaluation (M&E). ICTs, most notably smartphones and tablets, are often used as a more efficient means of collecting and analyzing project data. Yet although M&E is often seen as a driver of ICT adoption, many organizations still struggle to realize the full potential of ICT in M&E.

It’s Not Just About Technology

Panelists Ben Jacques-Leslie from the Abdul Latif Jameel Poverty Action Lab (J-PAL), Katherine Scaife Diaz from TechnoServe, and Michael Reiter from FINTRAC presented on their organizations' histories of integrating ICT into M&E. Each presenter touched on the imperative to work better and more efficiently, an imperative that has driven interest in using ICTs to make data collection more efficient.

Technologies such as satellite imagery have been piloted to measure yields more accurately and efficiently, while current point-of-sale technology could improve the accuracy of input cost estimates and commodity sales prices.
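To make the satellite example concrete: most imagery-based yield estimates start from a vegetation index such as NDVI, computed per pixel from red and near-infrared reflectance. The sketch below uses invented values purely for illustration; real yield models calibrate indices like this against ground-truth harvest data.

    # A minimal sketch of the arithmetic behind satellite-based yield proxies:
    # NDVI, the normalized difference vegetation index. Values are invented.
    import numpy as np

    red = np.array([[0.10, 0.12],
                    [0.11, 0.30]])   # red-band reflectance per pixel
    nir = np.array([[0.60, 0.58],
                    [0.62, 0.32]])   # near-infrared reflectance per pixel

    ndvi = (nir - red) / (nir + red)  # near 1 = dense healthy crop; near 0 = bare soil
    print(ndvi.round(2))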

But the question that remained on everyone's minds was: are efficiency and confidence in data collection achieved through technology, or are they more a function of sound M&E systems? Mr. Jacques-Leslie argued that there is currently little evidence that technology actually improves data quality, and that this is something J-PAL intends to study.

Going back to that Gates quote, which Katherine Scaife Diaz shared at the beginning of the session, no amount of technology can work around inefficient M&E systems. Technology applied to an inefficient system will only magnify the problem.

Data Collection Methods & The Human Element

Although the disdain for paper-based data collection was palpable in the room, I saw many heads nodding when a participant noted that paper might still be the best data collection method in places that simply lack the infrastructure and experience to support ICTs. If technology use is appropriate, and data collection becomes more efficient as a result, can decision-makers really make full use of all the data? And in this scenario, what is truly efficient: spending less time collecting the same amount of data, or collecting even more data with the newfound time?

If it is the latter, does that mean it's time for the development sector to embrace advanced algorithms that turn development data into simple triggers for decision-makers, removing much of the human analysis element? Does the sector have sufficient trust in the current quality of its data to move in that direction?
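As a toy illustration of what such a trigger could look like, consider a rule that flags districts for follow-up when reported yields drop below a cut-off. Everything here (field names, threshold, data) is hypothetical:

    # A toy decision trigger: flag districts whose reported average yield
    # falls below a cut-off. Field names, data, and threshold are invented.
    records = [
        {"district": "North", "avg_yield_kg_ha": 1450},
        {"district": "South", "avg_yield_kg_ha": 910},
        {"district": "East", "avg_yield_kg_ha": 1320},
    ]

    YIELD_ALERT_THRESHOLD = 1000  # kg/ha, an illustrative cut-off only

    def yield_alerts(rows, threshold=YIELD_ALERT_THRESHOLD):
        """Return districts whose average yield falls below the threshold."""
        return [r["district"] for r in rows if r["avg_yield_kg_ha"] < threshold]

    print(yield_alerts(records))  # -> ['South']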

Data quality is of utmost importance to M&E, but ensuring it continues to be a struggle within the sector. The group explored many technology-based ideas for increasing data quality and improving participant tracking, including biometric data collection (iris scanning, fingerprint scanning) and technology-enabled systems that identify outlier data.
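As a minimal sketch of the outlier idea (not any particular tool's method), a system could flag submissions that fall outside the familiar 1.5×IQR fences:

    # A minimal outlier check using the 1.5 x IQR rule; data are invented.
    import statistics

    def flag_outliers(values, k=1.5):
        """Flag values outside [Q1 - k*IQR, Q3 + k*IQR]."""
        q1, _, q3 = statistics.quantiles(values, n=4)
        iqr = q3 - q1
        return [v for v in values if v < q1 - k * iqr or v > q3 + k * iqr]

    plot_yields = [1.9, 2.1, 2.0, 2.3, 1.8, 9.5, 2.2]  # tonnes/ha; 9.5 looks like a typo
    print(flag_outliers(plot_yields))  # -> [9.5]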

Unfortunately, many of these solutions have yet to be developed, are cost-prohibitive, or are fraught with data privacy concerns. As a result, the human element surfaced again. Many of us agreed that data is only as good as the skills of the person collecting it. Some enumerators have higher capacity than others, yet system-wide data quality checks rarely account for this. Could data quality be improved through reputation-based verification systems that focus quality checks on lower-capacity enumerators while giving higher-capacity enumerators more leeway?
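To make the reputation idea concrete, here is a minimal sketch in which each enumerator carries a quality score between 0 and 1, and lower scores draw heavier back-check sampling. All names, scores, and rates are hypothetical:

    # A minimal sketch of reputation-weighted back-checking: the lower an
    # enumerator's quality score, the larger the share of their submissions
    # selected for verification. Scores and rates are invented.
    import random

    BASE_CHECK_RATE = 0.05  # minimum share of anyone's records to verify
    MAX_CHECK_RATE = 0.50   # share verified for the lowest-scoring enumerator

    def check_rate(score):
        """Map a quality score in [0, 1] to a back-check sampling rate."""
        return MAX_CHECK_RATE - score * (MAX_CHECK_RATE - BASE_CHECK_RATE)

    def select_for_backcheck(records, scores):
        """Randomly sample records for verification, weighted by enumerator score."""
        return [r for r in records
                if random.random() < check_rate(scores[r["enumerator"]])]

    scores = {"A": 0.95, "B": 0.40}  # A is high-capacity; B gets closer scrutiny
    records = [{"id": i, "enumerator": "A" if i % 2 else "B"} for i in range(10)]
    print(select_for_backcheck(records, scores))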

Unquestionably, ICT has the potential to transform the efficiency and usefulness of data collection and to boost organizational confidence in data collected through M&E systems. But it also has the potential to create inefficiencies when applied to inefficient operations where human capacity and technological infrastructure are low.

Garrett Schiche is a Monitoring and Evaluation Technical Advisor at Lutheran World Relief.

Filed Under: Agriculture

Written by
This Guest Post is an ICTworks community knowledge-sharing effort. We actively solicit original content and search for and re-publish quality ICT-related posts we find online. Please suggest a post (even your own) to add to our collective insight.

One Comment to “Does Technology Make Monitoring and Evaluation More or Less Efficient?”

  1. Thanks for the thoughtful post.

    I just wanted to push back a bit on the notion that electronic methods of data collection haven’t necessarily improved data quality over older, paper-based approaches. In my own “horse races” a few years back, the quality gains from going electronic were dramatic — simply by virtue of being able to program simple consistency checks, scan barcodes to grab lengthy ID numbers, and use other cheap-and-easy methods of upping quality. Today, our SurveyCTO software has armed data-collectors with other cheap and easy methods of ensuring quality, including random audio-recorded audits, automatic detection of suspicious behavior (like moving too quickly through a survey), and more. The technology is improving all the time, and it needn’t be expensive or complicated to dramatically improve one’s data quality. (See http://www.surveycto.com/product/quality-data.html for a list of specific data-quality methods supported in SurveyCTO.)

    I’ve been discussing with the folks at J-PAL the possibility of conducting a proper RCT to evaluate different methods of ensuring data quality. It’s something that’s long overdue: real quantitative evidence on the returns to different methods of QC. I’ll take your post as, in part, a vote in favor of getting that going ASAP.

    Thanks again for your post.