
Lessons From a MERL Tech Confessional: It’s All About Assumptions

By Guest Writer on October 31, 2016


It’s been said that good science is the art of getting less wrong over time. Maybe the same principle applies to the use of technology for monitoring, evaluation, research, and learning? That’s exactly what we explored at our MERL Tech 2016 breakout session, “A MERL Tech Confessional: What’s your M&E sin?”.

In a format that fell somewhere between a Fail Festival and a group therapy session, participants – ranging from M&E specialists and academics to entrepreneurs and technologists – exchanged their failures and frustrations from their experiences with MERL Tech.

Our goals were twofold: first, in the spirit of a confessional, to provide a safe space for colleagues to reflect candidly on their own mistakes working with MERL Tech; and second, to develop a shared taxonomy of common threads – or problem areas – that span those experiences.

Deconstructing a confession

What does it take to critically evaluate our failures? The first step is to create a trusting environment. In some cases, mistakes we make in the design and implementation of MERL can have significant effects on our programs’ measurable success, the constituents they’re meant to serve, relationships with partners, and even the potential for future funding.

Sharing these failures with our peers can be a daunting experience. To mitigate this, we first asked participants to reflect internally on their “confession”, defining what their MERL goal was, where things went wrong, and what they would have done differently. Next, participants shared these reflections in small groups, bouncing ideas off one another while growing more comfortable telling their stories of failure.

Surfacing common themes among MERL failures

After the small-group discussions, we opened the floor for each team to share its immediate reactions. Before long, striking similarities emerged among the different confessions. Although each story came with its own context and background, several themes cut across every MERL Tech failure:

  • Varying assumptions among program stakeholders meant that the actual implementation of MERL Tech left some groups less satisfied than others. For example, participants reflected on times when they identified a technology partner to develop a dashboard, without sufficiently articulating precisely what that would entail. Both sides brought their own perceptions and assumptions about the nature and purpose of that technology, causing tension to surface during implementation once it was too late to reevaluate conditions or conduct due diligence.
  • Different definitions of key terms led to a MERL strategy that didn’t necessarily achieve or track what the different partners set out to measure. We heard, for instance, several stories of implementing organizations and the tech firms they partner with each holding different perspectives on what precisely each meant by terms like “dashboard” or “tool”. Similarly, implementers and donors realized too late that their definitions of concepts like “sustainability” or “justice” were not aligned, leading to misguided M&E.
  • Participants also brought up the role of data in MERL Tech interventions as being a contributing factor to their failure. Some spoke of data manipulation by different stakeholders, linked to negative incentives by certain players in the MERL cycle that reward numbers for the sake of quantification, not for the sake of improved learning. Others shared stories of occasions when MERL efforts tracked data about the inputs of a program, rather than about the constituents and their desired outcomes.
  • Finally, the teams identified social pressures as a contributor to some of their MERL Tech sins. Working under tight proposal or implementation deadlines in complex environments, participants acknowledged that it’s sometimes tempting to refrain from interrogating their own MERL strategy in case it would significantly delay or sidetrack the program itself.

What’s the point of a session like this?

Acknowledging and even embracing failure in the MERL Tech community can only be a healthy exercise, for at least three reasons. First, by identifying common threads of challenges, we are better positioned to develop and share strategies to mitigate future mistakes. At the MERL Tech Confessional session, for example, participants suggested that failures associated with assumptions and definitions could be avoided by having discussions early and often with potential technology providers to clarify doubts and terms.

Second, sessions like these help different stakeholders in the MERL Tech ecosystem become better equipped to address each other’s frustrations. In the room at the MERL Tech Confessional, for instance, was a representative of an M&E technology firm who was there simply to hear directly from implementers about their challenges working with tech providers. Through this session, they and their company will be better equipped to develop more effective MERL interventions.

Finally, the MERL Tech Confessional reminded us of the importance of pausing to reflect on our MERL failures in a constructive and trusted environment. The digital development community has already witnessed the success of events like Fail Festivals, but this session provided a more intimate setting devoted exclusively to the MERL context.

Hopefully, at MERL Tech 2017, we’ll have a similar space to discuss how we’ve grown “less wrong” together over the previous year.

By Samhir Vasdev of IREX.

Filed Under: Featured, Management

