In recent years, organizations and employees have increasingly been measured not on their stories, but on their numbers. When you introduce your organization, people will always ask about your numbers – What’s your reach? What’s your impact? Meanwhile, funders and senior management are chasing big data – year-over-year increases in reach, impact measurements, geographic comparisons, and more.
The MERL Tech conference is returning on October 3-4, 2016. Suggest session ideas to get on the agenda and register to attend.
To measure these numbers, organizations are increasingly implementing monitoring and evaluation (M&E) frameworks. These frameworks collect large amounts of data and produce insights that help the organization’s employees learn more about their programs. The irony, however, is that the very people this data is supposed to help treat these frameworks as a useless burden, which eventually makes the data itself useless.
Why are monitoring and evaluation frameworks failing?
Today’s M&E frameworks are drawn from the online world. This is a problem because they are being used to measure and evaluate the offline world. Here are some of the things that today’s monitoring and evaluation frameworks are doing wrong:
- Tracking unnecessary information: M&E frameworks often end up capturing a lot of information that is not necessary for a program. For example, if we are conducting an education program that works to improve students’ learning outcomes by providing books, we don’t have to collect information about students’ mid-day meals.
- Capturing data at the wrong intervals: Data can be captured at weekly, monthly, or even quarterly intervals. It’s essential to choose the right interval for each data point. For example, school infrastructure rarely changes within a year, so it shouldn’t be captured every quarter. On the other hand, student learning outcomes can change on a weekly or monthly basis, so it does not make sense to capture that data point only once a year.
- One-way data flows: Often, the people who collect data never see the results of their data or how it contributed to larger program changes. That makes data collection feel like a thankless, pointless task. If people don’t believe that their data has any value, they will not have an incentive to capture good data.
- Unclear balance between metrics and programs: In my interviews with field staff, I have found that field officers often don’t know which is more important: meeting their M&E metrics or doing quality work. Obviously, both are important. However, field officers tend to focus on one and ignore the other.
How can we fix monitoring and evaluation frameworks?
Clearly, monitoring and evaluation frameworks have plenty of potential pitfalls. Here are five easy fixes that will help you make sure your M&E framework is actionable and effective.
- First things first: change the mindset: A fundamental problem with M&E frameworks is that field staff don’t know why M&E is important or how it can improve their organization’s programs. It’s important to show field staff the larger picture – explain exactly how this data will be helpful for them, not just for the funder report.
- Get the data, give the insights: Make sure that any data that you get from the field goes back to the field as insights. People will start valuing the M&E framework once it starts giving insights. These insights do not need to be complex. They can be as simple as “There are 45 new constituents this month, which is 20% better than the previous month”. Once data is turned into clear insights, field officers will realize its importance.
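As a concrete illustration, an insight like the one above can be generated automatically from the raw monthly totals your field staff submit. The short Python sketch below shows one way to do this; the metric name and the numbers are hypothetical placeholders, not part of any particular tool.

```python
# A minimal sketch: turn two monthly totals into a plain-language insight.
# The metric name and numbers below are hypothetical placeholders.

def monthly_insight(metric: str, this_month: int, last_month: int) -> str:
    """Compare this month's total against last month's and phrase it as a sentence."""
    if last_month == 0:
        return f"There are {this_month} {metric} this month (no data for last month)."
    change = (this_month - last_month) / last_month * 100
    direction = "better" if change >= 0 else "worse"
    return (f"There are {this_month} {metric} this month, "
            f"which is {abs(change):.0f}% {direction} than the previous month.")

print(monthly_insight("new constituents", this_month=45, last_month=36))
# -> There are 45 new constituents this month, which is 25% better than the previous month.
```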
- Create a human-centric data collection framework: M&E frameworks are created with data as the focus. That seems logical, since the framework is meant to collect and analyze data. However, humans are ultimately using the framework to input data and gain insights. If the framework isn’t designed around humans – the way they work, how they think, and what they need – it will be useless. To design a human-centric data collection framework, it is important to do the following:
- Design data collection around your field staff. It’s important to create workflows that suit your field staff, and the best people to tell you that are the field staff themselves.
- Optimize data collection around your staff’s field visits and daily workload. If someone goes to the field three times a week, make sure their data collection tasks fit within those three days. If your field staff only visits a particular area once a month, don’t ask them to collect data in that area on a weekly basis.
- Make the data collection form as small as possible by getting rid of information that is repetitive or can be calculated. For example, if you are collecting the number of students and teachers in a school, don’t ask your staff to also collect the student-teacher ratio – it can be calculated. Similarly, if you are measuring students’ learning outcomes on a weekly basis, don’t re-collect each student’s basic information (name, age, grade) every time. Collect each student’s basic information once, then just update each student’s learning outcomes on later visits. (Our Collect tool makes this possible through its monitoring feature. Read more here.) A minimal sketch of this approach follows this list.
- Use a platform that makes M&E happen in real time. Field officers often wonder whether the data that they submitted has errors or is even being used. Ensure that your framework tells your data collectors that data has been received, verified, and used. This will help you gamify data collection to keep your field staff engaged.
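To make the point about calculable and repetitive fields concrete, here is a minimal sketch of what a lean data model could look like, assuming a simple Python representation. The class names and fields are hypothetical illustrations, not part of Collect or any specific tool.

```python
# A minimal sketch of a lean data model: derived values are computed rather than
# collected, static details are stored once, and only the changing outcome is
# re-collected. All names and fields here are hypothetical illustrations.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SchoolVisit:
    school_id: str
    num_students: int
    num_teachers: int

    @property
    def student_teacher_ratio(self) -> float:
        # Calculated, not collected: no need to ask field staff for this.
        return self.num_students / self.num_teachers

@dataclass
class Student:
    # Basic information, collected once at enrollment.
    student_id: str
    name: str
    age: int
    grade: int
    # Only this changing data point is appended on each later visit.
    learning_outcomes: list = field(default_factory=list)

    def record_outcome(self, visit_date: date, score: float) -> None:
        self.learning_outcomes.append((visit_date, score))

# Usage: the weekly form only needs a student ID and a score.
asha = Student(student_id="S-101", name="Asha", age=10, grade=5)
asha.record_outcome(date(2016, 9, 5), score=62.0)
asha.record_outcome(date(2016, 9, 12), score=68.5)
```

With a structure like this, the weekly form shrinks to two fields, and derived numbers like the student-teacher ratio never burden your field staff at all.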
You may also be interested in our Ultimate Guide to Data Collection ebook that has more tips and tricks.
- Iterate. Iterate. Iterate. The most important part of building a good M&E framework is constantly learning from your data and improving your processes.
- Learn from the collected data. If the data you collect is not changing significantly over time, you might want to reduce how often you collect it. If particular fields are being ignored over and over, you might want to get rid of those fields. (A sketch of this kind of field review appears after the tip below.)
- Learn from the field staff. Take regular feedback from field staff to understand what’s working and what’s not. For example, you might have specified that a certain data point be collected early in the month, while your field staff have learned that it makes more sense to collect it at the end of the month. Listen to them and incorporate their feedback regularly.
Tip: Select a tool that lets you change your data collection form while you are collecting data. This means you won’t have to stop the data collection process any time you get feedback.
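Here is a rough sketch of the kind of periodic field review described above, assuming your submissions can be loaded into a pandas DataFrame. The column names and thresholds are hypothetical and would need tuning for your own data.

```python
# A rough sketch of a periodic field review, assuming submissions are loaded into
# a pandas DataFrame with one row per submission. Column names and thresholds
# are hypothetical and should be tuned to your own data.
import pandas as pd

def review_fields(df: pd.DataFrame, blank_threshold=0.5, variation_threshold=0.05):
    """Flag fields that are mostly left blank or that barely change between submissions."""
    findings = {}
    for column in df.columns:
        blank_rate = df[column].isna().mean()
        if blank_rate > blank_threshold:
            findings[column] = f"{blank_rate:.0%} blank - consider dropping this field"
            continue
        # Few distinct values relative to the number of submissions suggests the
        # data is not changing enough to justify frequent collection.
        variation = df[column].nunique(dropna=True) / max(len(df), 1)
        if variation < variation_threshold:
            findings[column] = "rarely changes - consider collecting it less often"
    return findings

# Example with made-up submissions:
submissions = pd.DataFrame({
    "num_toilets": [2, 3, 2, 3] * 25,       # infrastructure: barely changes
    "midday_meal_notes": [None] * 100,      # almost always left blank
    "weekly_test_score": list(range(100)),  # changes constantly: worth keeping
})
print(review_fields(submissions))
```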
- Make the entire process engaging. The final step is to engage everyone in your organization, not just your funders and senior management. Start by figuring out which insights will make sense for the organization’s various employees:
- Create reports that cater to various levels in the organization. The information that is most relevant to a field worker is different from what is most relevant for a program manager. It’s important to understand the outcomes and indicators for various types of people — program heads, field heads, administration, etc.
- Engaging insights will make your employees actually look at your data. Always present comparisons rather than absolute numbers. For example, show the percentage change from last month rather than presenting absolute numbers for each month. Show geospatial comparisons rather than a table that lists data for each geography. The more engaging your data is, the more people will use it.
- Visualize your data to make it engaging. A dry number-filled report will only result in unopened emails and reports. Use maps, bar graphs, pie charts, and other visualizations to make it easier for users to interpret data.
- A/B test your reports to see what employees like more. Try different versions of reports or send them at different frequencies to see what leads to better engagement.
- Use reports during program review sessions. This helps employees get in a habit of referring to reports. It’s important for data to become a part of daily organizational planning, rather than a one-off exercise.
There is no single formula to make the most actionable monitoring and evaluation framework. However, we have seen organizations facing all of these problems and have iterated with them to make actionable, effective M&E frameworks. The key is to ensure that you catch problems as they come up and iterate to solve them.
By Richa Verma and originally published by SocialCops.
And since you’ve read this far, you really should suggest session ideas to get on the MERL Tech 2016 agenda and register to attend now.