M&E is lame… As is MEL, MERL, MEAL, PME, and PM&E.
Why do we do M&E? To improve our work, surely. But how do we improve? By learning what works and what doesn’t, and using that knowledge to improve our practice. By making learning the top priority and putting it first, we arrive at a perhaps more apt acronym – LAME (“Learning And M&E”).
But what might happen when we put learning front and center? We can better understand this by tackling some of the big beasts of the M&E world:
1. We need to throw away our logframes
Logframes suck. In some very specific engineering-type cases they’re great, but most of the time they try to impose a linear cause-and-effect mentality onto a highly complex system. They can’t do this, of course, so they get it wrong and we end up underperforming or delivering the wrong things for years. Or, more likely, simply continually justifying to our funders why – even though we did what we said we would – the ‘exciting and innovative ICT project’ hasn’t had the impact we thought it would.
How about if, instead, we had a range of different, flexible and learning-oriented ways of capturing our work, plans, successes and failures?
For a more nuanced view of how the sector sees logframes, see INTRAC’s excellent report, or read this article suggesting ways to adapt logframes to become more participatory.
2. Theories of Change just make things worse!
As a thinking tool, theory of change is a great exercise to go through – it forces us to think through WHY we are doing what we do. Fantastic! As an M&E tool, though, it can cause as many problems as it solves. Often what happens is we take this – let’s be honest – usually pretty simplistic thinking based on anecdotal evidence, and use it to define a logframe. Now the theory of change we made up in a workshop (maybe even with a couple of stakeholders – but sadly, usually not!) has become the central justification for a linear approach to a complex system, giving it an added layer of justification rather than challenging it.
How about if, instead of this top-down advance planning, we accept that we often don’t know in advance what will work – so we need to steer projects as we go, based on the best guesses we can make, i.e. the guesses of those directly involved and affected.
Some interesting debates on Theory of Change can be found in Duncan Green’s piece Logframes on Steroids and in reflections on ToCs from LSE.
3. Learning implies CHANGE…
Much of the time, evaluation is a discrete exercise, conducted at the end of a project, or at the end of each year or each phase. And the majority of these evaluations are conducted primarily to provide information to funders – funders who, by and large, aren’t that interested in hearing about learning and the changes it prompts. With a few exceptions, most simply want to know whether you delivered what you said you would, and why it hasn’t had the impact you said it would (it rarely does… see (1) above!).
Can we turn evaluation on its head and put learning and continual improvement first, and upward reporting second..?
For examples of how this can work, see recent work in Development on Adaptive Programming, and work spanning back decades in other sectors such as Agile and Lean Startup.
4. What is the minimum data we need to collect..? M&E the Cisco way..?
With the rise of big and open data there is a tendency by many in the tech world to collect as much data as possible. This is despite advice from the privacy and transparency movements to “collect only the minimum data required to accomplish programmatic aims” (more on this in Oxfam’s great new Responsible Data Policy).
There are some interesting parallels in the way data is collected in some parts of the commercial sector. And while I am emphatically not one of those people who glorifies the private sector, in this case there are some interesting points to learn from.
I used to work at Cisco. This never felt particularly relevant to international development until I got brooding over something recently. We actually did tons of M&E at Cisco. It wasn’t called M&E, of course – not in the world of international corporations; instead it went by the equally unexciting name of “Customer Satisfaction”.
At Cisco, everyone is a customer and literally everything is measured as customer satisfaction. It’s unbelievably simple: everything is measured out of 5 – training, partners, staff. Everything. What does this have to do with M&E for the aid & development world, though?
Well… if we’re serious about putting learning and continual improvement at the centre of our M&E work, then what are we actually interested in? For most projects we only really want to know one of two things: is it failing, or is it excelling – and why? The vast majority of projects do neither – they plod along doing “OK”; not innovating, not making great leaps forward in our knowledge and understanding, but doing good work and not wasting money.
So why are we wasting millions (billions) of pounds measuring the hell out of every aspect of these projects, when delving into this data nine times out of ten tells us nothing very interesting, and certainly nothing we didn’t already know?
We learn from failure. If a project is doing badly, it needs emergency attention to get it back on track, or it needs stopping to ensure more harm isn’t done, or it means it was designed badly and valuable learning can be gleaned from exploring this in depth. And we learn from success. If a project is doing amazingly well, maybe it truly is innovating, breaking new ground, working in new and interesting ways, tackling new problems, or having a significant impact.
A simple and pervasive satisfaction score provides a quick, cheap and easy way to identify these successes and failures. If you ask every stakeholder to rate every engagement on a standard scale, it takes them seconds to do; it is quick and easy to collate the data (no need for pages and pages of spreadsheets); and it can be done simply with pen and paper or mobile phones – no need for complex interfaces. It creates an alert system that rapidly and easily highlights where further intervention is needed or where pockets of excellence are happening.
This is not a new idea (the similarity between beneficiary feedback and customer satisfaction has been observed by others), but cheap and pervasive technology means it may be an idea whose time has come.
Numbers are notoriously bad at telling you why something happened, but an in-depth analysis is prohibitively expensive and time-consuming. So use the numbers to target where to spend the time and money figuring out the details.
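To make this concrete, here is a minimal sketch of what such an alert system could look like. The project names, scores and thresholds are all illustrative assumptions – not anything Cisco or any actual development programme uses:

```python
# A minimal sketch of a satisfaction-score "alert system": collect ratings
# out of 5 for each project, then flag only the outliers for in-depth,
# qualitative follow-up. All thresholds and data here are assumptions.
from statistics import mean

# Hypothetical ratings: every stakeholder scores every engagement out of 5.
ratings = {
    "water-project": [4, 3, 4, 3, 4],   # plodding along doing "OK"
    "ict-training":  [1, 2, 1, 2, 2],   # failing: needs emergency attention
    "radio-project": [5, 5, 4, 5, 5],   # excelling: worth studying in depth
}

FAIL_BELOW = 2.5    # assumed threshold for "failing"
EXCEL_ABOVE = 4.5   # assumed threshold for "excelling"

for project, scores in ratings.items():
    avg = mean(scores)
    if avg < FAIL_BELOW:
        print(f"ALERT {project}: average {avg:.1f} - dig into the failure, learn why")
    elif avg > EXCEL_ABOVE:
        print(f"STAR  {project}: average {avg:.1f} - dig into the success, learn why")
    # Everything in between gets no expensive deep-dive evaluation.
```

The point is the triage, not the code: the cheap, pervasive numbers do nothing except tell you where the expensive, qualitative “why” work should go.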
So to conclude…
- M&E isn’t really lame, but maybe it should be! L stands for Learning, and it should always be the first and foremost reason for doing M&E in the first place
- Start with the funders – while we have to collect loads of data and report against logframes, there is little point in developing innovative new ways of doing M&E. But if we can promote those few funders who are open to different ways of working, maybe the others will follow suit
- If there is a funder/donor out there who fancies experimenting with this approach, and an NGO/project who finds the idea appealing, it would be great to explore! Aptivate could definitely build the system into its new open-source Kashana platform to give it a try.
- Collect as little data as possible – streamline it so you’re collecting next to nothing; you might find you can target more qualitative interventions well and actually learn something.
- Learn. Change. Learn some more. Repeat this mantra daily.
Matt Haikin is a participatory ICT4D practitioner, consultant and researcher working at Aptivate (the digital agency for the aid and development sector) in the UK. He recently led the development of the World Bank’s Guide to Evaluating Digital Citizen Engagement, is co-chair of Bond’s Technology for Development group, and is currently on sabbatical working with grassroots participatory technology projects across Africa, Asia and Latin America (follow his travels on his blog at www.matthaikin.com – and if you know of any interesting participatory technology projects which could use his skills, please contact him directly!).
For queries about his and Aptivate’s work, please contact [email protected] – and if you’d like a demo of www.Kashana.org just ask!
Hi
I completely appreciate the sentiment being conveyed here. Learning DOES imply change and I think a shift toward that emphasis is inevitable. I hope, however, that we never call it “LAME” as that’s one of those words which is in common parlance but might not be appreciated by those with disabilities. “LM&E” will hopefully suffice.
Linda
Linda, the title was meant to be deliberately provocative as a way to grab people’s attention – I’d be horrified if anyone literally used this acronym!! 🙂
Great to read this! IICD (International Institute for Communication and Development) (www.iiicd.org) did that for many years, gathering very valuable information and knowledge from the beneficiaries of ICT solutions in developing countries.
Until our donors told us that this kind of M&E was not valid for securing funding 🙁 (indeed, it was done for learning, not for checking and control). We even had online questionnaires to gather information, and we complemented them with F2F knowledge-sharing events to understand some issues better.