
We Need a Culture of Evidence-Informed Action in Technology for Social Change

By Guest Writer on November 20, 2017

Stop Pilotitis

As a profession, the technology for social change field is at least a decade old. We can and should be working to professionalize, setting standards and improving practice, and developing a body of evidence on which we can all draw.

But get any group of practitioners together and similar stories will emerge:

  • Challenges in moving from pilots to effectively ‘scaling’ interventions
  • Duplicating effort, funding and building multiple platforms and tools to perform the same functions
  • Technology interventions that are disconnected from the context, real needs and capacities in the target environment, the implementing organization’s existing infrastructure and systems, and the wider field’s existing efforts
  • Conflating the availability of a consumer technology in a market (access), or household-level ownership of a technology (ownership) with meaningful use of the technology (use), leading to exclusion of burdened groups such as women, people with disabilities, and older people from tech-enabled services and accountability mechanisms

These problems persist because technology for social change has a weak culture of evidence and accountability:

  • No requirement or culture of demonstrating appropriateness, or of conducting empirical research, before technology is implemented in a given project
  • Few, if any, evaluations of technology projects, and very few published or shared
  • A culture of reporting by blog post (partly encouraged by frequent funding through pilot, innovation and core funding without the rigor of programmatic funds)
  • A relatively intimidating technical field reducing the intensity of critique of our proposals and reporting by non-technologist colleagues, including donors, compared to more mainstream areas of work
  • No industry-standard criteria for monitoring and evaluation (M&E) of technology projects
  • Scarce funding for M&E of aid, and even less for tech-specific enquiry
  • Lower rates of follow-on or repeated funding for technology to the same organizations or for the same project, in contrast to the strong relationships donors maintain with humanitarian agencies or around particular development issues in particular places

There is evidence that this is starting to change. For example, the Principles for Digital Development propose best practice which could be used as a standard at project level – although at present, implementers are encouraged only to make a corporate commitment to the Principles by adopting them.

The growing M&E technology field meets several times a year at MERLTech conferences – although the focus is usually on using technology for M&E, not how to tease out the contribution tech makes to social change.

Requiring evidence-based working in technology-enabled projects would require overcoming cultural, resource, and technical constraints, some of which are summarized in SIMLab’s Monitoring and Evaluation Framework. But sustained investment in changing the way we work could allow:

  • Improved project-market fit, with appropriate technology being utilised more of the time
  • Better value for money, as tech-enabled social change projects become less risky propositions
  • Publication of results meaning that future projects can iterate on past learning, contributing to a global understanding of what works
  • Systematic sharing of learning allowing collation of knowledge across thematic, geographic, temporal or other dimensions through meta-evaluations and other exercises
  • Increased ownership by target populations through feedback and participatory governance measures
  • Improved impact.

It is possible to build a culture of evidence in a field previously not known for it. The accountability agenda that emerged in humanitarian aid after the catastrophic response to the 1994 Rwandan genocide shows how a culture of self-reflection, improvement, and mutual accountability – however imperfectly realized – can grow from a sector-wide reckoning with failure.

Now, accountability is a given, a requirement on all aid projects, and built into most organizations' daily work. Every three years, ALNAP produces the State of the Humanitarian System report, based in part on shared evaluations. The SPHERE project and the Core Humanitarian Standard set clear benchmarks for quality. Although there is always more to do, the humanitarian sector's progress on evidence and accountability can be a model for us to follow.

For now, though, ICT4D professionals can still operate more on conviction than evidence. Ultimately, the population targeted by our intervention may pay the price, in wasted time or resources, or worse, in actual harms caused by inappropriate and exclusionary practice.

SIMLab will close in 2018. In the months remaining to us, we will try to make meaningful progress towards a better future with the resources we have – we're working with DIAL to finalize openly-licensed Framework approaches to Monitoring and Evaluation, and Context Analysis.

Both are open for consultation RIGHT NOW on our website – head over to our resources page to find links to the Frameworks in Google Doc, ask questions, suggest improvements and get right in there and edit.

But there’s more we’d love to see done. Both Frameworks could be improved well beyond what we’re able to do now, with more design work, improved sample tools and resources, and translations. I’d like to see better partnerships between academic researchers and practitioners in the ICT for Development field.

Most importantly, I’d like to see a repository for practitioner evidence (like evaluations) for tech for social change projects, hosted by an impartial body or jointly by a network of organizations, donor-funded and supported to solicit and manage contributions, and with resources to conduct analysis on the evidence that arrives so that it can become open knowledge.

ALNAP, the humanitarian body I mentioned earlier, does this already. There is a precedent. Can we challenge ourselves to be as generous with our learning as humanitarian agencies working in some of the toughest contexts in the world?

Without this kind of investment in improving our practice we’re just experimenting without learning anything – and that’s not ethical, when the projects we work on affect people’s lives. We are bound by our ethical codes – be they the Digital Principles, human rights, or humanitarian principles – to do better.

By Laura Walker McDonald and originally published as The Evidence Agenda: appealing for rationality in tech for social change



4 Comments to “We Need a Culture of Evidence-Informed Action in Technology for Social Change”

  1. Well said, Laura.

    One of the incredibly frustrating aspects to this is that there already IS a lot more evidence-based robust evaluations etc. than people realise – but 90% of it is from academia. And, despite decades of people saying “oh how can we get academics and practitioners to work together more closely”, I very rarely see this happening…

    In part this is due to different funding-drivers, but I wonder what we, as a ‘social tech community’ could do to bring the two strands of work together… Both have a lot to learn from each other, but so far nobody seems to have worked out the incentives (on an individual researcher/employee level I mean) to make it happen!

  2. Erica Hagen says:

    “A relatively intimidating technical field reducing the intensity of critique of our proposals and reporting by non-technologist colleagues, including donors, compared to more mainstream areas of work” YES. It is as though humanitarian and development practitioners throw up their hands and say yes to any and every tech intervention…or, frown at all of them and stay away, which is not helpful either. Much of the same critical thinking can be applied to ICT4D as it is to other fields, but, because of the (pseudo) market-driven aspect and specialized tech knowledge (perhaps played up by companies building the software), somehow it gets a pass.

    Also, agree with Matt above: there is critical evaluation available. But some of it’s impractical and unhelpful due to its academic nature. I would suggest a look through some of these?
    http://www.makingallvoicescount.org/publication/

    • Agreed!

      I’ve definitely noticed a tendency (beginning to change) on the more market-driven side of practice to embrace tech because “hey its innovative (sic) so it must be good”… and a tendency on the academic/evaluative side to simply highlight the methodological errors in.. well, everything…

      Neither of which are terribly helpful for those at the coal face trying to figure out the most useful ways they can incorporate tech into their work before Acme Corp do it for them in a much less ethical way…!

      But… How do we change it? Other than on a personal level (I try to keep a foot in all camps) – I genuinely don’t even know where to start!!