
Evaluating Tech Tools – From Pilot Projects to Entire Sectors

By Guest Writer on February 11, 2015


ICT4D and M4D as fields are both exciting and clumsy, still in their infancy and most consistent in how quickly they are changing. The M&E Tech Deep Dive in New York in early October 2014 surfaced familiar conversations around monitoring and evaluation, as well as around ICT4D. Beyond the use of ICTs for M&E, though, finding common ground on the evaluation of ICTs themselves was challenging.

There is wide consensus that technology is not a silver bullet, and further, that it simply can’t be separated from the context in which it operates or is applied. How then, can we think about monitoring and evaluating the role of a particular ICT or digital platform in the context of a larger initiative?

To Evaluate or Accept?

The Deep Dive affirmed that one organization’s innovation is another organization’s absence of evidence. In other words, participants from different sectors and backgrounds had similar questions yet seemed to be speaking different languages. Two participants summed up the range of sentiments well:

  • The first commenter conveyed that while ICTs (including mobile) are put forward to simplify and transform, they require a lot of work, and can be far more complex and costly (both in ethical and material terms) than the proposed benefits they offer. We need evaluation!
  • The second commenter highlighted that the global shift toward digital is increasingly inevitable, in some form or another, and that investing significant organizational resources to prove that ICTs are useful may seem like a futile exercise, with possibly high costs, to affirm the obvious. Is evaluation really worthwhile?

Appropriate uses of technology are found somewhere between trial-and-error and a growing, albeit very incomplete, repository of experiences and evidence. There is a clear bias in the stories and experiences that are circulated (and thus those that get noticed and counted): toward organizations that are well resourced enough to synthesize and publish their findings, toward individuals who leverage social media, and toward those who attend global conferences.

These groups’ stories and experiences often drive, and even become, the conversation. Those without the resources for visibility, though with important insights and perspectives on the same applications, are often left out of the conversation entirely.

There’s a lot we don’t know about ICT4D. There are many voices that remain peripheral, or altogether invisible, in the conversation. What we do know is that there are big differences in thinking about ICTs at the project level and thinking about them more widely as part of an evolving sector. We also know that all programs (whether in the public, multilateral or non-governmental sectors) involve power dynamics and decision-making challenges during organizational planning and program execution.

As we continue listening to, learning about, and engaging with various approaches to M&E of ICT for development, here’s a summary of what we discussed in one session at M&E Tech NYC.

Separate monitoring of ICT tools/platforms into three phases:

Applying a tool or platform in ‘development’ involves different phases and approaches. In one discussion at M&E Tech NYC, we distinguished between three phases of ICT application and evaluation, with the intention of enabling better assessment and learning for continuous improvement. These three phases were:

  1. Rollout: The rollout phase is almost always messy, and things often go wrong. (In other words, there is no ‘formula’.) In technology, it is accepted practice to ‘fail fast’ – a thing breaking or dropping out mid-action is not only normal, but expected. This recognition can alleviate some of the pressure on how evaluations count what is ‘good’ and what ‘needs to improve’. While it doesn’t minimize the importance of understanding the cost (and who bears it), evaluation of a rollout might focus more on organizational learning than on the longer-term ‘impact’ of activities. In every phase, risks to the people involved or affected should be calculated, minimized, and mitigated.
  2. Implementation: Implementation takes place after rollout, when lessons have been learned and a tool or product has been adapted. Evaluating implementation normally focuses on how people, context, decisions, and outcomes interact with the technology. How to do this, unsurprisingly, varies and is often a source of great debate.
  3. Long-term use, adoption, and sustainability: Resources, training, buy-in, and legitimacy issues are some of the many factors that affect the long-term use and sustainability of ICTs. System requirements, upgrades, and interoperability considerations add new sustainability questions to consider. Regardless, evaluating sustainability and rollout without distinguishing between the two phases may compress different kinds of activities into an unhelpful average that undermines, rather than enhances, learning and shared benefit.

Evaluative Questions

Along with the different phases mentioned above, it may be useful to answer three kinds of questions when evaluating the role of technology tools and platforms in development programming:

  • What is the role of technology in organizational processes and is there ease of adoption among those using it? Here we can try to identify how and where technologies and platforms change (or are intended to change) organizational processes as well as how receptive team members are to using them. If multiple technologies and platforms are used at different points in an organizational process (say, prioritizing service delivery according to need and geography), trying to isolate the impact of just one may not be the goal as much as identifying points for greater compatibility between and among systems and tools used.
  • What is the role of tech in decision-making and program outcomes? Here we may want to ask how the data collected (whether through SMS, maps, sensors, etc.) change the way decisions are made, transform working or reporting relationships (whether due to shifts in cost, power dynamics or anything else) or how the ICT’s role has shifted from rollout to implementation to long-term adoption.
  • What level and type of tech support is provided or needed? ICT4D and M4D ‘solutions’ are rarely simple solutions to problems. Most tools and platforms require ongoing support for the people using them, for data analysis, and for integrating a tool’s functions with other tools or systems being used for similar purposes. Keeping an eye on how much support and learning is needed (and supplied) as part of ICT evaluations can help to assess how sustainable, user-friendly, and cost-effective the technology tool or platform is.

Confusing Reality

The gap between the immediate utility and the longer-term impact of ICTs, for better or worse, can be attributed to any number of factors, including politics or competing priorities, neither of which is unique to ICTs.

It is clear, though, that this gap reflects another rift: one between how the use of ICTs is conceived for specific projects and programs (often aligned with proposals and funding timelines) and the realities of a sector whose infrastructure is still being laid, with many different forces and factors shaping accessibility, support, service areas, civil and political rights, and information regulation, among others.

Anna Levy is the Governance Project Director at Social Impact Lab

Filed Under: Solutions

Written by
This Guest Post is an ICTworks community knowledge-sharing effort. We actively solicit original content and search for and re-publish quality ICT-related posts we find online. Please suggest a post (even your own) to add to our collective insight.

One Comment to “Evaluating Tech Tools – From Pilot Projects to Entire Sectors”

  1. Lisa says:

    Great article! We need more resources to help otherwise nontechnical organizations (like most nonprofits) learn to implement tech well. I like this: “In technology, it is accepted practice to ‘fail fast’ – a thing breaking or dropping out mid-action is not only normal, but expected.” It’s so true, but it can come as a big shock to orgs implementing solutions if they’re not used to that (especially if they see tech as a way to prevent failure and reduce risk, as it ultimately can be).