
Ranking the 6 Problems with ICT4D Product Rankings

By Guest Writer on October 6, 2016


I still remember how excited I was the first time FrontlineSMS was asked to participate in a “How to Choose the Right Tool for You” technology product ranking. It was validating – we were being included in a special list of tools that development practitioners should consider while doing whatever it is they do.

The second time was great too! We were gaining steam. The third time, we still exchanged smiles around the office. The 147 and counting other times, though, were less exciting and revealed some very fundamental, structural flaws in the way software product rankings are done.

ICT4D Product Ranking is Hard

Before we launch into the issues, though, I want to express empathy – the digitization of international development is a crazy, crowded place with hugely complex digital and political economies.

It’s hard to know what “good” looks like, let alone what “inclusive and appropriate” means when applied across geography, industry, political context, infrastructure, population type, and any of the other million variables that we encounter daily. Worse, it’s a marketplace, so a lot of providers are very confident that they are the solution, in an effort to win customers.

So let me start by admitting, I don’t believe there is, or even should be, one solution – and it’s hard enough having to pick one, let alone understand all of them and how they fit together.

That, of course, is where product rankings and indexes are supposed to come in – they’re supposed to be able to tell you what tool or tools you should pick, and why. And, honestly, most of them don’t make the process any easier. This isn’t to beat up on reviewers; it’s to say that their task is mostly impossible – and even where it is possible, the result is misinformed, obsolete, or irrelevant the moment after it’s published.

The 6 Problems with Product Rankings

1. The Rate Problem.

Tools evolve weekly, if not daily. As a tools provider, my job is to constantly make things more functional, easier, cheaper, and just plain better every day. If I, and the dozens of others like me, are doing our jobs, your ranking is wrong almost as soon as it’s published.

Worse, I work with a team of developers, meaning that if you pay me, chances are I can build most of what you need – so even when a feature isn’t obvious in the public-facing version you sign up for, that doesn’t mean it hasn’t been built or that it couldn’t be very quickly. These inaccuracies compound over time, at an accelerating rate, which means that no matter the ranking resource, the only reliable thing about it is that it’s misleading its readers.

2. The Blackmail Problem.

“But, we gave you the opportunity to keep it updated!” Nearly every researcher or index/ranking creator offers “opportunities” to consult, update, or fix their inaccuracies. While that seems like good practice for the researcher, that’s A LOT of work over time, especially if your tool is listed in more than one index.

The idea that tool providers have a duty or responsibility to correct or update, for free, a research product with dubious commercial value at best is worse than unfair; it’s basically blackmail. The researchers don’t produce the index for free, and it’s hardly ethical to say “improve our work or we’re going to publish something misleading about your platform.” I know it’s not malicious – everyone is in the same boat here, trying to do something valuable on as tight a budget as possible. The problem is, these rankings aren’t valuable – they’re expensive, wrong, and – most importantly – confusing to users.

3. The User Problem.

Said simply, there is not one user. There is also not one group of people most users need to reach. We all talk about the importance of context and then act surprised (or worse, cynically knowing) about ‘pilotitis’ and the failure of digital projects to scale. Context doesn’t scale – minimizing replication requirements and then tailoring them to new contexts does. Most rankings fail to evaluate replication capacity, and skew heavily to the needs of funders and extractive data practices, which isn’t who many of us are hoping to serve.

4. The Market Problem.

“But a big funder has paid for this, so they’re going to use it to prioritize their funding – so you have a commercial interest in contributing to this.” No they’re not. And because they’re not, we don’t. The exception to this is when you’re doing scoping work for a single, funded project – these are assessments, not rankings. If you want to have a big kid conversation about the politics of organizational procurement, let’s talk – this isn’t that post.

5. The Incomplete Problem.

Every ranking omits tools it shouldn’t – and mostly because of branding. There are dozens, if not hundreds, of options for most technical solutions, from an enormous range of providers all over the world. Most of the time, the platforms that researchers choose to include tell you more about their biases than about the full range of available options.

6. The Philosophy Problem.

Lastly, and maybe most importantly, a lot of design decisions are intentional, based on a belief about the best ways to achieve a goal. For example, in messaging, SMS is not secure. I know that – and I prefer it anyway, not because I don’t like security, but because I think nothing is safer than the knowledge that a platform is insecure.

It’s when you start trusting “secure” platforms and behaving in ways that put you or your users at risk, and then those platforms turn out not to be secure (hint: very, very, very few things are perfectly secure), that everyone gets exposed. This isn’t to start that conversation (but I’m open to it, if you want), it’s to say that rankers often mistake philosophical and design decisions for quality of engineering, without digging into the ‘whys’. Those whys are important for users, for the projects they support, and for the rankings themselves.

How to Do It Right

So, finally, I have a proposal – it’s modest and unsolicited: stop funding rankings. Start, instead, funding internal capacity building workshops – invite experts, practitioners, and providers (whose time you pay for) to start from the issues and your context, and build practical, ethical, user-centered approaches from the ground up.

Admittedly, it’s a slower approach, but it replicates better. You’d be amazed at the way understanding scales.

By Sean Martin McDonald, CEO of FrontlineSMS

Filed Under: Featured, Solutions

Written by
This Guest Post is an ICTworks community knowledge-sharing effort. We actively solicit original content and search for and re-publish quality ICT-related posts we find online. Please suggest a post (even your own) to add to our collective insight.

One Comment to “Ranking the 6 Problems with ICT4D Product Rankings”

  1. Ed says:

    Well put, Sean. As a result of what you’ve effectively articulated, those in the know take these reviews and rankings with more than a pinch of salt. The same applies to media releases. Those who have experience implementing solutions know all too well that ‘loud’ doesn’t equate to ‘best’. Features look great in bulleted lists, but weeding out the marketing hype from fact is often the biggest challenge for any implementer. Due diligence on the part of the implementer – taking a host of facts into account, and often actually testing products before deciding to adopt them – is still #1 on the list for successfully solving problems with technology. The real success stories come from crowdsourcing your marketing… it’s free and usually far more valuable to potential adopters… but not always easy to get right.