You might think that the topic of collecting data via mobile devices would be a rather dry discussion of data management and statistical methodology. You would be very, very wrong. The Technology Salon all but came to blows as we wrestled with privacy issues, total costs of ownership, and other elephants in the room.
When you combine some of the brightest mobile-for-development minds from projects stretching from agriculture to health to democracy, all of whom are facing increasingly common problems, perhaps that’s to be expected. Stories were shared around the basic challenges of data collection, picking the system to use, and the complications of different sectors.
Join us at the next Salon – subscribe to get invites!
The right tool exists
Excitingly, the core problem is no longer having the right tools for the job – from SMS to feature-phone and smartphone apps to tablets, there is a wealth of tools for organizations to gather information in the field. The first problem we uncovered was actually the lack of better tools to support deciding which tools were sufficient for the task at hand, and which among those were best. Some organizations had done deep research dives; others had asked their networks.
There is at least one good collection of tools, assembled into an Online Selection Assistant. Sadly, the project appears to no longer be funded and is not up to date on which tools have which features. It remains a solid starting point, though, and sparked some interest among the Salon attendees in contributing to keep it current.
What is the “best” tool?
What “best” actually means itself deserves unpacking. There will likely be requirements and constraints from the project and the environment (Is there a reliable data network, or must you use USSD or SMS messaging? Is the carrier willing to help?). But these criteria must be more than checkboxes and vendor claims: a desire emerged for some level of actual third-party grading of functionality, a la Consumer Reports for mobile data collection systems.
Naturally, the technology itself can meet all of these criteria, but the final decision must come down to whether it meets the needs of the enumerators who will be collecting data with the device.
Now about that data…
The flip side of this is, of course, what data are we gathering? Donor types always seem a bit surprised when merging existing data sets from different studies proves difficult or downright impossible, and implementers seem to want a better ability to standardize their own data; but the coordination is lacking – each program wants to measure a specific angle, and wedging that into a broader data context is, at best, overhead.
Deeper into the rabbit hole is, of course, security. Much of the data being gathered has privacy concerns, from health status to putting lives at risk around election monitoring. Yet, the underlying technologies are essentially unsecured, particularly if a carrier can be forced to reveal data by the government; and few data gathering apps for smartphones take security seriously enough, given the potential consequences for those involved.
As with other cross-organization standards like IATI, the coordination needed to build in data, functionality, and security standards is not insurmountable – but there has to be a lot of will behind making it happen from all sides. Judging by the raucous nature of the room, I’d say we’re close.
Jon Camfield is the technology strategist at Ashoka’s Changemakers, which open-sources social innovation through a competition and matchmaking model.
Jon,
You state that the Online Selection Assistant (which you link to) is both (1) not up to date and (2) a solid starting point. I’m not sure how providing outdated or incomplete information can be a solid starting point for anyone interested in mobile data collection.
I’m particularly not a fan of that current tool because it incorrectly eliminates our EpiSurveyor mobile data collection tool from several results based on inaccurate information:
– On one of the first questions (“select operating system”), I chose “Symbian” . . . and EpiSurveyor was eliminated from the recommendations — even though we have more than 8000 users around the world using our Symbian app.
– when I picked Apple’s iOS as the operating system . . . EpiSurveyor was eliminated from the recommendations — even though we do, indeed, have an iOS app in Apple’s app store.
– when I indicated that an SMS based system was required . . . EpiSurveyor was eliminated — even though groups like IRC and JSI and others are using our SMS-based version of EpiSurveyor all over the world.
When a tool incorrectly steers people away from the most widely used mobile data collection system in international development (now with nearly 10,000 users in more than 170 countries), I think it’s a very poor starting point.
Joel
P.S. To their credit, I have written to the Nomad project people in charge of the tool, and they have said they will update the information — though they have not indicated when that will happen, and until then anyone seeking information will be provided with an inaccurate report.
It remains the only cross-solution database of mobile data tools that I’m aware of (OK, there’s also the even-older spreadsheet from the UNGP/MobileActive collab, http://www.mobileactive.org/go-to-mobile-data-collection-resources), which puts it a notch above google searching and asking a few friends in “the industry.” It’s very unfortunate that EpiSurveyor is so poorly represented, as it provides a powerful cross-platform tool.
Even so, unless this gets another round of funding, it’s likely to gather dust.
It sounded to me like the data behind this tool could be opened up. If we can, I’d be willing to do the set-up to make this into a more open and possible-to-update tool.
We could either just hack some google forms to do this, or even build a site around it with community ratings for tools, and provider access to keep their products accurate and up-to-date.
I’m serious, I might add. I already have a good Drupal+Simile/Exhibit recipe for implementing the infrastructure quickly (see http://www.changemakers.com/citizenmedia/toolkitexplore, http://www.joncamfield.com/resume and even http://www.audreyandjon.com/recipes/box as examples – it’s possible I like using Simile a bit too much). Depending on how crazy we go, it’d be nice to get some help munging through the data, if anyone has fall-term interns who could lend a few hours of time.
Cheers!
Jon,
I don’t think that an accessible but absurdly inaccurate database is “a notch above Google searching”, I think it’s a notch below. In fact, the air of authority that is put forward is particularly dangerous: it’s authoritative . . . and wrong.
But at least you can say that the people behind this database are actually trying to pin down the real differences between the products — even if their information is sadly wrong.
As you indicate, we have a real scarcity of written reviews/comparisons/matrices on the topic of ICT4D. Where is the Consumer Reports for ICT4D? Where is the CNET or Engadget or Wired? Where is a writer who has actually used the tools in question, and who can offer the opinion that “this sucks, but that one is really useful”? That “this one requires a programmer to set up” but “this one is as easy as Gmail”? That “this one is ‘free’ but will kill you in personnel costs”?
Too often I read articles about ICT4D that are just scraped from years-old articles on the net, with no actual personal experience, and that all seem to conclude “well, each tool is good in its own way” — translation: the writer didn’t bother to learn each tool, and has no incentive to offer an opinion and risk a conflict.
It’s like reading an issue of Car & Driver that concludes (without listing the prices of the cars) “it’s good to drive cars, and each kind of car has its own strengths.” Useless.
Joel
Again, let’s just build this – I think a broad rating of ICT4D tools would be handy, but let’s take some of the data already out there, enable “vendors” to update it, and users to review and rate it. I’m sure some fun challenges of verification of data will come into play, but it’s a tight enough community that we’ll get through it.
Starting with what type of data to gather from the tool providers, I’m thinking:
Communication networks used (USSD, SMS, GPRS, EDGE, high-speed (3/4g), LTE/WiMax, WiFi)
Phone interface needs – possibly munging together the platform and the actual interface used (none/sms/voice-only, brew, java, symbian, blackberry, iOS, android, wap, html, html5)
Backend systems (DB? API?)
Implementation needs (what do you need on the ground to build out a project – just the end-user phones, a laptop with a cellmodem, an agreement with a cell provider, etc.)
Security (manual coding, crypto software, encrypted at server level?)
What other mostly-exclusive groups of features/user needs would be most relevant?
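To make the proposal concrete, here is a minimal sketch of what a machine-readable tool profile built from the criteria groups above might look like. Everything here is an illustrative assumption – the field names, the enumerated values, and the fictional `ExampleCollect` tool are mine, not a finished standard – but it shows how profiles like these could drive the kind of requirement-based filtering the selection assistant attempts.

```python
from dataclasses import dataclass, field

# Hypothetical profile schema for one mobile data collection tool,
# grouping the criteria proposed above: networks, phone interfaces,
# backend, implementation needs, and security.

@dataclass
class ToolProfile:
    name: str
    networks: set = field(default_factory=set)       # e.g. {"SMS", "GPRS", "WiFi"}
    interfaces: set = field(default_factory=set)     # e.g. {"symbian", "ios", "html5"}
    backend: dict = field(default_factory=dict)      # e.g. {"db": "MySQL", "api": True}
    implementation_needs: list = field(default_factory=list)  # on-the-ground requirements
    security: list = field(default_factory=list)     # e.g. ["encrypted at server"]

    def matches(self, required_networks=None, required_interfaces=None):
        """Return True only if the tool covers every required network and interface."""
        if required_networks and not set(required_networks) <= self.networks:
            return False
        if required_interfaces and not set(required_interfaces) <= self.interfaces:
            return False
        return True

# A fictional tool, filtered the way a selection assistant might work:
tool = ToolProfile(
    name="ExampleCollect",
    networks={"SMS", "GPRS", "WiFi"},
    interfaces={"symbian", "android", "html"},
    security=["encrypted at server"],
)
print(tool.matches(required_networks={"SMS"}))    # True
print(tool.matches(required_interfaces={"ios"}))  # False
```

The subset checks also illustrate Joel’s complaint above: a profile with missing entries silently fails `matches`, so stale data eliminates a tool from results rather than flagging it as unknown – which argues for letting providers update their own records.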
Jon, thank you very much for your post, and thanks also to Joel for your mutual call to action in establishing credible and reliable sources to compare technology tools for development. I could not agree more. We at RTI have not only been using, but also developing, a range of open source tools for development over the past years (see, e.g., http://www.ictedge.org), our most recent endeavor being Tangerine, a tool to facilitate large-scale early grade reading and mathematics assessment. Here are a few thoughts on the discussion.
1. I second the need for a source of review information a la CNET as a first stop for practitioners in learning about potential tools that may serve their needs. I would, however, always consider such a source just a first stop, not a ‘one-stop’ resource, and would implore every user to consult additional sources to validate the information provided. Your earlier comments on this topic have already unearthed challenges in the accuracy of a single source, in spite of, most likely, best efforts by its authors.
2. Further, in my vision, instead of one organization or person managing the content – and thus creating a dependency on either that organization’s or person’s good will or on external funding – we should seriously consider a community approach to maintaining this tool. I believe a collaboratively managed site a la Wikipedia may be better suited to ensure accuracy and sustainability, as well as timeliness of updates when needed. I understand that there are valid concerns about either incorrect information or inappropriate content when anybody is allowed to contribute. However, to me, there is convincing experience that the overall benefit of this approach outweighs such risk. [I keep remembering the Isuzu experiment by Alex Halavais from back in 2004 in this context.]
3. Realistically, however, we should expect that tool owners/developers may be the ones most interested in keeping their information up to date. Thus we may want to consider combining a more open, factual information presentation along discrete and objective criteria with a more subjective “review”-type component to provide a deeper understanding of actual experiences using such tools in the field; I could see more independent users being the ones contributing that information.
4. From a content perspective in such a scenario, the process to develop and agree on criteria to categorize the tools should be similarly collaborative. Jon has made a great start, and I hope we will jointly be able to refine and add to these criteria, with more people joining this discussion.
Carmen
I don’t think creating a database in the typical top-down method of international development would be useful, nor do I think putting “the community” in charge has historically been a recipe for success: when everyone is responsible, no one is responsible.
I am more interested in an organic and varied system of tech evaluation, such as has developed in the commercial consumer technology community.
But, until we understand WHY the development technology community does not have a single CNET, or Consumer Reports, or Engadget, AnandTech, or Ars Technica, etc, etc — I do not believe we will be able to successfully produce their equivalents within the development field.
Here we go: http://mobile.ictdev.org/
I’ve posted a first few seed/sample profiles of mobile data tools up there (is one of them yours? Create an account and use the contact form to get in touch, I’ll transfer ownership of it so you can provide updated information!)
I totally agree that me (or any organization, even with funding) doing this top-down gives it a shelf life: it begins to go stale as soon as the funding is over. I’m not convinced, though, that the market is quite big enough for a CNET/Amazon-style marketplace-plus-reviews to magically appear without some place providing a central store of data.
I’m certainly no Amazon, and am building this in my copious spare time with absolutely no funding behind it, but based on this thread and many conversations covering the exact same ground, I’m willing to see if this can spark an entrepreneur-sourced database of tools. As that catches on, I’ll start enabling reviewers as well.
Please browse through it, let me know what you think, and offer suggestions of different types of data I can or should collect – as well as whether I’ve gone too deep in collecting the data that’s already there.