
4 Reasons for Large Scale SMS Text Surveys in Developing Countries

By Guest Writer on January 9, 2017

We know that SMS is a cheap, convenient, non-intrusive way to collect data that is easy to adapt and automate. But compared to face-to-face or phone interviews, text messages are also easier to ignore, and responses may be incomprehensible. Little has been published on SMS surveys in developing countries, including the factors that influence response rates.

At the MERL Tech DC conference, I reported on a small research study Abt Associates did through USAID’s SHOPS project. We wanted to test the feasibility of using SMS surveys for a large-scale randomized controlled trial (RCT) in Kenya. The primary research study was designed to evaluate the impact of m4RH, a mobile health information service, on knowledge and use of family planning. Our SMS experiment was a preliminary study to validate SMS as our data collection method and inform the design of the RCT.

m4RH, developed by FHI360, is a free text message service publicized through mass media. Consumers access the service by sending a text and then scrolling through menus to get information on a range of topics such as the benefits, side effects and myths regarding various family planning methods.
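For readers curious what such a menu-driven SMS flow can look like in code, here is a minimal, purely illustrative sketch in Python. The menu structure and topic labels are assumptions for the example, not FHI360’s actual m4RH implementation:

```python
# Hypothetical menu tree for a menu-driven SMS information service.
# The topic labels echo the post; the structure itself is an assumption.
MENU = {
    "root": {
        "text": "Reply 1 for methods, 2 for side effects, 3 for myths",
        "options": {"1": "methods", "2": "side_effects", "3": "myths"},
    },
    "methods": {
        "text": "Reply 1 for pills, 2 for injectables, 3 for IUD",
        "options": {"1": "pills", "2": "injectables", "3": "iud"},
    },
}

def next_node(current: str, reply: str) -> str:
    """Return the next menu node for a user's reply; stay put on invalid input."""
    options = MENU.get(current, {}).get("options", {})
    return options.get(reply.strip(), current)

print(MENU["root"]["text"])
print(next_node("root", "1"))  # -> "methods"
```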

We conducted our SMS feasibility experiment with about 1,400 people in Kenya. Once someone accessed the service, a system captured the user’s number and randomly assigned them to sub-groups so we could test our variables. Sample questions included “Have you spoken to your partner about family planning within the last 30 days?” and “Up to how many days after sex is emergency contraception effective?” We sent two rounds of six questions each, five to eleven weeks apart.
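To make the assignment step concrete, a minimal sketch of this kind of enrollment-and-randomization logic might look like the following. The sub-group definitions, variable names, and function are all hypothetical illustrations, not the actual system used in the study:

```python
import random

# Hypothetical sub-groups for a feasibility experiment: incentive level,
# send time, and weeks between rounds are assumptions for illustration.
SUBGROUPS = [
    {"incentive": "guaranteed_low", "send_hour": 10, "gap_weeks": 5},
    {"incentive": "guaranteed_high", "send_hour": 18, "gap_weeks": 11},
    {"incentive": "lottery", "send_hour": 10, "gap_weeks": 11},
]

enrolled = {}  # phone number -> assigned sub-group

def enroll(phone_number: str) -> dict:
    """Capture a new user's number and randomly assign a sub-group."""
    if phone_number not in enrolled:
        enrolled[phone_number] = random.choice(SUBGROUPS)
    return enrolled[phone_number]

# Example: a user texts in and is assigned to a sub-group on first contact.
group = enroll("+2547XXXXXXXX")
print(group)
```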

Here are our key findings:

6-question SMS survey sets are viable.

First, the response rate was higher than we had hoped, and sufficient to build the sample size needed for our impact evaluation. Of the people who received the survey, 40% answered at least one question in the first round and 20% answered all six. The second round of questions, sent a couple of months later, drew a 28% response rate on the first question, dropping to 14% for all six. We were encouraged by these numbers, knowing that digital surveys in developed countries often report response rates under 5%.

We had hypothesized that there would be a drop-off after each question as survey fatigue set in, but did not know where the drop-off points would be. We found a steep drop in response rate after Q1; after that, the fall-off from Q2 to Q6 was gradual, linear, and stable.
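As an illustration of how this kind of per-question drop-off can be measured, the sketch below computes the response rate for each question from a set of response records. The data here are made up for the example, not the study’s results:

```python
# Hypothetical response log: each entry is the set of question numbers
# (1-6) that one respondent answered in a single survey round.
responses = [
    {1, 2, 3, 4, 5, 6},
    {1},
    {1, 2, 3},
    set(),               # ignored the survey entirely
    {1, 2, 3, 4, 5, 6},
]

n = len(responses)
for q in range(1, 7):
    rate = sum(1 for r in responses if q in r) / n
    print(f"Q{q}: {rate:.0%} response rate")
```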

SMS surveys appear amenable to a wide variety of design options.

Second, with regard to the other variables we tested, we found that over 90% of people responded within 20 minutes of receiving a survey question, or not at all. We found no difference in response rates based on time of day, number of weeks between surveys, or the content of the question. Most importantly, we saw similar response rates between the control group (more limited access to m4RH content) and the treatment group (full access to m4RH content). If we had found a large differential in response rates between those two groups, it would have invalidated the impact evaluation design.

Lottery-type incentives work, and can save researchers money.

A third factor we examined was how the level of incentives (in the form of airtime) might boost the response rate. We tested two levels of a guaranteed top-up for completing the six-question survey (roughly equivalent to $0.50 and $1.00), and compared those incentives with the chance to win a larger top-up (equivalent to $10.00). Our hypothesis was that those offered a chance to win the incentive in round one (but who did not win) would be less likely to complete the round-two survey than those who received a guaranteed reward. In fact, we found no difference in response rates across these incentive structures.

SMS surveys produce readable, understandable results.

Finally, we needed to confirm that text responses would be readable by our automated system, and that participants could follow formatting directions. We gave instructions such as respond “Y” for yes, text your age in numbers, or text “a”, “b”, or “c” to answer a question. We found that 92% of the responses used the correct format, and most of the other 8% could be read manually.
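As a rough sketch of what this kind of format checking can look like (our own illustration; the question-type labels are assumptions and this is not the project’s actual parsing code), a few simple rules are enough to classify whether a reply follows the requested format:

```python
import re

def validate_reply(question_type: str, reply: str) -> bool:
    """Check a raw SMS reply against the format requested for the question.

    question_type is one of "yes_no", "numeric", or "multiple_choice";
    these labels are illustrative, not the study's actual schema.
    """
    text = reply.strip().lower()
    if question_type == "yes_no":
        return text in {"y", "n"}
    if question_type == "numeric":
        return bool(re.fullmatch(r"\d{1,3}", text))
    if question_type == "multiple_choice":
        return text in {"a", "b", "c"}
    return False

# Replies that fail validation can be routed for manual review, since most
# incorrectly formatted responses were still readable by a person.
print(validate_reply("yes_no", " Y "))         # True
print(validate_reply("numeric", "27"))         # True
print(validate_reply("multiple_choice", "d"))  # False -> manual review
```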

These results suggest that SMS is practical for large-scale health surveys in low-resource settings, and they confirmed our decision to use SMS data collection in the RCT. For more information on our methodology and findings, please refer to this summary. We hope publishing these results encourages others generating data on SMS survey designs to share their findings as well, to promote best practices in this emerging field.

This post was written by Pamela Riley. The study was led, designed, and implemented by Doug Johnson from Abt Associates, with support from FHI360 colleagues Alice Olawo and Loice Maggaria, and from Marcus Waggoner of TexttoChange.
