Study Investigates Benefits of Modular Design for Mobile and Online Surveys
Surprising results include high respondent engagement and a low dropout rate on the mobile survey
by Frank Kelly, Lightspeed Research; Alex Johnson, Kantar Operations; and Sherri Stevens, Millward Brown
It’s an age-old story: the older generation just doesn’t understand, yet heartily disapproves of, the music, fashions, trends and overall behavior of the younger generation. Over time the younger generation becomes the older and adopts the same attitude toward its successors. And the beat goes on.
Today’s generation, born into the technology revolution that gave rise to the internet and the myriad new behaviors it created, has followed this narrative by and large, although the establishment’s reaction has been more bewilderment than disapproval. However, there is one big difference for today’s youth. The transformation brought about by such huge technological advances is likely to produce significant, enduring changes in behavior. In fact, according to some neurologists, modern technology can physically rewire the brain.1
Market researchers have been monitoring these trends in consumer behavior for decades. But recently, the competition for consumers’ attention that began as the number of TV channels multiplied from the tens to the hundreds has greatly accelerated with the rise of the mobile internet and the proliferation of portable devices and apps.
At the end of 2011 there were 5.9 billion mobile cellular subscriptions globally2, and an estimated 5 billion people owned at least one mobile phone, representing 74 percent penetration. Further, the share of mobile relative to other computing devices is trending higher. In the fourth quarter of 2011, worldwide shipments of smartphones exceeded shipments of client PCs, a category that includes tablet computers3. In developing markets, millions of people own mobiles but not PCs4, and this may soon become a reality for developed markets as well. Indeed, if we include all portable devices (as opposed to just mobile phones), the shift away from static, PC-based computing is happening even more quickly than the shipment figures imply.
Mobile phones, tablet computers and other portable devices not only introduced another lure for consumers’ attention, competing with the bonanza of information and entertainment available via TV and PC, but also created new opportunities for multitasking beyond the confines of the home and office. Because that multitasking typically takes place during short periods of downtime, such as waiting for an appointment, the activities done on mobiles are for the most part quick transactions. In addition, email has become a common mobile task and has created the expectation that content delivered via email will be accessible via mobile as well.
These trends have a profound impact on marketing research, and in particular on how online consumer research surveys are delivered to panelists. The number of respondents attempting to access the Lightspeed Research UK panel member’s page with mobile or tablet devices has roughly quadrupled in the past 12 months and now stands at almost 5 percent (figure 1). Moreover, there may be significant latent demand, because panelists are not currently encouraged to access the member’s page from devices other than the PC.
Enormous Technological Diversity
Throughout the ‘90s, as online surveys burgeoned in the PC-based environment, the industry worked to employ color, graphics and interactive techniques to make surveys more compelling and in line with users’ wider online experience. Over time, we were able to provide a consistent user experience across a range of browsers.
However, the displacement of the PC by portable devices threatens our ability to deliver consistent online research. Today we face technological diversity on a grand scale: the range of screen sizes and interfaces across the portable device market makes the browser compatibility issues of previous decades seem like a walk in the park. Mobile devices run at least six different operating systems5 at varying resolution levels (the iPhone 4 doubled the resolution of its predecessor), with a variety of interfaces (touch screen, trackball, track pad and keyboard, used in various combinations), as well as zoom and pan mechanisms.
Now, engaging respondents may be more about providing a choice of device than about delivering sophisticated graphics. Further, with the specter of the PC-less respondent looming, embracing the multi-device environment may even become a prerequisite to reaching respondents at all.
This creates a dilemma: how can we deliver a consistent research environment, maintain access to respondents through short, simple and convenient surveys that work on a range of mobile devices, and still deliver the engaging survey experience to which panelists have become accustomed on their PCs?
The answer is unsurprising: we need shorter questionnaires. As an industry we have known for years about the problems with long surveys6. Unfortunately, forces inherent in the industry (among them research clients’ desire to preserve their insights, often at the expense of the respondent experience) support the status quo. Questionnaires are gradually getting shorter, but respondent attention spans, along with their patience, may be shrinking faster. If we rely on market forces to eventually redress the imbalance, it could come at too great a cost to our panels, as well as to consumer perception of the industry. Because questionnaire evolution is too slow, we need revolution.
Viva la Questionnaire Revolution
Instead of trying to shorten surveys, we are investigating a different approach: breaking them into pieces to create shorter respondent experiences while still garnering the same level of insights for clients. This survey modularization can be accomplished in two ways, a “within respondent” approach and an “across respondent” approach.
The within respondent approach allows the same respondent to take pieces of a survey at different times, while the across respondent approach distributes modules of the questionnaire among different respondents using a multiple of the normal sample size. We believe the within respondent approach could be the low-hanging fruit of survey modularization: it simply allows respondents to return to a survey they have temporarily abandoned. The pause points would be formalized so that the research design can group questions for which sequential responding is needed. We recommend conducting parallel tests with this approach to ensure that pause points are set appropriately in existing studies.
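To make the pause-point idea concrete, here is a minimal sketch in Python of how a survey engine might formalize pause points on module boundaries and let a respondent resume where they left off. The module names, file-based state store and callback functions are illustrative assumptions, not a description of any production system:

```python
import json

# Hypothetical module list; pause points fall only on module boundaries,
# so questions that must be answered sequentially stay inside one module.
MODULES = ["screener", "brand_awareness", "brand_imagery",
           "booking_process", "travel_reasons"]

def load_state(path):
    """Return the index of the next module to administer, or 0 for a new session."""
    try:
        with open(path) as f:
            return json.load(f)["next_module"]
    except FileNotFoundError:
        return 0

def save_state(path, next_module):
    """Persist progress at a formal pause point so the respondent can return later."""
    with open(path, "w") as f:
        json.dump({"next_module": next_module}, f)

def run_session(path, administer, wants_to_pause):
    """Run modules until the survey ends or the respondent chooses to pause."""
    i = load_state(path)
    while i < len(MODULES):
        administer(MODULES[i])   # ask every question in this module
        i += 1
        save_state(path, i)      # a per-module incentive could be credited here
        if i < len(MODULES) and wants_to_pause():
            break                # formal pause point: save and return later
```

Because state is saved only at module boundaries, questions that must be answered in sequence simply need to live in the same module.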
The greater challenge – and the correspondingly greater prize – is in the across respondent approach. The question is whether fusion and weighting techniques can be used successfully to create equivalent datasets and derive the same insights as though each respondent had taken all the modules. If so, respondent efficiency could be markedly improved, and significant benefits realized for online survey providers, their clients and the survey respondents.
One consideration in employing this approach is that more completes will be required for any module that will be analyzed across several measures. Other modules may not have this requirement and so will need smaller base sizes. Likewise, reaching significance on the questions in one module could allow its fielding to be stopped sooner than others.
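As a rough sketch of how differing base sizes could drive fielding in the across respondent approach, the following Python fragment assigns each incoming respondent one module, weighting assignment toward the modules furthest from their quotas and closing a module once its base size is met. The module names and quota figures are hypothetical:

```python
import random

# Hypothetical per-module base-size requirements: a module analyzed across
# several subgroups needs more completes than one reported only in total.
REQUIRED = {"brand_awareness": 800,   # e.g., analyzed across four subgroups
            "brand_imagery": 200,
            "booking_process": 200,
            "travel_reasons": 200}

completes = {m: 0 for m in REQUIRED}

def assign_module():
    """Pick a module for the next respondent, favoring those furthest from quota."""
    open_mods = {m: REQUIRED[m] - completes[m]
                 for m in REQUIRED if completes[m] < REQUIRED[m]}
    if not open_mods:
        return None   # every module has reached its base size
    mods, shortfalls = zip(*open_mods.items())
    return random.choices(mods, weights=shortfalls)[0]

# Each respondent answers the screener plus exactly one module.
while (m := assign_module()) is not None:
    completes[m] += 1   # in fieldwork this would follow a real complete
```

The point of the sketch is that modules can close independently: once a module reaches its required base size, no further respondents are routed to it.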
In all likelihood, neither modularization within nor across respondents will work for all studies, but the hope and expectation is that one or the other will work for some studies – and that it will serve as a major step forward in the march toward improved respondent attention.
A conservative estimate based on Lightspeed Research’s experience is that 15 percent of surveys are not completed; further, we estimate that upwards of 10 million partial interviews are discarded as useless in the US each year. Designing surveys in a modular way would allow such data to be used to answer business questions effectively. In the case of mobile, with sample expected to remain more expensive for the foreseeable future, there is ample impetus to make use of this resource.
Data fusion is one option to consider, but it is a complex and time-consuming process that yields less-than-perfect data. A better approach might be simply to design a survey in a modular way, with each module an independent analytical unit. Data from every respondent who completed at least one module could then be used. As we did in this study, the surveys would be designed so that the order of individual modules does not adversely affect the data.
Modular survey design has another advantage as well. By rotating the modules so that each battery of questions is asked sometimes at the beginning and sometimes in the middle or at the end of the survey, the effects of respondent fatigue, which can impact long studies, could be mitigated, as sketched below. Of course, because mobile surveys tend to be 10 minutes or less, this benefit applies more to online surveys. A 2001 paper on split question design by Vriens, Wedel and Sandor7 laid out this concept, but to our knowledge it has not yet been widely adopted.
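A minimal sketch of such a rotation, assuming five hypothetical modules: rotating the starting module by respondent means that over any run of five respondents each battery appears once in every position, so fatigue and order effects average out across the sample.

```python
from collections import deque

# Hypothetical module names; any set of order-independent modules works.
MODULES = ["brand_awareness", "brand_imagery", "booking_process",
           "travel_reasons", "satisfaction"]

def rotated_order(respondent_index):
    """Latin-square-style rotation: each module occupies each position
    equally often across consecutive respondents."""
    order = deque(MODULES)
    order.rotate(-(respondent_index % len(MODULES)))
    return list(order)

# Respondent 0 starts with brand_awareness, respondent 1 with brand_imagery, etc.
for r in range(3):
    print(r, rotated_order(r))
```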
Over the next few years, if mobile continues to be more expensive, and if it becomes a necessary research mode, and if long surveys are still required, then this modular design might finally find its place in marketing research.
Our pilot looked at the two techniques discussed above, primarily in a mobile context. Working with Kantar’s mobile development partner, Lumi, we developed an application capable of delivering a survey either in a single sitting or as discrete modules. A mobile app was used instead of a mobile internet survey to minimize the potential effect of connection delays on respondents’ willingness to take subsequent modules.
Two control cells enabled us to understand both potential platform and modular effects. First, the survey was taken by online respondents, via PC. This allowed comparison between non-mobile and mobile. Second, the survey was taken by mobile respondents in a single sitting to allow comparison between modularization and full surveys.
The survey was broken into five modules. Two additional cells tested within- and across-survey modularization. For the “within” cell, respondents were asked to complete the screener questions and then given 48 hours to complete the remaining modules. At the end of each module they were given the option to continue to the next module or to save and return later. They were incentivized for each completed module. For the “across” cell, respondents were asked to complete the screener questions along with one of the modules. In both test cells, respondents who completed all the intended modules were asked satisfaction questions.
The survey was originally an ad hoc survey on car rental companies, adapted to work on mobile and with a modular question design. Topics included brand awareness, brand imagery, car rental booking processes and reasons for travel.
First we need to understand the test cell structure: a web cell taking the full survey (Cell 1), a mobile cell taking the full survey in a single sitting (Cell 2), a mobile cell taking the survey in modules within respondents (Cell 3) and a mobile cell taking the survey in modules across respondents (Cell 4).
The fieldwork for all cells was conducted simultaneously in January 2012, using US respondents from the Lightspeed Research MySurvey managed access panel. Lightspeed Research and Kantar Operations worked with Lumi to build the survey-specific application on the Android, iPhone and BlackBerry platforms. Panel members were sent to a web-based screener and asked whether they had rented a car in the past 12 months, whether they owned a smartphone and whether they were willing to take a survey on their smartphone.
The survey was designed to be modular. To make the web cell comparable to the mobile cells, we broke the web survey into the same modules and randomized their order within the web survey to mimic the process used on the mobile platform. We asked the same screening questions of both the web and mobile cells. The survey comprised five modules plus a short screener to capture a few demographic variables and confirm car rental. Great care was taken to design the survey so that the individual modules did not depend on answers from other modules. In two instances we asked the same question in two modules because other questions in each module depended on it.
How many modules will a respondent take?
The first question we wanted to answer was “how many modules will a respondent take if given a logical stopping point?” We were quite surprised at the answer: all 185 respondents continued to the end of the survey, despite being given the opportunity to save and return at the end of each of the first four modules. By comparison, we observed a 6 percent dropout or non-completion rate on the web survey. Factors likely to have contributed to the lack of dropouts in the mobile cells include:
- Comparatively high incentives (30 points for a 2-minute module, 150 points in total, versus 50 points for the full web survey)
- A relatively short overall survey (9 minutes on the mobile phone versus 11 minutes on the web for the same survey).
The rationale for higher incentives on the mobile survey was to maximize completion rates, because most people who qualify for the web survey are not willing or able to complete the mobile survey. In our study, 4,638 respondents qualified for the survey because they had rented a car in the past 12 months and had a smartphone. Of that group, 2,624 declined to participate because they were unwilling to download the application and take a survey on their mobile phone.

The table below shows that about six times more respondents are required to net the same number of completes on a mobile survey as on a web survey. We expect this gap to narrow as we become more efficient with mobile fieldwork logistics, but at the moment it makes sense to spend the extra money on higher incentives in mobile to motivate respondents to participate and complete. Certainly, the development of a permanent application will help, but even so the majority of online panel members are not willing to switch modes. In addition, as usability expert Jakob Nielsen notes in his “Alertbox” blog, people install many more apps than they actually use. In our case, about 11 percent of those who successfully downloaded the application never used it.
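A back-of-envelope calculation shows how these funnel stages compound. The qualification and refusal figures below come from the numbers reported above, and the app-launch rate from the 11 percent figure; the install-success rate is a hypothetical placeholder, included only to illustrate the arithmetic rather than to restate the table:

```python
# Back-of-envelope reconstruction of the mobile recruiting funnel.
qualified = 4638                 # rented a car in past 12 months, own a smartphone
declined  = 2624                 # unwilling to download the app / take mobile survey

willing_rate  = (qualified - declined) / qualified   # ~0.43
install_rate  = 0.45             # HYPOTHETICAL: fraction of willing who install OK
launch_rate   = 1 - 0.11         # ~11% of installers never used the app
complete_rate = 1.0              # no dropouts observed once the survey started

mobile_yield = willing_rate * install_rate * launch_rate * complete_rate
print(f"qualified respondents per mobile complete: {1 / mobile_yield:.1f}")
# prints roughly 6, in the ballpark of the "about six times more" figure
```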
While the average length of completed online client surveys is between 15 and 20 minutes, we felt that 8 to 10 minutes was the appropriate length in this case. Economics will likely provide the impetus for shorter mobile surveys: the cost of developing and testing surveys on at least three platforms will price longer surveys out of the mobile market. In the longer term this is not likely to remain a constraint, however, as more cost-effective ways to create and manage surveys on mobile devices are developed.
Will respondents take multiple modules at one time or start and stop?
Our second question was “will respondents take multiple modules at one time or will they start and stop?”
Because everyone in Cell 3 completed the survey, we can roll together Cells 2 and 3 to best answer this question. Fewer than 10 percent of the respondents who completed the full survey paused during completion. Most paused only for a short while and finished the whole survey in less than 30 minutes; just a handful took longer, and 22 hours was the longest completion time observed. This indicates that people are willing to complete the survey all the way through on a mobile phone, perhaps because their expectations have been set by their previous online panel experience. To track location changes, we re-asked the survey location question whenever more than 10 minutes passed between the completion of modules. Although we expected that surveys would be completed in modules during several short downtimes across multiple locations, only three respondents changed locations during the survey. A few respondents completed surveys in non-traditional locations, such as on a bus or a train, but about 95 percent completed the mobile surveys either at work or at home.
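The location-tracking rule is simple enough to sketch. Here is one hypothetical way to implement it, with the question-asking step left as a callback; none of these names come from the actual survey application:

```python
import time

LOCATION_GAP_SECONDS = 10 * 60   # re-ask location after a gap longer than 10 minutes

class LocationTracker:
    """Re-asks the survey-location question when a long gap between modules
    suggests the respondent may have moved."""
    def __init__(self, ask_location):
        self.ask_location = ask_location   # callback that asks the location question
        self.last_module_end = None
        self.locations = []

    def on_module_complete(self):
        now = time.monotonic()
        # Ask at the first module, then again only after a long pause.
        if (self.last_module_end is None or
                now - self.last_module_end > LOCATION_GAP_SECONDS):
            self.locations.append(self.ask_location())
        self.last_module_end = now
```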
When designing this study we hypothesized that dropout rates would be higher on mobile devices. The reason for this assumption was our belief that people would undertake only short tasks on a small screen device, preferring a larger format screen for the survey experience. To our surprise, the completion rates were excellent among those willing to download and install the application. Our results indicated that we should not assume higher dropout rates on mobile.
This concurs with a study recently published in ESOMAR’s RW Connect by Gina Pingitore and Dan Seldin, which found that dropout rates were similar for mobile web and online surveys. The study stated that “once respondents are engaged, they are likely to complete the survey even if it is lengthy.”8
Web and Mobile Deliver Similar Data, Some Demographics Differ
Even though the full mobile survey and the web survey were identical, mobile respondents perceived the survey as shorter than expected. Perhaps the design exercise required to make the survey work on a small screen made it easier to complete. The median completion time for the full mobile cell, at 9 minutes, was 2 minutes faster than that of the web cell.
Despite the large number of qualified web respondents who were unwilling to take the survey on a mobile phone, we found the results to be quite similar: only a few of the 40-plus questions yielded statistically significant differences. When the web cell was further split into those who were willing or able to complete the survey on mobile and those who were not, we started to see more significant differences between the two groups. Other studies, such as the one by Pingitore and Seldin, also show that these two modes deliver comparable data.
However, we see that there are demographic differences in the ownership of smart phones. A January 2012 Nielsenwire post stated: “While overall smartphone penetration stood at 48 percent in January, those in the 25-34 age group showed the greatest proportion of smartphone ownership, with 66 percent saying they had a smartphone. In the same age group, 8 of 10 of those that had gotten a new device in the last three months chose a smartphone. Among those who chose a device in the last three months, more than half of those under 65 had chosen a smartphone.”
So while we may be able to get comparable results between smartphones and the web, we should expect demographic differences in the audiences. In this instance, panel membership and the survey topic may have helped minimize demographic differences; for example, smartphone owners were much more likely to have rented a car. A summary of some key demographics:
While the mobile and web cells were reasonably well matched, when we broke the web cell out into those who would be willing and able to do a survey via mobile and those who would not, we saw some bigger demographic differences:
The biggest differences we observed among the cells were not in the client survey questions but in our survey satisfaction questions.
Mobile survey respondents will complete at least four modules (in this case, the entire 9-minute survey) when offered an increased incentive. They are likely to take the survey in a fixed location, such as home or the office, rather than during short downtimes throughout the day, and they do not drop out at a higher rate than web survey respondents. Modular survey design therefore offers limited advantages for short surveys taken on mobile.
Nevertheless, the benefits of modular survey design can be significant when applied to long web surveys: mitigating respondent fatigue, reducing the perception of survey length and making effective use of sample from partial completes. Designing a web survey in a modular way does not add substantially to its cost and is therefore worth serious consideration.
About the Authors
Frank Kelly is Senior Vice President, Operations, at Lightspeed Research.
Alex Johnson is Director, Innovation Group, at Kantar Operations.
Sherri Stevens is Director, Global Innovations at Millward Brown.
- 1. Mosher, Dave 2011 “High Wired: Does Addictive Internet Use Restructure the Brain?” Scientific American
- 2. The International Telecommunication Union, 2011 “The World in 2011 ICT Facts and Figures”
- 3. Canalys, February 2012 “Smart phones overtake client PCs in 2011”
- 4. On Device Research, November 2011 “New internet audience emerges in developing countries”
- 5. Gartner news release, February 15, 2012, “Gartner Says Worldwide Smartphone Sales Soared in Fourth Quarter of 2011 With 47 Percent Growth”
- 6. Cape, P, 2010 “Questionnaire Length, Fatigue Effects and Response Quality Revisited” ARF Re:think 2010
- 7. Vriens, Wedel and Sandor, 2001 “Split Question Design” Marketing Research, Summer 2001
- 8. Pingitore and Seldin, 2011 “5 Things About Mobile Data Collection” ESOMAR, October 11, 2011