
Web Survey Bibliography

Title: Effectiveness of Progress Indicators in Web Surveys
Year: 2004
Access date: 14.09.2004
Abstract: To increase the number of respondents who complete web surveys, designers often display information about how much of the questionnaire has been completed (progress indicators). The assumption is that respondents will be less likely to break off if they see they are making progress. However, the evidence for such benefits is mixed: across Couper, Traugott & Lamias (2000) and Crawford, Couper & Lamias (2003), progress indicators variously increased, slightly decreased, or had no impact on break-offs. Perhaps it is not the presence or absence of progress indicators that matters but whether the information they display is encouraging or discouraging. We test this in two experiments that vary how progress is calculated in web surveys. Progress accumulates quickly (i.e., is encouraging) in the early going and slows down (i.e., becomes discouraging) later in the questionnaire (the “fast-to-slow” group), accumulates slowly at first and speeds up later (the “slow-to-fast” group), or increases at a constant rate. A fourth group of respondents received no progress feedback. In the first experiment, 3179 respondents began the questionnaire and 457 of these broke off. Those in the slow-to-fast group (discouraging early information) broke off at nearly twice the rate of those in the fast-to-slow group. They also judged the task to be less interesting and to have taken longer than their fast-to-slow counterparts (even though it actually took them less time to complete the questionnaire). In addition, break-offs with the constant-speed progress indicator – the type most typically used – were not reliably different from those with no progress indicator, suggesting that the technique may be counter-productive for long questionnaires. In the second experiment, we varied the calculation of progress in exactly the same ways as in the first experiment. However, we also examined whether the effect of the early progress information was moderated by when it was displayed. In particular, progress indicators were always visible (as in the first experiment), available intermittently (mostly at transitions between sections), or on demand, i.e., when respondents clicked to display them. 3195 respondents began the questionnaire and 478 of these broke off. The speed of progress affected break-offs almost identically to its impact in the first experiment. Moreover, the frequency with which progress information was presented interacted with speed: when the feedback was under respondents’ control, speed made little difference. We conclude by discussing the promise and limits of interactive features of web surveys for improving data quality.
Year of publication: 2004
Bibliographic type: Conferences, workshops, tutorials, presentations
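
The abstract describes manipulating how displayed progress is calculated (fast-to-slow, slow-to-fast, or constant) but does not give the underlying formula. The following is a minimal, hypothetical Python sketch of one way such progress values could be generated, assuming a power-function mapping from the fraction of items completed to the displayed percentage; the function name, exponent values, and item counts are illustrative and are not taken from the study.

```python
# Illustrative sketch only: the study's actual progress calculation is not
# specified in the abstract. A power function of the completed fraction is
# one simple way to produce "fast-to-slow", constant, and "slow-to-fast"
# progress curves; the exponent is a hypothetical parameter.

def progress_percent(items_done: int, items_total: int, exponent: float = 1.0) -> float:
    """Return a 0-100 progress value to display to the respondent.

    exponent < 1  -> fast-to-slow (encouraging early feedback)
    exponent = 1  -> constant-rate progress
    exponent > 1  -> slow-to-fast (discouraging early feedback)
    """
    fraction = items_done / items_total
    return 100.0 * fraction ** exponent

if __name__ == "__main__":
    total = 40  # hypothetical questionnaire length
    for label, exp in [("fast-to-slow", 0.5), ("constant", 1.0), ("slow-to-fast", 2.0)]:
        shown = [round(progress_percent(i, total, exp)) for i in (10, 20, 30, 40)]
        print(f"{label:13s} after 10/20/30/40 of {total} items: {shown}%")
```

With these assumed exponents, a respondent a quarter of the way through would see roughly 50% progress in the fast-to-slow condition, 25% in the constant condition, and about 6% in the slow-to-fast condition, which illustrates why the early feedback in the latter condition could feel discouraging.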