
Web Survey Bibliography

Title: Predicting Response Times in Web Surveys
Author: Wenz, A.
Source: General Online Research (GOR) 2015
Year: 2015
Access date: 15.07.2015

Relevance & Research Question: Survey length is an important factor that researchers have to consider when designing questionnaires. Longer interviews are assumed to impose greater cognitive burden on respondents, which may have a negative impact on data quality. In the course of long surveys, respondents may become fatigued by answering questions and may be more likely to use satisficing response strategies to cope with the cognitive demands. Furthermore, longer surveys increase the costs of questionnaire programming and interviewing. Despite the impact of interview duration on data quality and costs, survey designers are often uncertain about the length of their survey and apply only rules of thumb, if anything, to predict survey length. The research project presented in this article investigates how item properties and respondent characteristics influence item-level response times to web survey questions. The project builds on the response time analysis carried out by Yan and Tourangeau (2008) and other studies on response times and interview duration, and examines whether their findings can be replicated using a different dataset. Finally, the development of a tool for response time prediction is discussed as a possible use of the results.

Methods & Data: The analysis is based on data from the GESIS Online Panel Pilot, a probability-based online panel of German-speaking, Internet-using adults living in Germany. The survey is suitable for studies on response times because it contains a large variety of question types on multiple topics. Response times to survey items were captured in the respondent's web browser (client side) by implementing JavaScript code on each survey page. In contrast to response times collected at the web server, client-side response times are more precise measures of the response process because they exclude download times. Multilevel models are applied to take into account that item-level response times are cross-classified by survey questions and respondents. Assuming that the effect of respondent characteristics is constant over items and the effect of item properties is constant across respondents, a set of random-intercept, fixed-slope models is fitted. Starting from an unconditional model without any covariates, predictors at the respondent level and the item level, as well as cross-level interactions, are successively included as fixed effects in the model to account for the observed variation in response times.
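The abstract states only that response times were captured client-side via JavaScript embedded in each survey page; a minimal sketch of such a mechanism, assuming one question per page, might look as follows (TypeScript; all names are illustrative, not taken from the study's instrument):

```typescript
// Sketch of client-side response-time capture for a one-question-per-page
// web survey. Both timestamps come from the respondent's browser, so the
// elapsed time excludes network and page-download delays, unlike
// server-side timing.

let pageShownAt: number | null = null;

// Call when the survey page (and thus the question) has been rendered.
function onPageShown(now: number = Date.now()): void {
  pageShownAt = now;
}

// Call when the respondent submits the page; returns the item-level
// response time in milliseconds.
function recordResponseTime(now: number = Date.now()): number {
  if (pageShownAt === null) {
    throw new Error("page-shown timestamp missing");
  }
  return now - pageShownAt;
}
```

In a real instrument the result would be written to a hidden form field or posted alongside the answer; scroll and focus events could be logged the same way to produce the navigation paradata mentioned below.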

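The random-intercept, fixed-slope specification described in the Methods section can be sketched as follows (notation is assumed, not taken from the paper; i indexes respondents, j indexes items):

```latex
% Cross-classified random-intercept model for item-level response times
% (sketch; symbols and covariate groupings are illustrative)
y_{ij} = \beta_0
       + \beta_1' x_i          % respondent-level covariates (age, education, ...)
       + \beta_2' z_j          % item-level covariates (text length, format, ...)
       + u_i + v_j             % crossed random intercepts for respondents and items
       + \varepsilon_{ij}      % residual
```

The crossed terms $u_i$ and $v_j$ capture that each respondent answers many items and each item is answered by many respondents; cross-level interactions would enter as products of $x_i$ and $z_j$ components.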
Results: The analysis shows that the respondent's age, education, Internet experience and motivation are significant predictors of response times. Respondents who are younger than 65, have A-levels or vocational A-levels, use the Internet frequently and are less motivated to participate in surveys need less time to complete items in web surveys. Survey participants using smartphones or tablets to complete the survey have longer response times than participants who use desktop computers or laptops. Since the questionnaire of the GESIS Online Panel Pilot was not optimised for mobile devices, mobile respondents may have had problems reading questions or selecting the appropriate response options. Among item properties, the complexity of questions and the format of response options were found to have an impact on response times: survey items with long question texts, many response options and open-ended formats are associated with longer response times, compared to less complex questions with closed-ended response formats. A comparison across waves of the response times to a survey evaluation item, asked at the end of each wave, showed that respondents become faster in the course of the panel study. Several results were contrary to prior expectations: the response times of respondents with survey experience, who completed at least one survey in the previous year, do not differ significantly from the response times of inexperienced survey participants. The position of the item within the questionnaire does not significantly influence response times, which implies that respondents do not speed up in the course of the survey. Factual questions, not attitude questions, induce the longest response times among all question types. Moreover, it was found that sensitive items are completed faster than items not dealing with sensitive topics. This finding could be explained by the fact that survey participants may answer sensitive items less thoroughly or may tend to skip these questions. None of the cross-level interaction effects were found to be significant predictors of response times. Apart from substantive variables, a set of paradata variables describing the process of questionnaire navigation was included in the model to account for variation in response times. Respondents who scroll horizontally or vertically on a survey page need more time to complete the item on that page. Assuming that response times are an indicator of respondent burden, this finding implies that survey pages should be adapted to the respondent's screen size to avoid scrolling. If survey participants leave the survey window, for example to access another webpage in the browser, or revisit survey pages to edit their responses, this also affects response times.

Added Value & Limitations: The present analysis replicated many findings from previous studies on response times using a probability-based online panel in which response times were collected client-side. However, some results were not in line with previous research and need to be investigated in future studies. Beyond replication, the study contributes to existing research by examining response times in the context of a panel study and demonstrating that respondents speed up across survey waves. Furthermore, the analysis indicated that paradata describing the process of questionnaire navigation are significant predictors of response times and should be collected and controlled for in future response time analyses. The limitations of the present study are the very low response rate of the GESIS Online Panel Pilot and the small number of observations in the analysis compared to previous studies on response times. Although the sample sizes are sufficiently large to estimate multilevel models, the statistical power of the fitted models may be reduced. Moreover, the nesting of survey questions within waves, in addition to the cross-classified structure of survey items and respondents, has not been considered due to the small group size at the level of survey waves. To address these shortcomings, response time data from online panels with larger sample sizes and a larger number of survey waves have to be analysed.

Year of publication: 2015
Bibliographic type: Conferences, workshops, tutorials, presentations
