Web Survey Bibliography
Title Nonresponses as context-sensitive response behaviour of participants in online-surveys and their relevance for data quality
Author Wetzlehuetter, D.
Year 2017
Access date 15.09.2017
Abstract Starting point and focus: The internet cannot be ignored as a quick, practicable and economical source of information and a nearly unlimited communication channel, serving as a mass medium (online news), a mainstream medium (social media) and an individual medium (email). The number of web surveys and of methods for conducting them has increased with the utilisation of the internet. For instance, the Arbeitskreis Deutscher Markt- und Sozialforschungsinstitute e.V. recorded a continuous increase in the share of quantitative web surveys among its members from 1% in 1998 to 16% in 2004, 38% in 2010 and 43% in 2014. However, web-based surveys – as extensive discussions show – are not free of controversy. Concerns about questionable data quality, typically regarding the representativeness of the data (coverage error / missing data) and difficulties in achieving unbiased responses (measurement errors) caused by the device used (mode effects), are increasingly common. Errors caused by continuously rising proportions of drop-outs and item nonresponses in online surveys are relevant in almost the same manner. However, these sources of error are still neglected to a certain degree.
As the starting point of the paper, it is assumed that drop-out rates and item-nonresponse rates in online surveys differ as context-sensitive response behaviour (depending on whether respondents are at home or not and whether they use a smartphone or not). This means that systematic errors linked to the interview situation (in terms of location and device) are conceivable. Accordingly, the presentation aims to illustrate how and to what extent the context of the interview situation has to be considered in the cleansing and analysis of data captured online in order to avoid, as far as possible, biased results.
Methods and Data: To test this assumption, an online survey on the “participation of university students” was used. An experimental design was applied in order, on the one hand, to provoke drop-outs and, on the other hand, to test the consequences of different motivation strategies (prospect of profit, appeals, manipulation of the progress bar) that are easily inserted and therefore often used in online surveys. For this purpose, an unusually long questionnaire (23 online pages, 121 items) incorporating the different motivation strategies was developed. Of the students invited to take part in the survey (n=17,491), 14.2% reacted to the invitation, 1,916 (11%) answered at least one question, and just 7.3% (n=1,282) reached the final page.
Results: Drop-out rates and item-nonresponse rates differ depending on the survey context specified above: not being at home and using a smartphone increase both. The motivation strategies used work differently: they only reduce the risk of nonresponse among those who did not use a smartphone while at home. However, data cleansing does not affect the sample composition with respect to study-related characteristics. Detailed analyses show that the influence of the defined survey context on substantive findings varies. Based on this, the presentation will emphasize the importance of recording and considering context information about the data collection for data cleansing, analysis and interpretation of results and will discuss how this
Access/Direct link Conference Homepage (abstract) / (presentation)
Year of publication 2017
Bibliographic type Conferences, workshops, tutorials, presentations
Web survey bibliography (4086)
- Displaying Videos in Web Surveys: Implications for Complete Viewing and Survey Responses; 2017; Mendelson, J.; Lee Gibson, J.; Romano Bergstrom, J. C.
- Using experts’ consensus (the Delphi method) to evaluate weighting techniques in web surveys not...; 2017; Toepoel, V.; Emerson, H.
- Mind the Mode: Differences in Paper vs. Web-Based Survey Modes Among Women With Cancer; 2017; Hagan, T. L.; Belcher, S. M.; Donovan, H. S.
- Answering Without Reading: IMCs and Strong Satisficing in Online Surveys; 2017; Anduiza, E.; Galais, C.
- Ideal and maximum length for a web survey; 2017; Revilla, M.; Ochoa, C.
- Social desirability bias in self-reported well-being measures: evidence from an online survey; 2017; Caputo, A.
- Web-Based Survey Methodology; 2017; Wright, K. B.
- Handbook of Research Methods in Health Social Sciences; 2017; Liamputtong, P.
- Lessons from recruitment to an internet based survey for Degenerative Cervical Myelopathy: merits of...; 2017; Davies, B.; Kotter, M. R.
- Web Survey Gamification - Increasing Data Quality in Web Surveys by Using Game Design Elements; 2017; Schacht, S.; Keusch, F.; Bergmann, N.; Morana, S.
- Effects of sampling procedure on data quality in a web survey; 2017; Rimac, I.; Ogresta, J.
- Comparability of web and telephone surveys for the measurement of subjective well-being; 2017; Sarracino, F.; Riillo, C. F. A.; Mikucka, M.
- Achieving Strong Privacy in Online Survey; 2017; Zhou, Yo.; Zhou, Yi.; Chen, S.; Wu, S. S.
- A Meta-Analysis of the Effects of Incentives on Response Rate in Online Survey Studies; 2017; Mohammad Asire, A.
- Telephone versus Online Survey Modes for Election Studies: Comparing Canadian Public Opinion and Vote...; 2017; Breton, C.; Cutler, F.; Lachance, S.; Mierke-Zatwarnicki, A.
- Examining Factors Impacting Online Survey Response Rates in Educational Research: Perceptions of Graduate...; 2017; Saleh, A.; Bista, K.
- Usability Testing for Survey Research; 2017; Geisen, E.; Romano Bergstrom, J. C.
- Paradata as an aide to questionnaire design: Improving quality and reducing burden; 2017; Timm, E.; Stewart, J.; Sidney, I.
- Fieldwork monitoring and managing with time-related paradata; 2017; Vandenplas, C.
- Interviewer effects on onliner and offliner participation in the German Internet Panel; 2017; Herzing, J. M. E.; Blom, A. G.; Meuleman, B.
- Interviewer Gender and Survey Responses: The Effects of Humanizing Cues Variations; 2017; Jablonski, W.; Krzewinska, A.; Grzeszkiewicz-Radulska, K.
- Millennials and emojis in Spain and Mexico; 2017; Bosch Jover, O.; Revilla, M.
- Where, When, How and with What Do Panel Interviews Take Place and Is the Quality of Answers Affected...; 2017; Niebruegge, S.
- Comparing the same Questionnaire between five Online Panels: A Study of the Effect of Recruitment Strategy...; 2017; Schnell, R.; Panreck, L.
- Nonresponses as context-sensitive response behaviour of participants in online-surveys and their relevance...; 2017; Wetzlehuetter, D.
- Do distractions during web survey completion affect data quality? Findings from a laboratory experiment...; 2017; Wenz, A.
- Predicting Breakoffs in Web Surveys; 2017; Mittereder, F.; West, B. T.
- Measuring Subjective Health and Life Satisfaction with U.S. Hispanics; 2017; Lee, S.; Davis, R.
- Humanizing Cues in Internet Surveys: Investigating Respondent Cognitive Processes; 2017; Jablonski, W.; Grzeszkiewicz-Radulska, K.; Krzewinska, A.
- A Comparison of Emerging Pretesting Methods for Evaluating “Modern” Surveys; 2017; Geisen, E.; Murphy, J.
- The Effect of Respondent Commitment on Response Quality in Two Online Surveys; 2017; Cibelli Hibben, K.
- Pushing to web in the ISSP; 2017; Jonsdottir, G. A.; Dofradottir, A. G.; Einarsson, H. B.
- The 2016 Canadian Census: An Innovative Wave Collection Methodology to Maximize Self-Response and Internet...; 2017; Mathieu, P.
- Push2web or less is more? Experimental evidence from a mixed-mode population survey at the community...; 2017; Neumann, R.; Haeder, M.; Brust, O.; Dittrich, E.; von Hermanni, H.
- In search of best practices; 2017; Kappelhof, J. W. S.; Steijn, S.
- Redirected Inbound Call Sampling (RICS); A New Methodology ; 2017; Krotki, K.; Bobashev, G.; Levine, B.; Richards, S.
- An Empirical Process for Using Non-probability Survey for Inference; 2017; Tortora, R.; Iachan, R.
- The perils of non-probability sampling; 2017; Bethlehem, J.
- A Comparison of Two Nonprobability Samples with Probability Samples; 2017; Zack, E. S.; Kennedy, J. M.
- Rates, Delays, and Completeness of General Practitioners’ Responses to a Postal Versus Web-Based...; 2017; Sebo, P.; Maisonneuve, H.; Cerutti, B.; Pascal Fournier, J.; Haller, D. M.
- Necessary but Insufficient: Why Measurement Invariance Tests Need Online Probing as a Complementary...; 2017; Meitinger, K.
- Nonresponse in Organizational Surveying: Attitudinal Distribution Form and Conditional Response Probabilities...; 2017; Kulas, J. T.; Robinson, D. H.; Kellar, D. Z.; Smith, J. A.
- Theory and Practice in Nonprobability Surveys: Parallels between Causal Inference and Survey Inference...; 2017; Mercer, A. W.; Kreuter, F.; Keeter, S.; Stuart, E. A.
- Is There a Future for Surveys; 2017; Miller, P. V.
- Reducing speeding in web surveys by providing immediate feedback; 2017; Conrad, F.; Tourangeau, R.; Couper, M. P.; Zhang, C.
- Social Desirability and Undesirability Effects on Survey Response latencies; 2017; Andersen, H.; Mayerl, J.
- A Working Example of How to Use Artificial Intelligence To Automate and Transform Surveys Into Customer...; 2017; Neve, S.
- A Case Study on Evaluating the Relevance of Some Rules for Writing Requirements through an Online Survey...; 2017; Warnier, M.; Condamines, A.
- Estimating the Impact of Measurement Differences Introduced by Efforts to Reach a Balanced Response...; 2017; Kappelhof, J. W. S.; De Leeuw, E. D.
- Targeted letters: Effects on sample composition and item non-response; 2017; Bianchi, A.; Biffignandi, S.