Web Survey Bibliography
Title: Decomposing Selection Effects in Non-probability Samples
Author: Mercer, A. W.; Keeter, S.; Kreuter, F.
Year: 2016
Access date: 09.06.2016
Abstract
Prior studies have found that survey estimates obtained using non-probability samples are sometimes very close to estimates that use probability-based methods, while at other times they differ substantially. To date, the conditions under which non-probability methods yield comparable estimates remain poorly understood. We propose a framework for understanding differences between probability and non-probability sample estimates as a function of four separate components, each with different root causes and each requiring a different remedy. First, differences may be due to confounding, or the presence of unobserved factors associated with both the survey outcome and inclusion in a sample. Second, differences may be due to the absence of certain classes of respondents from non-probability samples (such as the non-internet population in web panels). Third, differences may be due to the over- or underrepresentation of certain types of respondents. Finally, differences may be reduced or magnified by post-survey adjustments such as weighting or sample matching. We use machine learning and causal inference methods to evaluate the relative contribution of each of these components in explaining observed differences between estimates generated from 11 parallel, non-probability web surveys and the Pew Research Center’s probability-based American Trends Panel, and we examine the degree to which these components are consistent across sample providers over a wide range of variables. Because the point of comparison is itself a survey rather than a set of true values, these comparisons cannot be said to measure selection bias; however, they can shed a great deal of light on the dynamics that produce similarities and differences among non-probability sample providers relative to a common, probability-based point of reference.
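The decomposition described in the abstract can be illustrated with a small simulation. The sketch below is a hypothetical illustration only, not the authors' code or data: it generates a synthetic benchmark population and a selective web sample, then splits the total gap between the unadjusted non-probability estimate and the benchmark into a coverage term, a term removable by post-stratification weighting, and a residual term attributable to unobserved confounding. The covariate "college", the coverage flag "online", and all parameter values are assumptions for the example.

```python
# Hypothetical sketch: toy decomposition of the gap between an unadjusted
# non-probability estimate and a probability-based benchmark into coverage,
# weighting-removable, and residual (unobserved-confounding) components.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
N = 100_000

# Synthetic benchmark population (stands in for a probability-based panel).
pop = pd.DataFrame({"college": rng.binomial(1, 0.35, N)})
pop["online"] = rng.binomial(1, 0.70 + 0.25 * pop["college"])   # internet coverage
unobserved = rng.normal(0.0, 1.0, N)                            # factor no weight can see
pop["y"] = (0.3 + 0.2 * pop["college"] + 0.1 * unobserved > 0.5).astype(int)

benchmark_full = pop["y"].mean()              # target quantity
covered = pop[pop["online"] == 1]             # subpopulation a web panel can reach
benchmark_covered = covered["y"].mean()

# Volunteer web sample: selection depends on the observed covariate AND the
# unobserved factor, so weighting on "college" alone cannot fix everything.
logit = -3.0 + 1.0 * covered["college"] + 0.8 * unobserved[covered.index]
sample = covered[rng.uniform(size=len(covered)) < 1 / (1 + np.exp(-logit))]
np_unadjusted = sample["y"].mean()

# Post-survey adjustment: post-stratify the sample to the covered population.
pop_share = covered["college"].value_counts(normalize=True)
samp_share = sample["college"].value_counts(normalize=True)
weights = sample["college"].map(pop_share / samp_share)
np_weighted = np.average(sample["y"], weights=weights)

# Telescoping decomposition: the three terms sum exactly to the total gap.
total_gap  = np_unadjusted - benchmark_full
adjustment = np_unadjusted - np_weighted         # removable by weighting on observables
residual   = np_weighted - benchmark_covered     # left over: unobserved confounding
coverage   = benchmark_covered - benchmark_full  # exclusion of the offline population

print(f"total gap   {total_gap:+.4f}")
print(f"adjustment  {adjustment:+.4f}")
print(f"confounding {residual:+.4f}")
print(f"coverage    {coverage:+.4f}")
```

By construction the three components sum exactly to the total gap; the sketch only illustrates the bookkeeping of such a decomposition, whereas the study itself applies machine learning and causal inference methods across 11 web samples and a wide range of outcomes.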
Access/Direct link: Conference Homepage (abstract)
Year of publication: 2016
Bibliographic type: Conferences, workshops, tutorials, presentations
Web survey bibliography - Kreuter, F. (12)
- Theory and Practice in Nonprobability Surveys: Parallels between Causal Inference and Survey Inference...; 2017; Mercer, A. W.; Kreuter, F.; Keeter, S.; Stuart, E. A.
- Decomposing Selection Effects in Non-probability Samples; 2016; Mercer, A. W.; Keeter, S.; Kreuter, F.
- The Effect of Benefit Wording on Consent to Link Survey and Administrative Records in a Web Survey; 2014; Sakshaug, J. W., Kreuter, F.
- Experiments in Obtaining Data Linkage Consent in Web Surveys; 2013; Sakshaug, J. W., Kreuter, F.
- The Influence of Respondent Incentives on Item Nonresponse and Measurement Error in a Web Survey; 2013; Felderer, B., Kreuter, F., Winter, J.
- Practical tools for designing and weighting survey samples; 2013; Valliant, R. L., Dever, J. A., Kreuter, F.
- Using paradata to explore item-level response times in surveys; 2012; Couper, M. P., Kreuter, F.
- Paradata; 2012; Kreuter, F.
- Assessing the Magnitude of Non-Consent Biases in Linked Survey and Administrative Data; 2012; Sakshaug, J. W., Kreuter, F.
- The use of paradata to monitor and manage survey data collection; 2010; Kreuter, F., Couper, M. P., Lyberg, L. E.
- Television Viewing Among Respondents and Nonrespondents to the Nielsen Diary Survey; 2009; Casas-Cordero, C., Kreuter, F.
- Social desirability bias in CATI, IVR and Web surveys: The effects of mode and question sensitivity; 2008; Kreuter, F., Presser, S., Tourangeau, R.