# Web Survey Bibliography

Probability sampling designs, in which samples are selected by a reproducible random mechanism, are considered by many to be the gold standard for surveys. Theory has existed since the early 1930s to produce population estimates from such samples, under labels such as design-based, randomization-based, and model-assisted estimation. This theory ultimately requires that units excluded from the analysis files, either because they were not sampled or because of nonresponse, are missing at random. This condition, however, is not always attainable. Studies based on samples selected without a reproducible random mechanism, referred to as non-probability surveys, have gained more attention in recent years, but they are not new. Touted as cheaper and faster (even better) than probability designs, these surveys capture participants through methods such as respondent-driven sampling or opt-in web surveys. For surveys required to produce population estimates that meet their stated fit for purpose, the link between the sample and the target population, as well as the probability of participation, must be addressed to justify the desired level of quality. Survey weights and analytic models have been proposed to provide the needed evidence of the data’s utility, but research findings on the effectiveness of these approaches are inconsistent: both probability and non-probability surveys sometimes “work” and sometimes “fall apart.” Through this lens, we argue in this paper that probability and non-probability surveys are not two sides of a research coin but instead lie on a quality continuum. We first review the published material on the definition of quality, keeping in mind that surveys have a specific fit for purpose within this context. Next, we summarize research to date on measuring the quality of non-probability survey estimates and compare these criteria with those for similar probability surveys. We conclude with components for a quality framework that encompasses all surveys and enables their objective comparison.
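The abstract refers to survey weights as one way to link a non-probability sample to its target population. One widely discussed instance is quasi-randomization, in which an estimated participation propensity yields inverse-propensity pseudo-weights. The sketch below is a minimal, hypothetical illustration of that general idea only, not the specific method evaluated in the paper: the population, the participation model, and all variable names are invented, and the propensity model is fit with plain gradient descent for self-containment (in practice the population-side information would come from a reference probability sample or census data).

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: covariate x, outcome y correlated with x.
N = 100_000
x = rng.normal(size=N)
y = 2.0 + 1.5 * x + rng.normal(size=N)

# Non-probability "opt-in" sample: participation depends on x,
# so the unweighted sample mean of y is biased upward.
p_true = 1 / (1 + np.exp(-(-3.0 + 1.2 * x)))
in_sample = rng.random(N) < p_true

# Quasi-randomization: estimate each unit's participation propensity
# with a simple logistic regression, fit here by gradient ascent on
# the log-likelihood.
X = np.column_stack([np.ones(N), x])
beta = np.zeros(2)
for _ in range(500):
    p_hat = 1 / (1 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (in_sample - p_hat) / N

# Inverse-propensity pseudo-weights for the participating units.
p_hat = 1 / (1 + np.exp(-X[in_sample] @ beta))
w = 1 / p_hat

naive = y[in_sample].mean()                      # unweighted, biased
weighted = np.sum(w * y[in_sample]) / np.sum(w)  # Hajek-type estimator

print(f"population mean:          {y.mean():.2f}")
print(f"unweighted sample mean:   {naive:.2f}")
print(f"propensity-weighted mean: {weighted:.2f}")
```

In this simulation the pseudo-weights pull the estimate back toward the population mean; whether such adjustments succeed with real data is exactly the open question the abstract raises.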

# Web survey bibliography - Shook-Sa, B. E. (5)

- Impact of Field Period Length in the Estimates of Sexual Victimization in a Web-based Survey of College...; 2016; Berzofsky, M.; Peterson, K.; Shook-Sa, B. E.; Lindquist, C.; Krebs, C.
- Timing is Everything: Discretely Discouraging Mobile Survey Response through the Timing of Email Contacts...; 2016; Richards, A.; Shook-Sa, B. E.; Berzofsky, M.; Smith, A. C.
- Assessing Potential Bias in Respondent-driven Incident Based Data from a Web Survey of College Students...; 2016; Peterson, K.; Berzofsky, M.; Shook-Sa, B. E.; Krebs, C.; Lindquist, C.
- Methods for Detecting Telescoping Error in a Cross-sectional Web Design Survey; 2016; Shook-Sa, B. E.; Berzofsky, M.; Peterson, K.; Lindquist, C.; Krebs, C.
- Survey Estimation: How Different Are Probability and Non-Probability Survey Designs?; 2015; Shook-Sa, B. E.; Dever, J. A.