Web Survey Bibliography

Title: Response Rates and Response Bias in Web Panel Surveys
Year: 2015
Access date: 22.08.2016
Abstract
Non-probability samples, such as online panels, are increasingly accepted as “fit for purpose” for low-incidence populations (e.g., pregnant women), difficult-to-reach populations (e.g., health care workers), and other special populations, particularly when time or cost make probability surveys infeasible. However, there is much less enthusiasm for applying these methods to general populations in social science research. Aside from the issue of statistical generalizability, low response rates within the panel and demographic biases in the achieved samples are often cited as concerns (AAPOR 2010).
Are low response rates and demographic biases endemic to population surveys using web panels, or do they reflect the methods of particular surveys? Many web panel surveys are conducted in such a way that the response rate cannot be calculated; in other cases, the response rate is simply not reported. Further, most web surveys are not designed to optimize response rate, since sample is nearly unlimited and speed is often critically important to the client. In addition, biases in web surveys are usually identified by comparing the characteristics of the achieved sample to the population, which does not reveal whether the source of the error is the sampling frame or the survey procedures.
This paper examines the application of two survey protocols in a general-population survey conducted in the same community using a national web panel. Invitations will be sent to two Census-balanced samples of 5,000 from the master panel, with the goal of achieving at least 500 completes in each sample. For the first protocol, invitations will be followed by a single reminder, an industry standard. For the second protocol, a more robust schedule of up to four reminders will be fielded over a three-week period. Response rate is calculated as the proportion of invited respondents who complete the interview. Non-response bias is assessed by comparing the characteristics of responders and non-responders from their panel profiles. Findings are compared across the two samples from the same community in the experiment.
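
Note: the response-rate and non-response-bias calculations described in the abstract are simple enough to sketch. The Python snippet below is not taken from the paper; it uses made-up field names ("age_group", "completed") and fabricated example data purely to illustrate computing the completion proportion and comparing a panel-profile characteristic between responders and non-responders.

def response_rate(completes: int, invited: int) -> float:
    """Response rate as the proportion of invited panelists who complete the interview."""
    return completes / invited

def profile_distribution(records, field):
    """Share of each category of `field` among the given panel records."""
    counts = {}
    for rec in records:
        counts[rec[field]] = counts.get(rec[field], 0) + 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# Hypothetical data for one protocol arm: 5,000 invitations, 500 completes.
invited = [{"age_group": "18-34" if i % 3 else "65+", "completed": i % 10 == 0}
           for i in range(5000)]
responders = [r for r in invited if r["completed"]]
nonresponders = [r for r in invited if not r["completed"]]

print(f"Response rate: {response_rate(len(responders), len(invited)):.1%}")
print("Responders:", profile_distribution(responders, "age_group"))
print("Non-responders:", profile_distribution(nonresponders, "age_group"))

Comparing the two printed distributions is one simple way to gauge non-response bias on a profile variable; the paper's actual analysis may differ.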
Year of publication: 2015
Bibliographic type: Conferences, workshops, tutorials, presentations