Web Survey Bibliography
Online research has experienced remarkable growth over the past fifteen years. To keep up with demand, some companies have become quite creative. Rather than continuing to rely exclusively on opt-in panelists, for example, they have developed new methods to include non-panelists in online surveys. They have also figured out how to direct, or route, respondents who do not qualify for one survey to another for which they might. Despite these important advances, the supply of respondents available to any single supplier is at times insufficient for certain kinds of studies, such as a tracking survey of a rare population. To meet the requirements of such studies, researchers now depend heavily on multiple sample sources (e.g., Panel A, Panel B; River 1). Some evidence suggests, however, that this decision can have unintended consequences. In research carried out in 2008 evaluating seventeen different opt-in panels, for instance, the Advertising Research Foundation found "wide variance, particularly on attitudinal and/or opinion questions (purchase intent, concept reaction, and the like)," even after holding constant sociodemographic and other factors (Walker et al., 2009). Since that time, some researchers have mounted new research to understand how to select multiple sample sources for the same survey without increasing bias. Proponents of these approaches cite at least three benefits: (a) consistency (or interchangeability) of new respondent sources with existing ones, (b) complementarity of new respondent sources with existing ones relative to an external standard, and (c) enhanced representativeness relative to the US general population through calibration with non-online data sources. Although these approaches are a step in the right direction, we believe they have not gone far enough, for three main reasons: (a) they restrict the pool of potential respondents to those from previously vetted sample sources, thereby limiting supply; (b) they seem to assume that the vetted sample sources do not change over time; and (c) they rely on benchmark data sets that have either limited shelf lives or uncertain external validity. We therefore suspect that they may not produce the same levels of sample representativeness and response accuracy as a new methodology, which we refer to as SmartSelect, that selects potential survey respondents in real time, from either a single sample source or multiple sources, based on how well their characteristics match an appropriate, evolving standard with demonstrated evidence of external validity.
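The abstract does not spell out the SmartSelect algorithm, but the core idea it describes, screening incoming respondents in real time against a target standard, can be illustrated with a simple quota-style admission rule. The sketch below is a hypothetical Python illustration: the cell definitions, target shares, planned sample size, and the `admit` function are assumptions made for the example, not the authors' actual method.

```python
from collections import Counter

# Assumed total number of completes for the study (illustrative only).
PLANNED_N = 1000

# Hypothetical target shares for age-by-gender cells, e.g. derived from an
# external standard such as a high-quality benchmark survey.
TARGET_SHARES = {
    ("18-34", "female"): 0.15, ("18-34", "male"): 0.15,
    ("35-54", "female"): 0.18, ("35-54", "male"): 0.17,
    ("55+",   "female"): 0.19, ("55+",   "male"): 0.16,
}

def admit(candidate_cell: tuple, completed: Counter) -> bool:
    """Admit a candidate (from any sample source) only while their cell is
    still under its target quota; otherwise they would be routed elsewhere."""
    quota = TARGET_SHARES.get(candidate_cell, 0.0) * PLANNED_N
    return completed[candidate_cell] < quota

# Example: candidates arriving in real time from one or more sample sources.
completed = Counter()
for cell in [("18-34", "male"), ("55+", "female"), ("18-34", "male")]:
    if admit(cell, completed):
        completed[cell] += 1

print(dict(completed))
```

In practice, the standard the abstract refers to would evolve over time and would cover many more respondent characteristics than the two used in this toy example.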
Web survey bibliography - Bremer, J. (11)
- Data Quality Standards in Mixed Mode Surveys; 2015; Bremer, J., Barbulescu, M., Bennett, J.
- Thinking Differently About How to Select Respondents for Surveys; 2012; Terhanian, G., Bremer, J.
- A Smarter Way to Select Respondents for Surveys; 2012; Terhanian, G., Bremer, J.
- I Got a Feeling: Comparison of Feeling Thermometers with Verbally Labeled Scales in Attitude Measurement...; 2012; Thomas, R. K., Bremer, J.
- How Likely?: Comparisons of Behavioral Intention Measurement Validity; 2012; Bremer, J., Thomas, R. K.
- Propensity Score Matching to Correct Telephone Surveys for Cell Phone Nonresponse; 2009; Bremer, J.
- Truth in Measurement: Comparing Web-Based Interviewing Techniques; 2007; Couper, M. P., Terhanian, G., Bremer, J., Thomas, R. K.
- Generalizability Issues in Internet-Based Survey Research: Implications for the Internet Addiction Controversy...; 2002; Bremer, J.
- The record of internet-based opinion polls in predicting the results of 72 races in the November 2000...; 2001; Taylor, H., Bremer, J., Overmeyer, C., Siegel, J. W., Terhanian, G.
- Using Internet polling to forecast the 2000 elections; 2001; Terhanian, G., Taylor, H., Bremer, J., Overmeyer, C., Siegel, J. W.
- Update on the Internet Usage Survey; 1997; Bremer, J.