Web Survey Bibliography
Research Questions and Methodology: As response rates for online surveys continue to decline, three issues are of particular interest. First, do lottery style incentives increase response to online surveys? While there is research suggesting that monetary incentives can increase response, other research suggests that lottery style incentives do not increase response, at least in mail surveys. Would results be different for online surveys? Second, prospect theory suggests that the way a proposition is framed can affect the decision people make, so is it more effective to offer respondents multiple chances to win a smaller gift, or a single chance to win a larger gift? Third, do lottery style incentives affect – either positively or negatively – the accuracy of the data collected, in terms of how representative the respondents are of the population?
To explore these issues, we conducted an online survey of alumni from Stanford University, testing two incentives of the same expected value but framed differently, along with a control group offered no incentive. The survey was about resources and services available to Stanford alumni, and thus was salient to the general alumni population, not just to certain sub-groups. The population was defined as degree holders from Stanford from 1955-2007 who live in the continental USA or Canada and for whom Stanford has an e-mail address. Three random samples of approximately 2,500 alumni were selected from the population: Sample 1 was offered no incentive to participate in the survey; Sample 2 was told that five randomly selected respondents (i.e., those who completed the survey) would each win a $100 Visa gift card; Sample 3 was told that one randomly selected respondent would win a $500 Visa gift card. The invitations were e-mailed on June 24, 2008, and three reminders were e-mailed to non-respondents at one-week intervals before the survey was taken off the web on July 20, 2008.
Results: The overall response rate was 31.9%. As we have consistently seen in fifteen years of surveying alumni from a wide range of institutions, overall response was significantly greater among alumni with whom the University has the strongest relationship – donors, Stanford Alumni Association members, and undergraduate alumni. Response was also somewhat greater among women than men, and among alumni 50 and older. The incented samples drew a response of 32.5%, while the non-incented sample drew a response of 30.6% – a marginally significant difference. Differences were also seen within certain demographic groups: donors and women responded at significantly higher rates to the incentives than to no incentive, and alumni in their 30s and 40s responded at marginally higher rates. Higher response to the incentives among donors and women (but not among non-donors and men) is noteworthy because overall response was also higher among donors and women – so not only did the incentives fail to improve data quality by increasing response among underrepresented groups, they in fact decreased data quality by further increasing the participation of overrepresented groups and magnifying the bias already present in the data. Within the other demographic groups, response was at most a few points higher in the incented samples than in the non-incented sample, but the differences are not significant. Overall response to the two tested incentives was virtually identical – 32.6% for the sample offered five chances to win $100, and 32.4% for the sample offered one chance to win $500. Furthermore, response to the two incentives was the same across almost all demographic groups.
Implications: Even with incentives, if the sponsor of a survey is identified upfront to respondents, response will be higher among those with whom the sponsor has the closest relationship (in this case, donors, Alumni Association members, and undergraduate alumni). Response was a few percentage points higher in the incented samples than in the non-incented sample, both overall and across most demographic groups. Therefore, if it is of paramount importance to obtain every last possible respondent, lottery style incentives in an online survey may be of some benefit. Nevertheless, the difference in response was only marginally significant, so for many projects the incentives may not be worth the extra cost. And while the incentives added to the cost of the survey, they also added bias to the results, so the accuracy of the data collected must be considered when deciding whether or not to offer incentives. If a lottery style incentive is offered, it may not matter which one (at least of the two we tested) is used.
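The "marginally significant" headline comparison can be checked with a quick two-proportion z-test. This is an illustrative sketch, not the authors' analysis: the group sizes are rounded from the abstract's "approximately 2,500" alumni per sample, with the two incented samples pooled.

```python
from math import erfc, sqrt

# Incented samples pooled (Samples 2 and 3, ~2,500 each) vs. the
# non-incented sample (Sample 1). Counts are assumptions rounded
# from the abstract, not exact sample sizes.
n_incented, p_incented = 5000, 0.325   # pooled incented response rate
n_control, p_control = 2500, 0.306     # no-incentive response rate

# Standard two-proportion z-test with a pooled proportion.
pooled = (p_incented * n_incented + p_control * n_control) / (n_incented + n_control)
se = sqrt(pooled * (1 - pooled) * (1 / n_incented + 1 / n_control))
z = (p_incented - p_control) / se
p_two_sided = erfc(z / sqrt(2))        # two-sided normal p-value

print(f"z = {z:.2f}, p = {p_two_sided:.3f}")
```

Under these assumed sample sizes, the p-value lands near 0.1 rather than below 0.05, which is consistent with the abstract's characterization of the 32.5% vs. 30.6% gap as only marginally significant.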
Conference homepage (abstract)
Web Survey Bibliography - Krosnick, J. A. (59)
- The Handbook of Questionnaire Design; 2013; Krosnick, J. A., Fabrigar, L. R.
- Improving ability measurement in surveys by following the principles of IRT: The Wordsum vocabulary...; 2012; Cor, K., Haertel, E., Krosnick, J. A., Malhotra, N.
- Complete Anonymity Compromises the Accuracy of Self-Reports; 2012; Lelkes, Y., Krosnick, J. A., Marx, D. M., Judd, C. M., Park, B.
- Can Official Records Correct Errors in Turnout Self-reports?; 2012; Berent, M., Krosnick, J. A., Lupia, A.
- Improving Question Design to Maximize Reliability and Validity; 2012; Krosnick, J. A.
- How accurate are surveys of objective phenomena?; 2012; Chang, L. C., Krosnick, J. A.
- Does mentioning "Some People" and "Other People" in an opinion question improve...; 2012; Yeager, D. S., Krosnick, J. A.
- What Number of Scale Points in an Attitude Question Optimizes Response Validity and Administration Practicality...; 2012; Yeager, D. S., Anand, S., Krosnick, J. A.
- A Systematic Review of Studies Investigating the Quality of Data Obtained with Online Panels; 2012; Callegaro, M., Villar, A., Krosnick, J. A., Yeager, D. S.
- Measuring americans' issue priorities. A new version of the most important problem question reveals...; 2011; Yeager, D. S., Larson, S. B., Krosnick, J. A., Tompson, T.
- Experiments for evaluating survey questions; 2011; Krosnick, J. A.
- Does mentioning "some people" and "other people" in a survey question increase the...; 2011; Yeager, D. S., Krosnick, J. A.
- Comparing the Accuracy of RDD Telephone Surveys and Internet Surveys Conducted with Probability and...; 2011; Yeager, D. S., Krosnick, J. A., Chang, L. C., Javitz, H. S., Levendusky, M. S., Simpser, A., Wang, R...
- More comparisons of Probability and Non-Probability Sample Internet Surveys: The Dutch NOPVO Study.; 2011; Weiss, R., Krosnick, J. A., Yeager, D. S.
- Measuring User Satisfaction in the Lab: Questionnaire Mode, Physical Location, and Social Presence Concerns...; 2011; Jans, M., Romano, J. C., Ashenfelter, K. T., Krosnick, J. A.
- Measuring Intent to Participate and Participation in the 2010 Census and Their Correlates and Trends...; 2010; Pasek, J., Krosnick, J. A.
- Assessing the Accuracy of the Face-to-Face Recruited Internet Survey Platform: A Comparison of Behavioral...; 2010; Villar, A., Malka, A., Krosnick, J. A.
- Study of Non-Probability Sample Internet Surveys' Estimates of Consumer Product Usage and Demographic...; 2010; Yeager, D. S., Carter, A., Tewoldemedhin, H., Krosnick, J. A.
- Computing weights for the American National Election Study survey data; 2009; Debell, M., Krosnick, J. A.
- Question and Questionnaire Design; 2009; Krosnick, J. A., Presser, S.
- Dispositions and Outcome Rates in the “Face-to-Face/Internet Survey Platform” (the FFISP); 2009; Sakshaug, J. W., Tourangeau, K., Krosnick, J. A., Ackermann, A., Malka, A., Debell, M., Turakhia, C.
- Attrition in a Face-to-Face Recruited Internet Panel with Substantial Incentives; 2009; Malka, A., Krosnick, J. A., Ackermann, A., Debell, M., Turakhia, C.
- Lessons Learned About How to Accomplish Effective In-Person Recruitment of a Web-Equipped Survey Panel...; 2009; Ackermann, A., Krosnick, J. A., Turakhia, C., Debell, M., Malka, A., Jarmon, R.
- Comparison Study of Probability and Non-Probability Sample Surveys Conducted by Internet and Face to...; 2009; Yeager, D. S., Krosnick, J. A.
- Does Weighting Improve the Accuracy of Data from Non-Probability Internet Survey Panels of People Who...; 2009; Yeager, D. S., Krosnick, J. A.
- National Surveys Via RDD Telephone Interviewing vs. the Internet: Comparing Sample Representativeness...; 2009; Chang, L. C., Krosnick, J. A.
- Lottery Style Incentives and Response Rates to Online Surveys; 2009; Pearson, J. E., Krosnick, J. A., Levine, R. E.
- Scientific Survey Research: Sustainable in an Online World?; 2009; Krosnick, J. A.
- Money for Surveys: What about Data-Quality?; 2009; Krosnick, J. A.
- Optimal Design of Branching Questions to Measure Bipolar Constructs; 2009; Malhotra, N., Krosnick, J. A., Thomas, R. K.
- The accuracy of online surveys with non-probability samples; 2008; Krosnick, J. A.
- “For Example…,” How Different Example Types in Online Surveys Influence Frequency...; 2008; Berent, M., Krosnick, J. A.
- Comparing the Results of Probability and Non-probability Telephone and Internet Survey Data; 2008; Wang, R., Krosnick, J. A.
- Response option ordering: Reconciliating meanings conveyed by rating scale position and label. Unpublished...; 2007; Garland, P., Krosnick, J. A.
- The Effect of Survey Mode and Sampling on Inferences about Political Attitudes and Behavior: Comparing...; 2007; Malhotra, N., Krosnick, J. A.
- Face-to-Face Recruitment of an Internet Survey Panel: Lessons from an NSF-Sponsored Demonstration Project...; 2007; O'Muircheartaigh, C., Krosnick, J. A., Dennis, J. M.
- The measurement of attitudes; 2005; Krosnick, J. A., Judd, C. M., Wittenbrink, B.
- Comparing the results of probability and non-probability sample surveys; 2005; Krosnick, J. A.
- Effects of survey data collection mode on response quality: Implications for mixing modes in cross-national...; 2005; Krosnick, J. A.
- Vote Over-Reporting: Testing the Social Desirability Hypothesis in Telephone and Internet Surveys; 2005; Holbrook, A. L., Krosnick, J. A.
- Effect of Respondent Motivation and Task Difficulty on Nondifferentiation in Ratings: A Test of Satisficing...; 2005; Anand, S., Krosnick, J. A., Mulligan, K., Smith, W., Green, M. C., Bizer, G. Y.
- Comparing Major Survey Firms in Terms of Survey Satisficing: Telephone and Internet Data Collection; 2005; Krosnick, J. A., Nie, N., Rivers, D.
- Web Survey Methodologies: A Comparison of Survey Accuracy; 2005; Krosnick, J. A., Nie, N., Rivers, D.
- The Economist/YouGov Internet Presidential poll.; 2004; Fiorina, M., Krosnick, J. A.
- Comparing Data Quality in Telephone and Internet Surveys: Results of Lab and Field Experiments; 2003; Krosnick, J. A.
- Can What We Don’t Know (about “Don’t Know”) Hurt Us?: Effects of Item Non-response...; 2003; Krosnick, J. A., Behnke, C. S., Lafond, C.R., Thomas, R. K.
- How Does Ranking Rate?: A Comparison of Ranking and Rating Tasks.; 2003; Krosnick, J. A., Shaeffer, E. M., Thomas, R. K.
- Comparing Self-administered Computer Surveys and Auditory Interviews: An Experiment; 2002; Chang, L. C., Krosnick, J. A.
- More Is Not Necessarily Better: Effects of Response Categories on Measurement Stability and Validity; 2002; Thomas, R. K., Uldall, B. R., Krosnick, J. A.