Web Survey Bibliography
Relevance and research question: Nonserious answering is one of the most important threats to the validity of online research (Oppenheimer, 2009). Respondents with little motivation to participate, or respondents who are interested only in a survey's content or methodology, may decide to participate without giving serious answers, thereby increasing noise and reducing experimental power.
Methods and data: One approach to identifying nonserious participants is to ask respondents directly about the seriousness of their participation (Klauer, Musch & Naumer, 2000; Musch & Klauer, 2002; Reips, 2000, 2002). We hypothesized that, when given an opportunity to do so, randomly answering participants might be willing to identify themselves in order to help researchers (Reips, 2009). To validate this approach, we questioned a sample of more than 3,000 respondents in the week prior to the 2009 German federal election to the Bundestag. We asked the participants whether they were responding to the questions in earnest, expecting that the exclusion of nonserious participants would improve data quality.
Results: We found that restricting analyses to serious participants allowed a more valid forecast of the election result. Moreover, serious participants answered attitudinal questions in a more consistent manner than nonserious participants. For example, among serious participants, self-ratings on a left-right scale correlated more strongly with approval ratings for the two major parties (CDU/CSU and SPD), and intentions to vote corresponded better with participants' recollections of their voting behavior in a previous election.
Added value: Taken together, our results document the usefulness of seriousness checks for improving data validity. We therefore recommend routinely employing seriousness checks in online surveys. Nonserious participants should be allowed to render their data invalid, rather than letting their data invalidate the results.
Web Survey Bibliography - Musch, J. (5)
- Seriousness Checks are Useful to Improve Data Validity in Online Research; 2010; Diedenhofen, D., Aust, F., Ullrich, S., Musch, J.
- Selection Bias in Web Surveys and the Use of Propensity Scores in Forecasting the Result of the 2009...; 2010; Musch, J., Ullrich, S., Diedenhofen, D.
- Psychological Experimenting on the World Wide Web: Investigating content effects in syllogistic reasoning...; 2002; Musch, J., Klauer, K. C.
- Improving Survey Research on the World-Wide Web Using the Randomized Response Technique; 2001; Musch, J., Bröder, A., Klauer, K. C.
- A Brief History of Web Experimenting; 2000; Musch, J., Reips, U.-D.