Web Survey Bibliography
Title Who fails and who passes instructed response item attention checks in web surveys?
Author Gummer, T.; Rossmann, J.; Silber, H.
Year 2017
Access date 15.09.2017
Abstract Providing high-quality answers requires respondents to devote their attention to completing the questionnaire and, thus, to thoroughly assess each question. This is particularly challenging in web surveys, which lack interviewers who can assess how carefully respondents answer the questions and motivate them to be more attentive if necessary. Inattentiveness can provoke response behavior that is commonly associated with measurement and nonresponse error: respondents may only superficially comprehend the question, retrieve semi-relevant or irrelevant information, fail to properly form a judgement, or fail to map a judgement to the available response options. Consequently, attention checks such as Instructed Response Items (IRIs) have been proposed to identify inattentive respondents. An IRI is included as one item in a grid and instructs respondents to mark a specific response category (e.g., “click strongly agree”). The instruction is not incorporated into the question text but is placed like an item label. The present study focuses on IRI attention checks because they (i) are easy to create and implement in a survey, (ii) require little space in a questionnaire (i.e., one item in a grid), (iii) provide a distinct measure of failing or passing the attention check, (iv) are not cognitively demanding, and (v), most importantly, provide a measure of how thoroughly respondents read the items of a grid.
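As an illustration of point (iii), the pass/fail measure reduces to a simple equality test. The following is a minimal sketch in Python, with hypothetical column names and made-up data (not from the study), of how an IRI failure could be flagged in respondent-level records:

    import pandas as pd

    # Hypothetical answers to the IRI item of a five-point grid;
    # respondent IDs, values, and column names are assumptions.
    df = pd.DataFrame({
        "respondent_id": [1, 2, 3],
        "iri_item": [5, 3, 5],
    })

    INSTRUCTED_CATEGORY = 5  # e.g., "strongly agree" on a five-point scale

    # A respondent fails the check with any answer other than the instructed one.
    df["failed_iri"] = df["iri_item"] != INSTRUCTED_CATEGORY

    print(df)
    print(f"Proportion failing the IRI: {df['failed_iri'].mean():.1%}")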
Most of the literature on attention checks has focused on the consistency of some “key” constructs, so that IRIs typically serve as a local measure of inattentiveness for the grid in which they are incorporated (e.g., Berinsky, Margolis and Sances 2014, Oppenheimer et al. 2009). This body of research focuses heavily on how the consistency of these key constructs can be improved by relying on the measure provided by these attention checks, for instance, by deleting “inattentive” respondents. In the present study, we extend the research on attention checks by addressing the question of which respondents fail an IRI and, thus, show questionable response behavior.
To answer this research question, we draw on a web-based panel survey with seven waves that was conducted between June and October 2013 in Germany. In each wave of the panel, an IRI attention check was implemented in a grid question with a five-point scale. Across waves, the proportion failing the IRIs varied between 6.1% and 15.7%. Based on these data, logistic hybrid panel regression was used to investigate the effects of time-invariant (e.g., sex, age, education) and time-varying (e.g., interest in the survey topic, respondent motivation) factors on the likelihood of failing an IRI. Consequently, the results of our study will provide additional insights into who shows questionable response behavior in web surveys. Moreover, our methodological approach allows for a finer-grained discussion of whether this response behavior is the result of rather static respondent characteristics or whether it is subject to change.
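The abstract does not spell out the hybrid specification; as a minimal sketch, assuming standard within-between notation rather than the authors' exact model, a logistic hybrid panel regression decomposes each time-varying predictor into a person mean and a wave-specific deviation:

\[
\operatorname{logit}\,\Pr(y_{it} = 1) \;=\; \beta_0 \;+\; \beta_W \,(x_{it} - \bar{x}_i) \;+\; \beta_B \,\bar{x}_i \;+\; \gamma\, z_i \;+\; u_i ,
\]

where \(y_{it}\) indicates that respondent \(i\) fails the IRI in wave \(t\), \(x_{it}\) is a time-varying factor (e.g., respondent motivation) with person mean \(\bar{x}_i\), \(z_i\) is a time-invariant characteristic (e.g., sex, age, education), and \(u_i\) is a person-level random effect. Comparing the within-person estimate \(\beta_W\) with the between-person estimate \(\beta_B\) is what permits the distinction drawn above between static respondent characteristics and behavior that is subject to change.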
Access/Direct link Conference Homepage (abstract) / (presentation)
Year of publication 2017
Bibliographic type Conferences, workshops, tutorials, presentations
Web survey bibliography - Germany (361)
- Interviewer effects on onliner and offliner participation in the German Internet Panel; 2017; Herzing, J. M. E.; Blom, A. G.; Meuleman, B.
- Comparing the same Questionnaire between five Online Panels: A Study of the Effect of Recruitment Strategy...; 2017; Schnell, R.; Panreck, L.
- Push2web or less is more? Experimental evidence from a mixed-mode population survey at the community...; 2017; Neumann, R.; Haeder, M.; Brust, O.; Dittrich, E.; von Hermanni, H.
- Social Desirability and Undesirability Effects on Survey Response latencies; 2017; Andersen, H.; Mayerl, J.
- Comparison of response patterns in different survey designs: a longitudinal panel with mixed-mode and...; 2017; Ruebsamen, N.; Akmatov, M. K.; Castell, S.; Karch, A.; Mikolajczyk, R. T.
- Mobile Research im Kontext der digitalen Transformation; 2017; Friedrich-Freksa, M.
- Kognitives Pretesting; 2017; Neuert, C.
- Grundzüge des Datenschutzrechts und aktuelle Datenschutzprobleme in der Markt- und Sozialforschung; 2017; Schweizer, A.
- Article Establishing an Open Probability-Based Mixed-Mode Panel of the General Population in Germany...; 2017; Bosnjak, M.; Dannwolf, T.; Enderle, T.; Schaurer, I.; Struminskaya, B.; Tanner, A.; Weyandt, K.
- Socially Desirable Responding in Web-Based Questionnaires: A Meta-Analytic Review of the Candor Hypothesis...; 2016; Gnambs, T.; Kaspar, K.
- Methodological Aspects of Central Left-Right Scale Placement in a Cross-national Perspective; 2016; Scholz, E.; Zuell, C.
- Predicting and Preventing Break-Offs in Web Surveys; 2016; Mittereder, F.
- Incorporating eye tracking into cognitive interviewing to pretest survey questions; 2016; Neuert, C.; Lenzner, T.
- Geht’s auch mit der Maus? – Eine Methodenstudie zu Online-Befragungen in der Jugendforschung...; 2016; Heim, R.; Konowalczyk, S.; Grgic, M.; Seyda, M.; Burrmann, U.; Rauschenbach, T.
- Comparing Cognitive Interviewing and Online Probing: Do They Find Similar Results?; 2016; Meitinger, K.; Behr, D.
- Device Effects - How different screen sizes affect answers in online surveys; 2016; Fisher, B.; Bernet, F.
- Effects of motivating question types with graphical support in multi channel design studies; 2016; Luetters, H.; Friedrich-Freksa, M.; Vitt, S.; Goldstein, D. G.
- Analyzing Cognitive Burden of Survey Questions with Paradata: A Web Survey Experiment; 2016; Hoehne, J. K.; Schlosser, S.; Krebs, D.
- Secondary Respondent Consent in the German Family Panel; 2016; Schmiedeberg, C.; Castiglioni, L.; Schroeder, J.
- Does Changing Monetary Incentive Schemes in Panel Studies Affect Cooperation? A Quasi-experiment on...; 2016; Schaurer, I.; Bosnjak, M.
- Using Cash Incentives to Help Recruitment in a Probability Based Web Panel: The Effects on Sign Up Rates...; 2016; Krieger, U.
- The Mobile Web Only Population: Socio-demographic Characteristics and Potential Bias ; 2016; Fuchs, M.; Metzler, A.
- The Impact of Scale Direction, Alignment and Length on Responses to Rating Scale Questions in a Web...; 2016; Keusch, F.; Liu, M.; Yan, T.
- Web Surveys Versus Other Survey Modes: An Updated Meta-analysis Comparing Response Rates ; 2016; Wengrzik, J.; Bosnjak, M.; Lozar Manfreda, K.
- Retrospective Measurement of Students’ Extracurricular Activities with a Self-administered Calendar...; 2016; Furthmueller, P.
- Privacy Concerns in Responses to Sensitive Questions. A Survey Experiment on the Influence of Numeric...; 2016; Bader, F.; Bauer, J.; Kroher, M.; Riordan, P.
- Ballpoint Pens as Incentives with Mail Questionnaires – Results of a Survey Experiment; 2016; Heise, M.
- Does survey mode matter for studying electoral behaviour? Evidence from the 2009 German Longitudinal...; 2016; Bytzek, E.; Bieber, I. E.
- Forecasting proportional representation elections from non-representative expectation surveys; 2016; Graefe, A.
- Setting Up an Online Panel Representative of the General Population The German Internet Panel; 2016; Blom, A. G.; Gathmann, C.; Krieger, U.
- Online Surveys are Mixed-Device Surveys. Issues Associated with the Use of Different (Mobile) Devices...; 2016; Toepoel, V.; Lugtig, P. J.
- Stable Relationships, Stable Participation? The Effects of Partnership Dissolution and Changes in Relationship...; 2016; Mueller, B.; Castiglioni, L.
- Will They Stay or Will They Go? Personality Predictors of Dropout in Online Study; 2016; Nestler, S.; Thielsch, M.; Vasilev, E.; Back, M.
- Respondent Conditioning in Online Panel Surveys: Results of Two Field Experiments; 2016; Struminskaya, B.
- A Privacy-Friendly Method to Reward Participants of Online-Surveys; 2015; Herfert, M.; Lange, B.; Selzer, A.; Waldmann, U.
- The impact of frequency rating scale formats on the measurement of latent variables in web surveys -...; 2015; Menold, N.; Kemper, C. J.
- Investigating response order effects in web surveys using eye tracking; 2015; Hoehne, J. K.; Lenzner, T.
- Implementation of the forced answering option within online surveys: Do higher item response rates come...; 2015; Decieux, J. P.; Mergener, A.; Neufang, K.; Sischka, P.
- Translating Answers to Open-ended Survey Questions in Cross-cultural Research: A Case Study on the Interplay...; 2015; Behr, D.
- The Effects of Questionnaire Completion Using Mobile Devices on Data Quality. Evidence from a Probability...; 2015; Bosnjak, M.; Struminskaya, B.; Weyandt, K.
- Are they willing to use the web? First results of a possible switch from PAPI to CAPI/CAWI in an establishment...; 2015; Ellguth, P.; Kohaut, S.
- Measuring Political Knowledge in Web-Based Surveys: An Experimental Validation of Visual Versus Verbal...; 2015; Munzert, S.; Selb, P.
- Changing from CAPI to CAWI in an ongoing household panel - experiences from the German Socio-Economic...; 2015; Schupp, J.; Sassenroth, D.
- Rating Scales in Web Surveys: A Test of New Drag-and-Drop Rating Procedures; 2015; Kunz, T.
- Mode System Effects in an Online Panel Study: Comparing a Probability-based Online Panel with two Face...; 2015; Struminskaya, B.; De Leeuw, E. D.; Kaczmirek, L.
- Higher response rates at the expense of validity? Consequences of the implementation of the ‘forced...; 2015; Decieux, J. P.; Mergener, A.; Neufang, K.; Sischka, P.
- A quasi-experiment on effects of prepaid versus promised incentives on participation in a probability...; 2015; Schaurer, I.; Bosnjak, M.
- Response Effects of Prenotification, Prepaid Cash, Prepaid Vouchers, and Postpaid Vouchers: An Experimental...; 2015; van Veen, F.; Goeritz, A.; Sattler, S.
- Recruiting Respondents for a Mobile Phone Panel: The Impact of Recruitment Question Wording on Cooperation...; 2015; Busse, B.; Fuchs, M.
- The Influence of the Answer Box Size on Item Nonresponse to Open-Ended Questions in a Web Survey ; 2015; Zuell, C.; Menold, N.; Koerber, S.