Web Survey Bibliography

Title: Who fails and who passes instructed response item attention checks in web surveys?
Year: 2017
Access date: 15.09.2017
Abstract: Providing high-quality answers requires respondents to devote their attention to completing the questionnaire and, thus, to thoroughly assess each question. This is particularly challenging in web surveys, which lack interviewers who can assess how carefully respondents answer the questions and motivate them to be more attentive if necessary. Inattentive respondents can show response behavior that is commonly associated with measurement and nonresponse error: they only superficially comprehend the question, retrieve semi-relevant or irrelevant information, do not properly form a judgement, or fail to map a judgement onto the available response options. Consequently, attention checks such as Instructed Response Items (IRIs) have been proposed to identify inattentive respondents. An IRI is included as one item in a grid and instructs respondents to mark a specific response category (e.g., "click strongly agree"). The instruction is not incorporated into the question text but is placed like an item label. The present study focuses on IRI attention checks because they (i) are easy to create and implement in a survey, (ii) require little space in a questionnaire (i.e., one item in a grid), (iii) provide a distinct measure of failing or passing the attention check, (iv) are not cognitively demanding, and (v), most importantly, provide a measure of how thoroughly respondents read the items of a grid.
Most of the literature on attention checks has focused on the consistency of certain "key" constructs, so that IRIs typically serve as a local measure of inattentiveness for the grid in which they are incorporated (e.g., Berinsky, Margolis, and Sances 2014; Oppenheimer et al. 2009). This body of research concentrates on how the consistency of these key constructs can be improved by relying on the attention-check measure, for instance, by deleting "inattentive" respondents. In the present study, we extend the research on attention checks by addressing the question of which respondents fail an IRI and, thus, show questionable response behavior.
To answer this research question, we draw on a web-based panel survey with seven waves conducted between June and October 2013 in Germany. In each wave of the panel, an IRI attention check was implemented in a grid question with a five-point scale. Across waves, the proportion of respondents failing the IRIs varied between 6.1% and 15.7%. Based on these data, a logistic hybrid panel regression was used to investigate the effects of time-invariant (e.g., sex, age, education) and time-varying (e.g., interest in the survey topic, respondent motivation) factors on the likelihood of failing an IRI; a minimal sketch of this modeling approach follows below. The results of our study thus provide additional insights into who shows questionable response behavior in web surveys. Moreover, our methodological approach allows for a finer-grained discussion of whether this response behavior results from rather static respondent characteristics or is subject to change.
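To illustrate the hybrid (within-between) specification mentioned above, the sketch below shows one common way to set it up. Everything here is an assumption for illustration: the data file, the column names (pid, wave, fail_iri, topic_interest, motivation, and the demographics), and the use of GEE with an exchangeable working correlation as a population-averaged stand-in; the abstract does not specify the authors' exact estimator.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format panel data: one row per respondent-wave.
# All column names are assumptions, not from the paper.
df = pd.read_csv("panel_waves.csv")

# Hybrid (within-between) decomposition of the time-varying predictors:
# the person mean captures stable between-person differences, while the
# deviation from that mean captures within-person change across waves.
for var in ["topic_interest", "motivation"]:
    df[f"{var}_between"] = df.groupby("pid")[var].transform("mean")
    df[f"{var}_within"] = df[var] - df[f"{var}_between"]

# Logistic panel regression on the probability of failing the IRI,
# with time-invariant covariates entered directly and time-varying
# covariates split into their between and within components.
model = smf.gee(
    "fail_iri ~ female + age + education"
    " + topic_interest_between + topic_interest_within"
    " + motivation_between + motivation_within",
    groups="pid",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```

The appeal of this decomposition is that the within-person coefficients speak to whether failing an IRI is driven by change over time (e.g., waning motivation), while the between-person coefficients speak to stable respondent characteristics, which is exactly the static-versus-changing distinction the abstract raises.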
Year of publication: 2017
Bibliographic type: Conferences, workshops, tutorials, presentations

