
Web Survey Bibliography

Title Dynamic Instructions in Check-All-That-Apply Questions
Year 2016
Access date 08.06.2016
Abstract
In check-all-that-apply questions, respondents are asked to select all applicable responses. Although check-all-that-apply questions are one of the most commonly used question formats in (Web) surveys, respondents often do not spend sufficient effort to thoroughly process each of the response alternatives. Instead, respondents tend to select one of the first alternatives without sufficiently considering the remaining ones, resulting in primacy effects and an overall lower number of responses than actually apply to them. Conversely, respondents may select response alternatives that apply to them only vaguely, resulting in a considerably higher number of responses than desired. To ensure comparability, researchers often pair check-all-that-apply questions with instructions that specify the desired number of responses (e.g., "Please select the three most important aspects."). However, such instructions are often overlooked or ignored by respondents. Web surveys offer the opportunity to implement dynamic design features that may increase respondents' attention to such instructions. In this paper, we assess the effectiveness of instant feedback messages that appear once respondents start answering a question. Using a between-subjects design, we compared instructions provided as static text that is always visible together with the question stem (EG1), as dynamic instructions that appear instantly once respondents start answering the question (EG2), or as a combination of both (EG3). The experimental conditions were evaluated against a control group that received no instruction (CG). Initial findings on the effectiveness of the different instruction types showed that the combination of static and dynamic instructions is most effective in obtaining the desired number of responses. Comparisons with importance ratings shed light on whether respondents actually select the most important responses. In addition, response order effects were assessed to determine the extent of satisficing behavior in each experimental condition.
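The dynamic-instruction mechanism described in the abstract can be illustrated with a short sketch. The following TypeScript is not from the paper; it is a minimal, hypothetical example of how an instruction such as "Please select the three most important aspects." might be revealed the moment a respondent checks the first box of a check-all-that-apply item (as in conditions EG2/EG3), assuming a plain HTML checkbox group with an initially hidden instruction element; the element IDs and the selection limit are illustrative.

```ts
// Minimal sketch (not the authors' implementation): reveal a dynamic
// instruction as soon as a respondent starts answering a
// check-all-that-apply question.
// Assumed (hypothetical) markup:
//   <fieldset id="cata-item">
//     <p id="cata-instruction" hidden>Please select the three most important aspects.</p>
//     <label><input type="checkbox" name="aspect" value="a1"> Aspect 1</label>
//     ...
//   </fieldset>

const MAX_SELECTIONS = 3; // desired number of responses (assumed)

function attachDynamicInstruction(fieldset: HTMLFieldSetElement): void {
  const instruction = fieldset.querySelector<HTMLElement>("#cata-instruction");
  const boxes = Array.from(
    fieldset.querySelectorAll<HTMLInputElement>('input[type="checkbox"]')
  );
  if (!instruction) return;

  fieldset.addEventListener("change", () => {
    const checked = boxes.filter((b) => b.checked).length;
    // Instant feedback: show the instruction on the first selection
    // and keep it visible thereafter.
    if (checked > 0) instruction.hidden = false;
    // Optional extra cue once the respondent exceeds the limit.
    instruction.classList.toggle("over-limit", checked > MAX_SELECTIONS);
  });
}

const item = document.querySelector<HTMLFieldSetElement>("#cata-item");
if (item) attachDynamicInstruction(item);
```

In the EG3 condition described above, the same instruction text would additionally be shown statically with the question stem from the outset.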
 
 
Year of publication 2016
Bibliographic type Conferences, workshops, tutorials, presentations

Web survey bibliography - The American Association for Public Opinion Research (AAPOR) 71st Annual Conference, 2016
