Web Survey Bibliography
Title Dynamic Instructions in Check-All-That-Apply Questions
Year 2016
Access date 08.06.2016
Abstract
In check-all-that-apply questions, respondents are required to select all applicable responses. Although check-all-that-apply questions are one of the most commonly used question formats in (Web) surveys, respondents often do not spend sufficient effort to thoroughly process each of the response alternatives. Instead, respondents tend to select one of the first alternatives without sufficiently considering the remaining ones, resulting in primacy effects and an overall lower number of responses than actually apply to them. Conversely, respondents may select response alternatives that apply to them only vaguely, resulting in a considerably higher number of responses than desired. In order to ensure comparability, researchers often use instructions with check-all-that-apply questions that specify the number of responses desired (e.g., "Please select the three most important aspects."). However, such instructions are often overlooked or ignored by respondents. Web surveys offer the opportunity to implement dynamic design features that may increase respondents' attention to such instructions. In this paper, we assess the effectiveness of instant feedback messages that appear once respondents start answering a question. Using a between-subjects design, we assessed the effectiveness of providing instructions either as static instructions that are always visible together with the question stem (EG1), as dynamic instructions that instantly appear once respondents start answering the question (EG2), or as a combination of both (EG3). The experimental conditions were evaluated against a control group that received no instruction (CG). Initial findings on the effectiveness of the different instruction types showed that a combination of static and dynamic instructions is most effective in obtaining the desired number of responses. Comparisons with importance ratings shed light on the question of whether respondents actually select the most important responses. In addition, response order effects were assessed in order to determine the extent of satisficing behavior in each experimental condition.
Access/Direct link Conference Homepage (abstract)
Year of publication 2016
Bibliographic type Conferences, workshops, tutorials, presentations
Web survey bibliography - Kunz, T. (9)
- Dynamic Instructions in Check-All-That-Apply Questions ; 2016; Kunz, T.; Fuchs, M.
- The use and positioning of clarification features in web surveys; 2016; Metzler, A., Kunz, T., Fuchs, M.
- Rating Scales in Web Surveys: A Test of New Drag-and-Drop Rating Procedures; 2015; Kunz, T.
- Instant Interactive Feedback in Grid Questions: Reminding Web Survey; 2014; Kunz, T., Fuchs, M.
- Using Eye Tracking Data to Understand Respondents' Processing of Rating Scales; 2013; Kunz, T., Fuchs, M.
- Use of Drag-and-Drop Rating Scales in Web Surveys and Its Effect on Survey Reports and Data Quality; 2013; Kunz, T.
- Reducing Response Order Effects in Check-All-That-Apply Questions by Use of Dynamic Tooltip Instructions...; 2013; Kunz, T., Fuchs, M.
- Effects of Static versus Dynamic Formatting Instructions for Open-Ended Numerical Questions in Web Surveys...; 2012; Kunz, T., Fuchs, M.
- Video enhanced web survey; 2011; Fuchs, M., Kunz, T., Gebhard, F.