Web Survey Bibliography
Survey researchers who regularly conduct online surveys may wish to monitor and ensure survey quality. From a respondent's perspective, the quality of a survey manifests itself as attitudes towards the survey. These attitudes can be assessed with questions concerning satisfaction, cognitive burden, and other survey-related issues. However, such scales do not explain why a specific survey was rated as very good or poor compared to other surveys. Furthermore, the available instruments for measuring attitudes towards surveys are unsuited for regular implementation because of their length. We propose a single-item open-ended question that can easily be implemented in any survey and that provides more insight into respondents' perception of a survey than rating scales do. To allow computer-assisted content analysis of the answers, we developed a dictionary. The dictionary is based on 6 online surveys covering different topics and samples (6,694 completed questionnaires, 4,150 answers to the item). The validity coefficient for the automatic coding of the two central aspects, positive and negative evaluations, is .951. A comparison of participants' answer tendencies showed the following results: women have a higher tendency towards positive evaluations than men, whereas higher-educated and older respondents tend towards more negative answers. The content of the answers fits the theoretical concept of respondent burden (Bradburn 1978). The dictionary-based approach allows the calculation of a satisfaction index for each survey, similar to the use of rating scales. Additionally, the proposed open-ended question captures a greater variety of evaluation dimensions than rating scales: respondents can evaluate the issues they themselves find most relevant to the quality of a given survey. Survey researchers are thus able to assess whether a survey receives a poor evaluation because it is "boring" or "too personal", or a good evaluation because its questions are "interesting" or because it is "important". The proposed one-item instrument allows survey researchers to track the quality of their surveys with minimal respondent burden while gathering differentiated feedback on the questionnaire.
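The abstract does not reproduce the dictionary or the coding procedure. As a minimal sketch of how a dictionary-based coding of open-ended evaluations and a per-survey satisfaction index could work in principle, the following Python fragment counts hits from hypothetical positive and negative term lists (the terms and the aggregation are illustrative assumptions, not the authors' actual dictionary or index formula):

```python
# Illustrative dictionary-based coding of open-ended survey evaluations.
# The term lists are invented for this sketch; the dictionary described
# in the abstract is not reproduced here.
import re

DICTIONARY = {
    "positive": {"interesting", "important", "good", "clear"},
    "negative": {"boring", "long", "personal", "confusing"},
}

def code_answer(answer):
    """Count positive and negative dictionary hits in one answer."""
    tokens = re.findall(r"[a-zäöüß]+", answer.lower())
    return {cat: sum(tok in terms for tok in tokens)
            for cat, terms in DICTIONARY.items()}

def satisfaction_index(answers):
    """Share of positive hits among all coded hits across a survey --
    one simple way to aggregate the codes into a per-survey index."""
    pos = neg = 0
    for answer in answers:
        codes = code_answer(answer)
        pos += codes["positive"]
        neg += codes["negative"]
    total = pos + neg
    return pos / total if total else None

answers = ["Interesting questions, but a bit too long.",
           "Boring and too personal."]
print(satisfaction_index(answers))  # 1 positive vs 3 negative hits -> 0.25
```

In this sketch the index is simply the proportion of positive codes, which makes surveys comparable on a 0-to-1 scale much like a mean rating; the actual index construction used by the authors may differ.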
Conference homepage (abstract)