
Web Survey Bibliography

Title Using Client Side Paradata as Process Quality Indicators in Web Surveys
Year 2005
Access date 19.10.2005
Abstract

Statistics Norway has made all of its business questionnaires available on the Internet, amounting to more than 150 questionnaires. Most of these questionnaires were implemented in one common system for business surveys called Idun (Information and Data Exchange with Businesses). This was done over a very short period, starting in 2003 and finishing in spring 2005. The first priority in this period was clearly quantity before quality. In general, the functionality built into the web questionnaires is not yet very advanced. And although some qualitative tests were run during the development of the system, we have not yet established quality indicators that can identify strengths and weaknesses of the implemented questionnaires in an efficient and systematic way. The same is true for a similar system already established for municipality-to-state reporting (Kostra) and for a system that offers a common portal for all kinds of official electronic forms (Altinn). Without such quality indicators it is difficult to know where the resources for improvements should be allocated. In addition, we need process data in order to study the relationship between actual and perceived response burden, and the relationship between process quality and response quality in web surveys.

Client-side paradata are electronic observations that are collected while respondents fill in web questionnaires. To date, client-side paradata have mainly been used to test how specific question wordings or presentation tools work. The most commonly collected data are how long it takes to complete the questions and how often answers are changed. Our perspective is rather to think of client-side paradata as large-scale observations similar to those on which we base follow-up questions in small-scale cognitive interviewing, or to think of paradata collection as an automated way of carrying out behavioural coding. One idea that guides our work is to look for paradata that resemble observations whose meaning we learned to interpret in qualitative tests made during the development of the instrument. Another idea is that it is possible to specify how we would ideally want respondents to work their way through the questionnaires, and that deviation from this path can be regarded as a sign of design weaknesses. Based on this idea, we have for instance defined a good web questionnaire design as one in which a minimum of the possible error messages are activated. This spring, a few of the business surveys running on the Internet will be evaluated according to this and similar indicators. The results will be presented at the workshop.
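As an illustration only, not taken from the paper, the following is a minimal sketch of how such client-side paradata (time spent per question, answer changes, activated error messages) might be captured in the browser, assuming a plain HTML form whose fields have unique ids. All names (ParadataEvent, logEvent, instrumentForm, logValidationError) are hypothetical.

```typescript
// Hypothetical sketch of client-side paradata capture for a web questionnaire.
// Each event records which question it concerns, what happened, and when.

interface ParadataEvent {
  questionId: string;                       // id of the form field
  kind: "focus" | "change" | "error";       // what was observed
  timestampMs: number;                      // time since page load (ms)
}

const events: ParadataEvent[] = [];

function logEvent(questionId: string, kind: ParadataEvent["kind"]): void {
  events.push({ questionId, kind, timestampMs: performance.now() });
}

// Attach listeners to every form field; time spent per question and the
// number of answer changes can later be derived from the event log.
function instrumentForm(form: HTMLFormElement): void {
  for (const field of Array.from(form.elements)) {
    const el = field as HTMLInputElement;
    if (!el.id) continue;
    el.addEventListener("focus", () => logEvent(el.id, "focus"));
    el.addEventListener("change", () => logEvent(el.id, "change"));
  }
}

// Client-side validation errors are logged the same way; a design in which
// few such events occur would score well on the indicator described above.
function logValidationError(questionId: string): void {
  logEvent(questionId, "error");
}

// Derived indicator: how many times each answer was changed.
function answerChangeCounts(): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of events) {
    if (e.kind === "change") {
      counts.set(e.questionId, (counts.get(e.questionId) ?? 0) + 1);
    }
  }
  return counts;
}
```

In a real instrument the event log would be submitted alongside the answers so that the indicators (completion times, answer changes, error-message activations) can be computed server-side across all respondents.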

Year of publication: 2005
Bibliographic type: Conferences, workshops, tutorials, presentations

