
Web Survey Bibliography

Title Assessing Measurement Equivalence and Bias of Questions in Mixed-mode Surveys Under Controlled Sample Heterogeneity
Year 2012
Access date 30.06.2012
Abstract

Analysts working with data generated by different modes of data collection often want to be sure that their measurements are comparable. If a set of questions is designed to measure the same latent trait, confirmatory factor analysis (CFA) is a useful analytic tool for this purpose. It can be applied to assess whether properties such as measurement error, the association between latent traits and questions (measurement invariance), and the means of latent traits are equivalent across survey modes. We illustrate an application using empirical data from an experiment based on a national probability sample, in which 4,048 respondents were randomly assigned to face-to-face, telephone, mail, or web interviewing. Two related traits, "moral support of the police" and the "obligation to obey", were each measured with three questions and form the basis of our CFA model. The association between latent traits and questions was invariant across modes. However, measurement errors differed between modes; in particular, the self-administered modes yielded more reliable indicators than the interviewer-administered modes. Moreover, we found systematic bias between modes in the mean of one of the traits, and the effect signs suggest that respondents gave socially desirable answers in the interviewer-administered modes. A particularity of survey modes is that sample compositions are often heterogeneous. If this selection process is correlated with model elements such as the traits, it can bias invariance tests and decrease model fit. We illustrate available options to adjust for this problem, for example propensity score methods or covariate adjustment, all based on the use of auxiliary variables such as socio-demographics. In conclusion, interviewer-administered modes seem to produce measurements of lower quality than self-administered modes with respect to both random error and systematic bias. Modes may thus affect both the error variance and the bias of an estimate, and an effect can be suspected particularly between interviewer and non-interviewer modes.
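As a point of reference, the kind of multi-group CFA the abstract describes can be sketched in standard textbook notation; the symbols below are generic and are not taken from the paper itself. For respondent i in mode group g (face-to-face, telephone, mail, web):

    x_i^{(g)} = \tau^{(g)} + \Lambda^{(g)} \xi_i^{(g)} + \epsilon_i^{(g)}, \qquad \epsilon_i^{(g)} \sim N(0, \Theta^{(g)})

where x_i^{(g)} stacks the six question responses, \xi_i^{(g)} contains the two latent traits with means \kappa^{(g)}, \Lambda^{(g)} holds the loadings, and \Theta^{(g)} the measurement-error variances. Metric invariance, the property the abstract reports as holding, corresponds to the constraint

    \Lambda^{(1)} = \Lambda^{(2)} = \Lambda^{(3)} = \Lambda^{(4)},

while mode-specific \Theta^{(g)} allows measurement error to differ between modes and differences in the latent means \kappa^{(g)} capture the systematic (e.g. social-desirability) bias discussed above. In practice such constraints are evaluated by comparing the fit of nested multi-group models, which is also where the selection adjustments mentioned in the abstract matter, since selective sample composition correlated with the traits can distort those comparisons.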

Access/Direct link

Conference Homepage (abstract)

Year of publication 2012
Bibliographic type Conferences, workshops, tutorials, presentations

Web survey bibliography - Schouten, B.