Web Survey Bibliography
Scope: This study focuses on measurement error, one component of observational error in the framework of total survey error (Groves et al., 2009). More precisely, this research is about formatting error, which occurs when a rating scale does not provide a perfectly matching response option (see Schwarz & Oyserman, 2001). Accordingly, data collected with two different closed-ended rating scales – conventional 5-point scales and graphical visual analogue scales (VASs) – were checked against each other.
About VASs: The general advantages of VASs are (1) high sensitivity because of their wide range, (2) data that are less affected by error, leading to more statistical power (see Funke, 2010), and (3) far more possibilities for data analysis (e.g., recoding into an odd or even number of categories, as well as into any empirical quantile).
Study Design: Respondents (N = 460) completed a 40-item Big Five personality test (available at http://ipip.ori.org/ipip/). In a between-subjects design, participants were randomly assigned to a questionnaire composed either of 5-point scales or of VASs. The VASs used in the study were plain horizontal lines with only the ends verbally anchored. They had a range of 250 values and were generated with the free Web service VAS Generator (see http://vasgenerator.net).
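The recoding possibilities mentioned above can be illustrated with a minimal sketch (not from the study's materials; function names and the example values are hypothetical): mapping VAS responses on the 0–250 range into a chosen number of equal-width categories, and into categories defined by empirical quantiles.

```python
# Illustrative sketch only: recoding VAS values (range 0-250) into
# (a) a chosen number of equal-width categories and (b) empirical quantiles.

def recode_bins(values, n_bins, vas_max=250):
    """Map each VAS value to one of n_bins equal-width categories (1..n_bins)."""
    return [min(int(v / vas_max * n_bins) + 1, n_bins) for v in values]

def recode_quantiles(values, probs):
    """Assign each value a category based on the empirical quantiles at probs."""
    ordered = sorted(values)
    # Cut points taken from the empirical distribution of the sample itself.
    cuts = [ordered[int(p * (len(ordered) - 1))] for p in probs]
    return [sum(v > c for c in cuts) + 1 for v in values]

vas = [0, 62, 125, 188, 250]
print(recode_bins(vas, 5))           # → [1, 2, 3, 4, 5]
print(recode_quantiles(vas, [0.5]))  # median split → [1, 1, 1, 2, 2]
```

Because the cut points can be placed anywhere on the continuum, the same VAS data can be recoded after the fact into an odd or even number of categories, or split at any empirical quantile, which is not possible with data collected on a fixed 5-point scale.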
Results: Overall, higher loadings on the predicted factor and lower loadings on unpredicted factors led to more explained variance with VASs than with 5-point scales. The expected factor structure was considerably clearer with VASs than with 5-point scales.
Conclusion: This study adds further evidence that VASs can have a beneficial effect on data quality and that the general reluctance to use this rating scale should be reconsidered. Overall, VASs’ positive scale characteristics should be taken advantage of in computerized data collection.
Web survey bibliography (4086)
- Testing a single mode vs a mixed mode design; 2011; Laaksonen, S.
- Germans' segregation preferences and immigrant group size: A factorial survey approach; 2011; Schlueter, E., Ullrich, J., Schmidt, P.
- Errors within web-based surveys: a comparison between two different tools for the analysis of tourist...; 2011; Polizzi, G., Oliveri, A. M.
- Benefits of Structured DDI Metadata across the Data Lifecycle: The STARDAT Project at the GESIS Data...; 2011; Linne, M., Brislinger, E., Zenk-Moeltgen, W.
- Microdata Information System MISSY; 2011; Bohr, J.
- The Use of Structured Survey Instrument Metadata throughout the Data Lifecycle; 2011; Hansen, S. E.
- DDI and the Lifecycle of Longitudinal Surveys; 2011; Hoyle, L., Wackerow, J.
- Dissemination of survey (meta)data in the LISS data archive; 2011; Streefkerk, M., Elshout, S.
- Underreporting in Interleafed Questionnaires: Evidence from Two Web Surveys; 2011; Medway, R., Viera Jr., L., Turner, S., Marsh, S. M.
- The use of cognitive interviewing methods to evaluate mode effects in survey questions; 2011; Gray, M., Blake, M., Campanelli, P., Hope, S.
- Does the direction of Likert-type scales influence response behavior in web surveys?; 2011; Keusch, F.
- Cross-country Comparisons: Effects of Scale Type and Response Style Differences; 2011; Thomas, R. K.
- Explaining more variance with visual analogue scales: A Web experiment; 2011; Funke, F.
- A Comparison of Branching Response Formats with Single Response Formats; 2011; Thomas, R. K.
- Different functioning of rating scale formats – results from psychometric and physiological experiments...; 2011; Koller, M., Salzberger, T.
- Cognitive process in answering questions: Are verbal labels in rating scales attended to?; 2011; Menold, N., Kaczmirek, L., Lenzner, T.
- Experiments on the Design of the Left-Right Self-Assessment Scale; 2011; Zuell, C., Scholz, E., Behr, D.
- Is it a good idea to optimise question format for mode of data collection? Results from a mixed modes...; 2011; Nicolaas, G., Campanelli, P., Hope, S., Lynn, P., Nandi, A.
- Separating selection from mode effects when switching from single (CATI) to mixed mode design (CATI /...; 2011; Carstensen, J., Kriwy, P., Krug, G., Lange, C.
- Testing between-mode measurement invariance under controlled selectivity conditions; 2011; Klausch, L. T.
- Using propensity score matching to separate mode- and selection effects; 2011; Lugtig, P. J., Lensvelt-Mulders, G. J.
- A mixed mode pilot on consumer barometer; 2011; Taskinen, P., Simpanen, M.
- Separation of selection bias and mode effect in mixed-mode survey – Application to the face-to...; 2011; Bayart, C., Bonnel, P.
- Optimization of dual frame telephone survey designs; 2011; Slavec, A., Vehovar, V.
- A Comparison of CAPI and PAPI through a Randomized Field Experiment; 2011; De Weerdt, J.
- Flexibility of Web Surveys: Probing 'do-not-know' over the Phone and on the Web; 2011; Hox, J., de Leeuw, E. D.
- Changing research methods in Ukraine: CATI or Mixed-Mode Surveys?; 2011; Paniotto, V., Kharchenko, N.
- The effects of mixed mode designs on simple and complex analyses; 2011; Martin, P., Lynn, P.
- In the Face of Declining Budgets: The Student Experience at Washington State University ; 2011; Allen, T., Dillman, D. A., Garza, B., Millar, M. M.
- Use of Cognitive Shortcuts in Landline and Cell Phone Surveys; 2011; Everett, S. E., Kennedy, C.
- Mode Effect or Question Wording? Measurement Error in Mixed Mode Surveys; 2011; de Leeuw, E. D., Hox, J., Scherpenzeel, A.
- Conducting Respondent Driven Sampling on the Web: An Experimental Approach to Recruiting Challenges; 2011; Kapteyn, A., Schonlau, M.
- Discussion of Melanie Revilla's presentation on "Impact of the mode of data collection on...; 2011; Blom, A. G.
- Comments on the paper - Geetha Garib: Perceived diversity among employees; 2011; Vehovar, V.
- Framing Effects and Expected Social Security Claiming Behavior; 2011; Brown, Je., Kapteyn, A., Mitchell, O. S.
- Building an Online Immigrant Panel: Response and Representativity; 2011; Scherpenzeel, A.
- Effects of Incentives and Prenotification on Response Rates and Costs in a National Web Survey of Physicians...; 2011; Bonham, V., Day, B., Dykema, J., Sellers, S., Stevenson, J.
- Nonsampling errors in dual frame telephone surveys ; 2011; Brick, J. M., Flores Cervantes, I., Lee, S., Norman, G.
- Measurement invariance in training evaluation: Old question, new context; 2011; Clark, A. P., Gissel, A., Stoughton, J. W., Whelan, T. J.
- Towards Usage of Avatar Interviewers in Web Surveys; 2011; Jans, M., Malakhoff, L.
- Measurement Error in Mixed Mode Surveys: Mode or Question Format?; 2011; de Leeuw, E. D., Hox, J.
- Toward a Benefit-Cost Theory of Survey Participation: Evidence, Further Tests, and Implications; 2011; Singer, E.
- Using Community Information and Survey Methodology for Bias Reduction to Enhance the Quality of the...; 2011; Harvey, J., Prabhakaran, J., Spera, C., Zhang, Zh.
- Response Quantity, Response Quality, and Costs of Building an Online Panel via Social Contacts.; 2011; Toepoel, V.
- The Influence Of The Direction Of Likert-Type Scales In Web Surveys On Response Behavior In Different...; 2011; Keusch, F.
- An Injured Party?: A Comparison of Political Party Response Formats in Party Identification.; 2011; Schwarz, S., Barlas, F. M., Thomas, R. K., Corso, R. A., Szoc, R.
- Asking Sensitive Questions: Do They Affect Participation In Follow-Up Surveys?; 2011; Schaurer, I., Struminskaya, B., Kaczmirek, L., Bandilla, W.
- Designing Questions for Web Surveys: Effects of Check-List, Check-All, and Stand-Alone Response Formats...; 2011; Dykema, J., Schaeffer, N. C., Beach, J., Lein, V., Day, B.
- Differential Sampling Based on Historical Individual-Level Data in Online Panels.; 2011; Kelly, R. H.
- Web Survey Live Validations - What Are They Doing?; 2011; Crawford, S. D., McClain, C.