Web Survey Bibliography
Relevance & Research Question: Response latency measurement and eye tracking are two computer-assisted pretesting methods that may be particularly useful for evaluating Web questionnaires. In contrast to other techniques (e.g., expert reviews, qualitative interviews), both methods produce nonreactive, objective measures of behavior that are affected neither by the researcher (and the way she tests the questions) nor by the research context. While previous studies have shown that longer response latencies and fixation times are indicative of problematic questions (Lenzner et al., 2010, 2011), little is known about the utility of the two methods (or measures) in practical pretesting contexts (e.g., in testing draft questions). This study examines whether response latencies and fixation times can reliably distinguish flawed survey questions from improved ones.
Methods & Data: In a laboratory experiment, respondents’ eye movements and response latencies were recorded while they answered two versions of a Web questionnaire. One group (n=22) received a questionnaire containing poorly worded questions; the other group (n=22) received the same questionnaire with improved question wordings. Because response latencies and fixation times vary considerably between individuals, we computed a baseline fixation rate (eye tracking) and a baseline reading rate (response latency) for every respondent from seven additional questions asked in the same Web survey. In the analyses, whenever the response or fixation time for a question exceeded a respondent’s baseline by more than 15%, the question was deemed problematic. (The analyses were repeated with 10%, 20%, and 25% thresholds; all conclusions remained unchanged.)
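The baseline-plus-threshold rule described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the per-character normalization, and all numbers are assumptions introduced here for illustration.

```python
# Hypothetical sketch of the threshold rule: a question is flagged as
# problematic for a respondent when its time-per-character rate exceeds
# that respondent's baseline rate by more than the chosen threshold.
# Normalizing by question length is an assumption; the abstract does not
# specify how rates were computed.
from statistics import mean

def baseline_rate(calibration_times_ms, calibration_lengths):
    """Baseline time per character, estimated from the calibration questions."""
    return mean(t / n for t, n in zip(calibration_times_ms, calibration_lengths))

def is_problematic(question_time_ms, question_length, baseline, threshold=0.15):
    """Flag the question if its rate exceeds the baseline by more than threshold."""
    return question_time_ms / question_length > baseline * (1 + threshold)

# Illustrative numbers only: baseline from seven calibration questions
# (times in ms, question lengths in characters), then one test question.
base = baseline_rate([3200, 2800, 4100, 3600, 3000, 3900, 3400],
                     [80, 70, 100, 95, 75, 98, 85])
flag = is_problematic(5200, 90, base)
```

Repeating the run with `threshold=0.10`, `0.20`, or `0.25` mirrors the robustness check mentioned in the abstract.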
Results: Fixation rate (eye tracking) was consistently more accurate than reading rate (response latency) in classifying the questions as flawed or improved. The overall accuracy of the fixation rate ranged from 60% to 85%, the accuracy of the reading rate from 43% to 70%. Also, the eye tracking measure resulted in considerably fewer misses (failures to detect problems) and fewer false alarms.
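The accuracy, misses, and false alarms reported above are standard classification quantities. As a minimal sketch (with made-up labels, not the study's data), they can be computed from the flagged-vs-actual status of each question like this:

```python
# Hypothetical sketch: deriving overall accuracy, misses (flawed questions
# not flagged), and false alarms (improved questions wrongly flagged) from
# per-question labels. All inputs here are illustrative.
def classification_summary(flagged, actually_flawed):
    hits = sum(f and a for f, a in zip(flagged, actually_flawed))
    misses = sum((not f) and a for f, a in zip(flagged, actually_flawed))
    false_alarms = sum(f and (not a) for f, a in zip(flagged, actually_flawed))
    correct_rejections = sum((not f) and (not a)
                             for f, a in zip(flagged, actually_flawed))
    accuracy = (hits + correct_rejections) / len(flagged)
    return accuracy, misses, false_alarms
```

In this framing, the result that eye tracking produced fewer misses and fewer false alarms means both off-diagonal cells of its confusion matrix were smaller.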
Added Value: This study suggests that fixation times and response latencies are potentially useful measures for pretesting (draft) Web questionnaires, although the accuracy with which they identify problematic questions is not yet satisfactory.
Web survey bibliography - General Online Research Conference (GOR) 2012 (26)
- Is "chapterisation" a viable alternative to traditional progress indicators?; 2012; Spicer, R., Dowling, Z.
- Exploring New Pathways to Survey Recruitment; 2012; Bilgram, V., Stadler, D., Jawecki, G.
- Understanding selection bias in a worldwide, volunteer web-survey; 2012; Tijdens, K., Steinmetz, S.
- Does Mode Matter? Initial Evidence from the German Longitudinal Election Study (GLES); 2012; Blumenstiel, J. E., Rossmann, J.
- The Representativity of Web Surveys of the General Population compared to Traditional Modes and Mixed...; 2012; Klausch, L. T., Schouten, B., Hox, J.
- Surveytainment 2.0: Why investing 10 minutes more in constructing your questionnaire is worth considering...; 2012; Muehle, A., Tress, F., Schmidt, S., Winkler, T.
- Market research online community (MROC) versus focus group; 2012; Zuber, M.
- Data quality in MAWI and CAWI; 2012; Mavletova, A. M., Blasius, J.
- Time use data collection using Smartphones: Results of a pilot study among experienced and inexperienced...; 2012; Scherpenzeel, A., Sonck, N., Fernee, H., Morren, Me.
- Scrutinizing Dynamics – Rolling panel waves in theory and practice; 2012; Faas, T., Blumenberg, J. N.
- Little experience with technology as a cause of nonresponse in online surveys; 2012; Struminskaya, B., Schaurer, I., Kaczmirek, L., Bandilla, W.
- Automatic Forwarding on Web Surveys – Some Outlines and Remarks; 2012; Selkaelae, A.
- Thinking, Planning & Operationalizing Empirical Mixed Methods Research Design; 2012; Ruhi, U.
- Continuous large-scale volunteer web-surveys: The experience of Lohnspiegel and WageIndicator; 2012; Oez, F.
- Is Pretesting Established Among Online Survey Tool Users?; 2012
- An Evaluation of Two Non-Reactive Web Questionnaire Pretesting Methods; 2012; Lenzner, T.
- Recommendations for implementing online surveys and simple experiments in social and behavioural research...; 2012; Hewson, C. M.
- High potential for mobile Web surveys: Findings from a survey representative for German Internet users...; 2012; Funke, F., Wachenfeld, A.
- A taxonomy of paradata for web surveys and computer assisted self interviewing (Casi); 2012; Callegaro, M.
- Can Social Media Research replace traditional research methods?; 2012; Faber, T., Einhorn, M., Hofmann, O., Loeffler, M.
- Bad Boy Matrix Question – Whatcha gonna do when they come for you?; 2012; Tress, F.
- Matrix vs. Single Question Formats in Web Surveys: Results from a large scale experiment; 2012; Klausch, L. T., de Leeuw, E. D., Hox, J., de Jongh, A., Roberts, A.
- Effects of Static versus Dynamic Formatting Instructions for Open-Ended Numerical Questions in Web Surveys...; 2012; Kunz, T., Fuchs, M.
- FamilyVote – Conducting online surveys with children and families; 2012; Geissler, H., Peeters, H.
- The influence of social desirability on data quality in face-to-face and web surveys; 2012; Keusch, F.
- Reducing the Threat of Sensitive Questions in Online Surveys; 2012; Couper, M. P.