Web Survey Bibliography
Survey researchers and methodologists seek new and innovative ways of evaluating the quality of data collected from sample surveys. Paradata, or data captured essentially for free as a by-product of computerized survey instruments, have increasingly been used in survey methodological work for this purpose (Couper, 1998). One error source that has been studied using paradata is measurement error, or the deviation of a response from a “true” value (Groves, 1989; Biemer and Lyberg, 2003). Although used in the psychological literature since the 1980s (see Fazio, 1990, for an early review) and adapted to telephone interviews by Bassili in the early 1990s (Bassili and Fletcher, 1991; Bassili and Scott, 1996), the adoption and use of paradata for studying measurement-error-related outcomes has grown exponentially with the growth of web surveys and the increased use of computerization in interviewer-administered surveys (Couper, 1998; Heerwegh, 2003; Couper and Lyberg, 2005). Paradata serve as a proxy for breakdowns in the cognitive response process and can identify problems that respondents and interviewers have with a survey instrument (Couper, 2000; Yan and Tourangeau, 2008).
Paradata can be collected at a variety of levels, resulting in a complex, hierarchical data structure. Examples of paradata collected automatically by many computerized survey software systems include timing data, keystroke data, mouse-click data, and information about the interface, such as the web browser and screen resolution. Examples of paradata that inform the measurement process but are not collected automatically include behavior codes, analyses of vocal characteristics, and interviewer evaluations or observations of the survey-taking process. The paradata available to be captured vary by mode of data collection and by the software used for data collection. One challenge is that not all off-the-shelf software programs capture paradata, so user-generated programs have been developed to assist in recording them. Further complicating matters is how the data are recorded, which ranges from text or sound files to ready-to-analyze variables. In this chapter, we review different types of paradata, evaluate how paradata differ by mode, and examine how to turn paradata into an analytic dataset.
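To make the last step concrete, a minimal sketch of turning event-level timing paradata into an analytic dataset might look like the following. The field names (respondent_id, question, event, timestamp) and the two-event display/submit log are illustrative assumptions, not any particular software's schema; real paradata files vary by survey package and often require package-specific parsing first.

```python
# Sketch: collapsing event-level paradata (one row per keystroke/screen event)
# into one response-time variable per respondent-question pair.
# All field names and values below are hypothetical examples.
from datetime import datetime

raw_events = [
    {"respondent_id": 1, "question": "q1", "event": "displayed", "timestamp": "2016-01-05 10:00:00"},
    {"respondent_id": 1, "question": "q1", "event": "submitted", "timestamp": "2016-01-05 10:00:12"},
    {"respondent_id": 1, "question": "q2", "event": "displayed", "timestamp": "2016-01-05 10:00:12"},
    {"respondent_id": 1, "question": "q2", "event": "submitted", "timestamp": "2016-01-05 10:00:45"},
]

def response_times(events):
    """Return seconds elapsed between display and submission
    for each (respondent, question) pair."""
    opened = {}   # (respondent_id, question) -> display time
    times = {}    # (respondent_id, question) -> seconds
    for e in events:
        key = (e["respondent_id"], e["question"])
        t = datetime.strptime(e["timestamp"], "%Y-%m-%d %H:%M:%S")
        if e["event"] == "displayed":
            opened[key] = t
        elif e["event"] == "submitted" and key in opened:
            times[key] = (t - opened[key]).total_seconds()
    return times

print(response_times(raw_events))
# {(1, 'q1'): 12.0, (1, 'q2'): 33.0}
```

The reshaping from a long event log to one row per respondent-question pair reflects the hierarchical structure described above; merging such derived variables onto the substantive survey responses then yields a single analytic dataset.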
Web survey bibliography - Olson, K. (13)
- The Effect of CATI Questions, Respondents, and Interviewers on Response Time; 2016; Olson, K.; Smyth, J. D.
- Identifying predictors of survey mode preference; 2015; Millar, M. M.; Olson, K.; Smyth, J. D.
- The Effect of Answering in a Preferred Versus a Non-Preferred Survey Mode on Measurement; 2014; Smyth, J. D.; Olson, K.; Kasabian, A.
- Assessing Within-Household Selection Methods in Household Mail Surveys; 2014; Olson, K.; Stange, M.; Smyth, J. D.
- Accuracy of Within-household Selection in Web and Mail Surveys of the General Population; 2014; Olson, K.; Smyth, J. D.
- Using Eye Tracking to Examine the Visual Design of Web Surveys; 2014; Zhou, Q.; Ricci, K.; Olson, K.; Smyth, J. D.
- Analyzing Paradata to Investigate Measurement Error; 2013; Yan, T.; Olson, K.
- Are You Seeing What I am Seeing? Exploring Response Option Visual Design Effects With Eye-Tracking; 2013; Libman, A.; Smyth, J. D.; Olson, K.
- Does Giving People Their Preferred Survey Mode Actually Increase Survey Participation Rates?; 2012; Olson, K.; Smyth, J. D.; Wood, H.
- Literacy and Data Quality in Self-Administered Surveys; 2011; Smyth, J. D.; Olson, K.
- Medium Node: NSF Census Research Network; 2011; McCutcheon, A. L.; Belli, R. F.; Olson, K.; Smyth, J. D.; Soh, L.-K.
- Comparing Numeric and Text Open-End Responses in Mail and Web Surveys; 2011; Olson, K.; Smyth, J.
- When do nonresponse follow-ups improve or reduce data quality? A meta-analysis and review of the existing...; 2008; Olson, K.; Feng, C.; Witt, L.