Web Survey Bibliography
Web surveys of campus populations are a distinctive survey sub-type. On the plus side, we have e-mail addresses for everyone; on the minus side, not everyone uses their campus address. On the plus side, it is easy and inexpensive to invite the entire population, producing a large "sample" (I put "sample" in quotation marks because these surveys are not typically sample surveys); on the minus side, response rates are typically low, raising issues of non-representativeness and non-response bias.

On the plus side, we can easily weight the survey "sample" to known population parameters such as gender, ethnicity, academic level, residential status, and type of admit, among others. But while weighting may indeed correct for non-representativeness of the survey "sample," it cannot correct for non-response bias unrelated to the factors included in the weights – in particular, the possibility that respondents, whatever their characteristics, are more engaged and more satisfied than non-respondents. We must also be careful not to let a small under-represented group of respondents speak, through large weights, for a much larger group of non-respondents.

In this paper, I review the literature on weighting of student surveys and analyze several surveys conducted at The University at Albany, SUNY, during the last few years: in particular, the 2006 Student Opinion Survey (administered to a cluster sample of classrooms) and the 2007 Student Experience Survey (administered by web to all matriculated undergraduates). I show that weighting, while an important – and, I argue, necessary – tool, does not solve every problem. At the same time, higher response rates are not a panacea: even high-response-rate surveys are subject to the same types of problems as low-response-rate surveys, albeit to a lesser degree, and should therefore be weighted as well.
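The weighting described above – adjusting the respondent pool to known population parameters – is commonly implemented as cell weighting (post-stratification): each respondent in a cell receives the ratio of that cell's population share to its sample share. The abstract does not specify the author's exact procedure, so the following is only a minimal sketch of the general technique, using a hypothetical gender-only example; real student-survey weights would cross several variables (gender × ethnicity × academic level, etc.).

```python
from collections import Counter

def poststratification_weights(sample_cells, population_props):
    """Cell weighting: a respondent in cell c gets weight
    (population proportion of c) / (sample proportion of c)."""
    n = len(sample_cells)
    sample_props = {c: k / n for c, k in Counter(sample_cells).items()}
    return [population_props[c] / sample_props[c] for c in sample_cells]

# Hypothetical example: population is 55% female, 45% male,
# but women are over-represented among respondents (70 of 100).
sample = ["F"] * 70 + ["M"] * 30
weights = poststratification_weights(sample, {"F": 0.55, "M": 0.45})
# Female respondents are weighted down (0.55/0.70), male respondents
# up (0.45/0.30 = 1.5); the weights average to 1 over the sample.
```

Note how the 30 male respondents each carry a weight of 1.5 – exactly the "small group speaking for a larger one" concern raised above, which grows severe when a cell has very few respondents. And no weight on observed characteristics can remove bias on unobserved ones, such as respondents being more engaged than non-respondents.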