
Web Survey Bibliography

Title: Measurement options, measurement error, and usability in mobile surveys
Author: Pferdekämper, T., Bosnjak, M., Metzger, G.
Year: 2009
Access date: 13.08.2009
Abstract

Mobile surveys offer new avenues for collecting primary data by widening researchers' options to address coverage, sampling, and timing issues. However, a number of new methodological questions emerge, one of them being the measurement possibilities and corresponding errors involved in conducting mobile surveys. Accordingly, we conducted an empirical study aimed at exploring the usability of different mobile measurement scales. Five prototypical mobile question types were analyzed with regard to their measurement and usability properties on (1) objective, unobtrusive data (e.g., non-response and drop-out rates, loading times, response latencies) and (2) subjectively rated usability (e.g., liking, perceived effort, ease of access, amount of unintended inputs, perceived fluency of the answering process).

The study consists of two survey waves. In the first wave, conducted in August 2008, 150 participants answered a set of questions on a salient topic (the 2008 Olympic Games) using their cell phones. Five different question types were randomly presented, and metadata about the answering process were collected. In the second wave, those who had participated in the mobile survey (first wave) were invited to a follow-up web-based survey on their subjective experiences with the mobile survey. The five question types presented in wave 1 were evaluated in terms of (1) scrolling effort, (2) ease of access and of selecting response options, (3) the degree of unintended answers, and (4) the perceived fluency of the mobile surveying process.

The results illustrate that the five question types used (single response list, multiple response list, closed response list, single-row text field, and picture question type) are (a) evaluated positively overall, (b) perceived differently with regard to scrolling effort, ease of access, unintended answers, and fluency, and (c) subject to discrepancies between respondents' subjective perceptions of a question type's usability and its objective usability properties (e.g., although the usability of the picture question type was evaluated very positively, its item non-response rates are relatively high).

From these findings, actionable recommendations on how to implement mobile surveys can be derived. Furthermore, avenues for future research focusing on the discrepancies between objective usability indicators and subjective experiences are sketched.

Access/Direct link

Conference homepage (abstract)/(full text)

Year of publication: 2009
Bibliographic type: Conferences, workshops, tutorials, presentations
Full text availability: Available on request