
Web Survey Bibliography

Title Ranking vs. Rating in an Online Environment
Year 2006
Access date 21.09.2006
Abstract The use of ranking scales is a controversial topic in the social sciences. Since the Rokeach Value Survey (RVS) (1963) there has been persistent discussion about the pros and cons of the rank-ordering approach. The main argument against ranking is its complicated and expensive implementation in self-administered surveys: even when the number of objects is small, respondents are cognitively overstrained by writing a rank number next to each object, which leads to weak data quality and high nonresponse. But when differentiation among objects is desired, or the objects are likely to cause floor and ceiling effects, ranking outperforms rating. For these reasons Rokeach insisted on the rank-ordering task for his 18 value items: he sent out gummed value labels to be pasted down in each respondent’s personal rank order. This method yields valid results but is very laborious and costly, so it never became popular. Fielding rankings on the Internet with standard HTML likewise leads to unsatisfactory results. But JavaScript, which is enabled for approximately 99% of users, opens a whole new dimension of data collection: graphical objects can be freely manipulated by drag and drop until they reflect the respondent’s personal rank order. The classical rank-order approach, however, does not allow the respondent to build ties, so a new method was implemented that permits a metric arrangement of the 18 instrumental values of the RVS. As a third condition, classical rating was implemented; since even this was hard to answer without graphical aid, a highlighting method was developed to keep respondents in the correct row. The presentation will show, from a methodological perspective, when to use ranking scales, and a catalogue of different operationalisations will be given. Then the results of an experimental study comparing the ranking, rating and “metric” ranking conditions will be presented. Besides objective criteria such as dropout and item nonresponse, soft indicators such as perceived suitability for the task, perceived burden and technical complexity will be contrasted.
Year of publication 2006
Bibliographic type Conferences, workshops, tutorials, presentations
Full text availability Non-existent
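
The drag-and-drop ranking described in the abstract can be illustrated with a short browser script. The sketch below is not the study's implementation (which predates HTML5 drag-and-drop); it is a minimal TypeScript example, assuming a hypothetical UL element with id "ranking" whose LI children are the objects to be ranked, and using the standard HTML5 drag events to let the respondent rearrange the list into a personal rank order.

// A minimal sketch (not the authors' implementation): a reorderable
// ranking list built on the standard HTML5 drag-and-drop events.
// Assumes a <ul id="ranking"> whose <li> children are the objects.
const list = document.getElementById("ranking") as HTMLUListElement;

let dragged: HTMLLIElement | null = null;

for (const item of Array.from(list.querySelectorAll("li"))) {
  item.draggable = true;

  // Remember which object the respondent picked up.
  item.addEventListener("dragstart", () => {
    dragged = item;
  });

  // preventDefault() marks this item as a valid drop target.
  item.addEventListener("dragover", (e: DragEvent) => {
    e.preventDefault();
  });

  item.addEventListener("drop", (e: DragEvent) => {
    e.preventDefault();
    if (!dragged || dragged === item) return;
    // Insert after the target when dragging downwards and before it
    // when dragging upwards, so the drop lands where the user aimed.
    const items = Array.from(list.children);
    if (items.indexOf(dragged) < items.indexOf(item)) {
      item.after(dragged);
    } else {
      item.before(dragged);
    }
  });
}

// Read out the respondent's personal rank order (1 = top of the list).
function currentRanking(): string[] {
  return Array.from(list.querySelectorAll("li")).map(
    (li) => li.textContent ?? ""
  );
}

A "metric" variant that permits ties, as described in the abstract, could analogously let respondents place the labels freely along a continuous axis and record each label's coordinate as a score, so that equal positions represent ties.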