
Web Survey Bibliography

Title: Changing the scoring procedure and the response format to get the most out of multiple-choice tests conducted online
Year: 2016
Access date: 29.04.2016
Abstract
Relevance & Research Question: Traditional multiple-choice (MC) tests are often administered with paper and pencil and typically measure a test-taker’s knowledge by counting the number of correctly solved questions. A major disadvantage of MC tests is their inability to capture partial knowledge and to adequately control for guessing and testwiseness. We investigated whether administering MC tests online can help to address these problems by changing either the scoring procedure or the response format.

Methods & Data: We conducted a series of experiments, each involving several hundred participants. To investigate whether the traditional number-right scoring procedure can be improved, we computed option weights for MC tests both empirically and by querying experts. We also used two alternative response formats that are better suited to online than to offline administration: (1) Discrete-option multiple-choice (DOMC) testing, which presents the answer alternatives sequentially rather than simultaneously; this response format presumably provides better control of testwiseness because it does not allow test-takers to compare all available answers. (2) Answer-until-correct (AUC) testing, a response format that allows test-takers to answer repeatedly until they identify the correct answer. By recording how many attempts a test-taker needs to solve an item, AUC makes it possible to capture partial knowledge and to give test-takers direct feedback on their performance.
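The abstract does not spell out a scoring rule for AUC items, but the idea of grading by number of attempts can be sketched as follows. This is a hypothetical linear partial-credit scheme, not the authors' actual formula: full credit for a first-attempt success, declining to zero when every option had to be tried.

```python
def auc_score(num_attempts: int, num_options: int) -> float:
    """Assign partial credit for an answer-until-correct item.

    Hypothetical linear scheme (illustrative only): a first-attempt
    success earns 1.0; needing all options earns 0.0; intermediate
    attempt counts earn proportionally scaled credit.
    """
    if num_options < 2:
        raise ValueError("an MC item needs at least two options")
    if not 1 <= num_attempts <= num_options:
        raise ValueError("attempts must lie between 1 and num_options")
    return (num_options - num_attempts) / (num_options - 1)

# For a four-option item:
# auc_score(1, 4) -> 1.0 (solved on the first try)
# auc_score(2, 4) -> 0.667 (partial knowledge)
# auc_score(4, 4) -> 0.0 (all options tried)
```

Summing such item scores would yield a test score that differentiates between test-takers who narrowly missed an answer and those who guessed blindly, which is the sense in which AUC captures partial knowledge.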

Results: We found that DOMC tests provided better control of testwiseness than traditional MC tests while achieving the same level of reliability and validity. We also found that both answer-until-correct testing and empirical option weighting improved the validity of a knowledge test. However, option weights were useful only if they were determined automatically on an empirical basis rather than by querying experts.

Added Value: We compare the advantages and disadvantages of the various methods and give recommendations on when the different response formats and scoring procedures are best used and when they are better avoided.
Bibliographic type: Conferences, workshops, tutorials, presentations

