Web Survey Bibliography

Title: How the survey presentation affects the answers you get: Estimate bias effect in a comparative study
Author: Bartoli, B., Zucconi, S.
Year: 2014
Access date: 10.12.2014

Introduction: Online panels are subsets of the population who agree to complete questionnaires in exchange for economic incentives, charitable donations, or simply for the sake of giving their opinions. While on the one hand the presence of an incentive is a way to secure the participation of people who would not usually take part, on the other hand it can drive panelists to respond untruthfully: panelists tend to develop strategies for getting past the initial filtering questions as soon as they understand that these are used to determine eligibility. Can the introductory questions induce respondents to give untruthful information in order to escape a potential screenout? To verify whether this bias exists and to measure its size, we ran a survey using two different kinds of introduction for the same questionnaire. In one case we stated explicitly what the screenout criteria would be; in the other we kept them secret. This method allows us to measure this kind of bias and to obtain information about the profile of cheaters.

Method: In May 2014 we fielded a CAWI questionnaire aimed at estimating the consumption of organic food by the Italian population. In the first version of the survey we used an explicit introduction from which the panelist could infer the topic of the survey. We ran this first survey on our panel with a sample of 1,000 respondents, using quota sampling (gender, age and geographical area) and the version of the questionnaire with the explicit presentation. At the end of this first survey we noticed that the percentages of food purchasing managers and consumers of organic food were too high compared with the data in our possession from other sources. We therefore had a first confirmation that the presence, in the introduction of the questionnaire, of clues that could induce some panelists to lie in order to obtain an incentive had generated a bias.
From the literature we know that panelists are often registered on multiple panels, sometimes joining the same panel with a double or triple identity, and in general tend not to be truthful, again in order to obtain incentives. In this regard, the indications from methodological research on the identification of cheaters are of interest. We therefore repeated the survey to give more foundation to this first confirmation of the hypothesis. To understand the magnitude of the bias, we faced two alternatives for repeating the survey:

  • with different panellists;
  • with the same panellists.

We chose the latter because we thought it would allow us to gather more information, thanks to the comparability of a greater number of subsets. After the fieldwork of the second survey ended, we were able to split the first survey's respondents into three categories: those who completed the first survey but not the second, those who responded in the same way in both surveys, and those who responded differently. 1,001 panelists responded to the first survey, and 885 of them also responded to the second, leading to the following breakdown:

  • 116 panelists did not respond to the second survey
  • 592 panelists gave the same answers in both surveys
  • 293 panelists gave different answers in the second survey

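The three-way split described above is a simple partition of the wave-1 respondents by their presence in, and agreement with, wave 2. A minimal sketch, with invented panelist IDs and answers (the actual data are not public):

```python
# Hypothetical sketch: partition first-wave respondents into the three
# categories described in the abstract. All identifiers and answers here
# are illustrative assumptions, not the study's real data.
def split_respondents(wave1, wave2):
    """wave1 and wave2 map panelist ID -> answers; wave2 may lack some IDs."""
    dropped = {pid for pid in wave1 if pid not in wave2}
    same = {pid for pid in wave1 if pid in wave2 and wave1[pid] == wave2[pid]}
    changed = {pid for pid in wave1 if pid in wave2 and wave1[pid] != wave2[pid]}
    return dropped, same, changed

wave1 = {1: "organic", 2: "organic", 3: "non-organic"}
wave2 = {1: "organic", 3: "organic"}  # panelist 2 did not respond again

dropped, same, changed = split_respondents(wave1, wave2)
# dropped == {2}, same == {1}, changed == {3}
```

By construction the three sets are disjoint and together cover all wave-1 respondents, which matches the reported counts: 116 + 592 + 293 = 1,001.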
This way of proceeding (measuring the same variables again after some weeks, with a changed presentation that omitted any indication of the questionnaire's contents) enabled us to analyse three different subsets and study their differences; furthermore, it made it possible to measure the bias effect due to the use of the two different kinds of presentation.

Conclusions: The first conclusion we can draw is that using two different introductions changes the results, confirming the hypothesis that clues in the presentation about the questionnaire's topic can bias the answers. The second conclusion is that there are specific differences among the three categories produced by the experiment. With this experiment we have obtained practical indications about the extent of the bias and about how to correct for it.

Year of publication: 2014
Bibliographic type: Conferences, workshops, tutorials, presentations