
Web Survey Bibliography

Title The Low Response Score (LRS): A Metric to Locate, Predict, and Manage Hard-to-Survey Populations
Source Public Opinion Quarterly (POQ); 81, 1, pp. 144–156
Year 2016
Access date 24.08.2017
Abstract In 2012, the US Census Bureau posed a challenge under the
America COMPETES Act, an act designed to improve the competitiveness
of the United States by investing in innovation through research
and development. The Census Bureau contracted Kaggle.com to host
and manage a worldwide competition to develop the best statistical
model to predict 2010 Census mail return rates. The Census Bureau provided
competitors with a block group-level database consisting of housing,
demographic, and socioeconomic variables derived from the 2010
Census, five-year American Community Survey estimates, and 2010
Census operational data. The Census Bureau then challenged teams to
use these data (and other publicly available data) to construct the models.
One goal of the challenge was to leverage winning models as inputs
to a new model-based hard-to-count (HTC) score, a metric to stratify
and target geographic areas according to propensity to self-respond in
sample surveys and censuses. All contest winners employed data-mining
and machine-learning techniques to predict mail-return rates. This
made the models relatively hard to interpret (when compared with the
Census Bureau’s original HTC score) and impossible to directly translate
to a new HTC score. Nonetheless, the winning models contained
insights toward building a new model-based score using variables from
the database. This paper describes the original algorithm-based HTC
score, insights gained from the Census Return Rate Challenge, and the
model underlying a new HTC score.
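
The scoring idea summarized in the abstract — model each block group's propensity to self-respond from census and ACS-style variables, then rank areas by predicted non-response so follow-up effort can be targeted — can be sketched briefly. The sketch below is purely illustrative and is not the model from the paper: the predictor names, the synthetic data, and the choice of ordinary least squares are assumptions.

# Illustrative sketch of a model-based hard-to-count style score at the
# block group level. Predictor names, synthetic data, and the linear model
# are assumptions for illustration only, not the paper's actual model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_block_groups = 1000

# Synthetic block group-level predictors (stand-ins for housing,
# demographic, and socioeconomic variables from the challenge database).
pct_renter_occupied = rng.uniform(0, 100, n_block_groups)
pct_vacant_housing = rng.uniform(0, 40, n_block_groups)
pct_below_poverty = rng.uniform(0, 60, n_block_groups)
X = np.column_stack([pct_renter_occupied, pct_vacant_housing, pct_below_poverty])

# Synthetic observed mail return rates (percent), loosely decreasing in the predictors.
mail_return_rate = (
    85
    - 0.15 * pct_renter_occupied
    - 0.30 * pct_vacant_housing
    - 0.20 * pct_below_poverty
    + rng.normal(0, 3, n_block_groups)
).clip(0, 100)

# Fit an interpretable linear model of return rate on the predictors.
model = LinearRegression().fit(X, mail_return_rate)
predicted_return = model.predict(X).clip(0, 100)

# Score each area by predicted non-response (higher = harder to count),
# then rank block groups so outreach can be targeted to the top of the list.
low_response_score = 100 - predicted_return
hardest_to_count = np.argsort(low_response_score)[::-1][:10]
print("Ten hardest-to-count block groups (by index):", hardest_to_count)

In this toy version, stratification amounts to sorting block groups by the score; the paper's contribution is building such a score from an interpretable model rather than the harder-to-translate machine-learning models that won the challenge.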
Year of publication 2016
Bibliographic type Journal article

