Call for Papers: AAPOR Mini-Conference on Reassessing Today's Survey Methods

Oct 03 2014

The AAPOR Task Force on Reassessing Today's Survey Methods invites your participation in a special track of sessions, a mini-conference, at the AAPOR 70th Annual Conference, May 14-17, 2015 in Hollywood, FL. The mini-conference organizers are accepting proposals for full panels, which may consist of 4-5 papers, a roundtable discussion with several experts and a moderator, or other content that fits within the 90-minute timeframe of a single conference session.

Deadline for panel proposals: October 15, 2014 with notification of acceptance by October 31

Deadline for individual papers and presentations: November 12

The challenges faced by survey researchers today have produced a range of responses: new methods designed to address the challenges, improvements to old methods to meet them, and, in some cases, abandoning traditional surveys altogether in favor of non-survey approaches.

The Task Force is pleased to sponsor this mini-conference within the 2015 AAPOR conference to serve as a platform and forum for new research on the rapidly changing landscape of survey methodology. The mini-conference will operate as a separate track of sessions within the larger conference and will also feature one or two related short courses, as well as a role in a revised and expanded version of the successful ResearchHack held at the 2014 conference.

The goal of the mini-conference is to bring together experts in survey methodology, sampling statistics, quantitative analysis, and qualitative analysis to discuss:

  • current concepts of survey quality;
  • the challenges and opportunities in designing and using all forms of attitudinal and behavioral survey data in light of rapidly emerging technologies; and
  • the opportunities that may be provided by the increasing variety of methodologies available (sampling, data collection, estimation).

The primary focus of the Task Force is on household and individual-level survey research, rather than on social media analysis or various forms of behavioral data such as administrative records or so-called “big data,” though research that uses surveys in conjunction with social media or organic data may be relevant and will be considered.

The mini-conference seeks presentations that address the following key areas:

  • Should the traditional concepts of “survey quality” and “data quality” hold for modern surveys? How should we evaluate “quality” in today’s survey/data environment?
  • When the notion of “fit for purpose” is appropriate, how can it be applied more concretely in the design, collection, and evaluation of survey data?
  • How should “representativity” be defined for surveys? What methods are available for evaluating it?
  • How can the concept of Total Survey Error best be applied to the evaluation of surveys based on non-probability samples?
  • In what ways are probability and non-probability samples similar and different? What is the difference in data quality between a probability-based survey with a very low response rate and an opt-in panel survey? Is the challenge of non-coverage similar or different in probability and non-probability samples?
  • What are the strengths and weaknesses of different forms of non-probability sampling?
  • Which quality metrics and methodological details about surveys based on new methods should be reported in published research?
  • What challenges and opportunities exist to improve survey quality across the array of methodologies currently available?
  • Is there evidence that non-probability methods that deal with challenges at both the sampling and estimation stages produce superior quality data compared with those that use modeling only at the estimation stage?
  • How can the precision of samples based on non-probability methods be reliably estimated?
  • What variables (beyond standard demographics) are associated with the propensity to join online panels (e.g., altruism)? How can they be used in models to reduce sample bias?
  • What methods work best for improving inferences from opt-in panels and other non-probability samples?
  • The AAPOR code stipulates the elements of a survey that must be disclosed in the public release of survey results. What additional information, if any, is needed for a non-probability survey to be fully transparent?

If you would like your abstract considered for the mini-conference program track, be sure to check the appropriate box when submitting an abstract for a panel or a paper.
