AAPOR Task Force Report on Data Quality Metrics for Online Samples
SSRS Chief Methodologist Cameron McPhee chairs the AAPOR Task Force, and Mickey Jackson, SSRS Director of Data Science Innovation, also serves on the Task Force.
AAPOR is pleased to announce the release of the new task force report entitled “Data Quality Metrics for Online Samples: Considerations for Study Design and Analysis.”
This report examines the characteristics of online survey panels and provides approaches for evaluating panel methodologies, sample quality, and the inferential integrity of the resulting survey estimates.
For almost a century, the public opinion and survey research community has relied on the statistical procedures developed by Neyman (1934) to produce measurable inferences when only a small sample of the target population is observed, albeit under specific requirements. Chief among these requirements is access to a complete sampling frame from which units can be selected with known probabilities. In practice, however, fulfilling these requirements has become exceedingly difficult, and for many reasons, including cost and feasibility, researchers have increasingly relied on pragmatic sampling alternatives.
The goal of this report is to provide audiences who have a basic understanding of survey methodology with an overview of the various types of online survey sampling methodologies currently being employed by survey researchers and major survey firms. Specifically, the report provides an overview of the landscape of online survey data collection, focusing mainly on probability and nonprobability online panels. It discusses how alternative methodologies for initial recruitment, decisions around panel freshening, respondent attrition, and missing data may impact sampling and data quality. The report outlines some ways to assess the quality of online samples, including well-known measures such as cumulative response rates and cooperation rates, as well as newer metrics that can be applied to online samples to evaluate representativeness and inferential reliability.
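As one illustration of the cumulative measures mentioned above, the cumulative response rate for a probability-based online panel is conventionally computed as a product of stage-specific rates. The three-stage breakdown below follows the commonly used convention for panel surveys and is offered only as a sketch; the stage names are illustrative and not drawn from the report itself:

```latex
% Cumulative response rate (CUMRR) for a probability-based panel,
% expressed as the product of the rates at each recruitment stage:
\[
\mathrm{CUMRR} = \mathrm{RECR} \times \mathrm{PROR} \times \mathrm{COMR}
\]
% where RECR is the recruitment rate (initial panel recruitment),
% PROR is the profile rate (completion of the panel enrollment survey),
% and COMR is the completion rate for the specific study.
```

Because each stage's rate is a fraction, the cumulative rate for online panels is typically far lower than any single-stage response rate, which is part of the motivation for the alternative quality metrics the report discusses.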
Specifically, the Task Force Report aims to:
- Develop a clear, concise, updated explanation of survey sample sources for online panels.
- Describe the representativeness and fitness for use of common sampling strategies that providers use to construct and replenish online panels.
- Assess coverage error of online panels, including systematic error related to recruitment methods, self-selection, and coverage of internet non-users.
- Propose alternative metrics of sample quality, beyond completion rates and cumulative response rates, that measure the underlying representativeness and utility of online samples.
- Discuss whether sample quality metrics developed for use with probability-based panels can be applied to samples from panels that do not recruit using probability-based methods.
- Discuss the application of AAPOR’s Code of Ethics’ reporting guidelines to studies using online panels and outline important issues regarding methodological transparency.