Nonresponse and Bias Trends in Telephonic Probability Samples

October 2018

Participation in telephone surveys has been in decline almost since the popularization of telephonic research (see Curtin et al., 2000).

How low have response rates fallen, and to what effect? Can we still have confidence that results from telephone surveys are valid and reliable?

RESPONSE RATE TRENDS

Curtin and colleagues measured response over time in the Survey of Consumer Attitudes (2000; 2005), reporting a response rate of 72 percent in 1979 and 48 percent in 2003, with a nearly linear decline between those two points. Earlier, Steeh (1981) explored response in the National Election Studies and the Survey of Consumer Attitudes from the early 1950s to 1979, finding a significant increase in nonresponse over that period. Bradburn (1992) documented the same trend through the 1980s. Importantly, the decline in response has been endemic, spanning sectors and survey types (Brehm, 1994). Overall, the research makes clear that nonresponse increased consistently from the 1970s to 2000 (Singer, 2006).

More recently, the National Research Council extended many of these trends through 2011 in its report on nonresponse (Tourangeau and Plewes, 2013).

For example, the National Health Interview Survey sample adult module response rate declined from 80 percent to 66 percent between 1997 and 2011; the General Social Survey, from 83 percent to 70 percent between 1975 and 1993; the National Household Education Survey, from 81 percent to 53 percent between 1991 and 2007; the National Immunization Survey, from 87 percent to 64 percent between 1995 and 2010; and the Behavioral Risk Factor Surveillance System (BRFSS), from as high as 65 percent to the mid-30s between 1996 and 2010.

Notably, the studies cited above are high-effort, high-response-rate studies, many of which use an in-person interviewing design, and their trends extend only to about 2011.

What about more recent policy research surveys, or more typical public opinion polls? Pew reported on trends in its own studies, revealing a response rate decline from 35 percent in 1997 to 9 percent in 2012 (Pew Research Center, 2012).

The California Health Interview Survey adult module response rate declined from 38 percent in 2001 to 8 percent in 2017. And the Survey of Consumer Attitudes, noted above as attaining a 48 percent response rate in 2003, now has a response rate in the single digits.

CBS poll response rates declined from 19 percent in 1999 to 5 percent in 2015 (using AAPOR's RR1, a more conservative estimate than the traditional RR3 calculation), and ABC polls declined from 28 percent in 1997 to 9 percent in 2018.
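
For readers less familiar with these metrics, the standard AAPOR formulas are worth spelling out (these are the published Standard Definitions, not calculations from the studies above). With I complete interviews, P partials, R refusals and break-offs, NC non-contacts, O other nonresponse, UH and UO cases of unknown eligibility, and e the estimated share of unknown-eligibility cases that are actually eligible:

    RR1 = I / [ (I + P) + (R + NC + O) + (UH + UO) ]
    RR3 = I / [ (I + P) + (R + NC + O) + e(UH + UO) ]

Because e is at most 1, RR3's denominator can only be smaller, which is why RR1 is the more conservative (lower) of the two rates.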

TRENDS IN BIAS

Importantly, efforts to assess whether lower survey response has produced survey bias have almost exclusively found null effects.
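
One way to see why is the standard deterministic decomposition of nonresponse bias (as discussed, for example, in Groves, 2006). For a respondent mean Y_r, a nonrespondent mean Y_m, M nonrespondents, and N total sample cases:

    Bias(Y_r) = (M / N) × (Y_r − Y_m)

Bias is the product of the nonresponse rate and the respondent–nonrespondent difference on the measure in question; a high nonresponse rate alone produces no bias when respondents and nonrespondents do not differ.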

Keeter and colleagues (2000) took a high-response-rate survey (61 percent) and deleted hard-to-reach cases to generate a low-response-rate (36 percent) version of the same data. This allowed them to compare measures against themselves directly, with no concern about measurement error or other potential confounds.
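
The logic of that design can be illustrated with a minimal sketch using entirely hypothetical data; the variable names, effect size, and cutoff of three call attempts below are illustrative assumptions, not figures from the study:

    import numpy as np

    # One row per completed interview in a hypothetical high-effort survey.
    # "attempts" is the number of call attempts needed to complete the case;
    # "y" is a binary survey outcome. Hard-to-reach respondents are simulated
    # to answer slightly differently from early cooperators.
    rng = np.random.default_rng(7)
    n = 2000
    attempts = rng.integers(1, 15, size=n)
    y = rng.binomial(1, np.where(attempts > 3, 0.55, 0.50))

    # "High response rate" estimate: all completed interviews.
    full = y.mean()

    # "Low response rate" estimate: keep only early cooperators, mimicking a
    # lower-effort design applied to the very same data.
    low_rr = y[attempts <= 3].mean()

    print(f"full sample:       {full:.3f}")
    print(f"early cooperators: {low_rr:.3f}")
    print(f"difference (pp):   {100 * (full - low_rr):.1f}")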

They found almost no significant differences between the high and low response rate versions of their survey. A 2006 replication and expansion of this research (Keeter et al., 2006) found largely the same result, with only 7 of 84 tests of differences marginally statistically significant between a survey that attained a 50 percent response rate and one that achieved only 25 percent. Groves and Peytcheva (2008) took a meta-analytic approach, investigating nonresponse across 59 surveys with varied levels of effort and response.

Survey-level measures of effort and quality, such as the use of prenotifications, did not significantly influence bias.

This echoes other research by Groves (2006), which likewise found little impact of nonresponse on bias.

Research since the early 2000s is sparse.

Dutwin and Buskirk (2017) found that surveys with response rates under 10 percent showed little bias compared to the National Health Interview Survey, an in-person design that attains response rates in the upper 70 percent range.

In another paper, Dutwin and Buskirk (2018) explored survey bias trends from 1996 to 2015 and, by comparing lower response rate survey estimates to official U.S. Census benchmarks, found three general stages of telephonic survey bias:

  • Flat and low bias for surveys conducted between 1996 and 2004;
  • A period of bias growth from 2004 to 2007;
  • A period of bias decline since 2007.

Estimates of bias for 2016 are on par with where they stood before the second-stage increase began.

The authors argue that bias in the mid-2000s appears to have had more to do with design issues, namely the under-representation of respondents reached by cell phone, than with nonresponse.

In fact, studies with sufficient cell phone interviews during that period show no trend of increasing bias over time.
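
For concreteness, the benchmark comparison behind these trend estimates can be sketched as follows; every number here is made up for illustration, and real studies of this kind draw benchmarks from Census Bureau sources such as the ACS or CPS:

    # Hypothetical weighted survey estimates vs. official benchmarks (percent).
    estimates = {
        #  measure:            (survey, benchmark)
        "high school or less": (38.2, 40.1),
        "age 18-29":           (19.5, 21.3),
        "renter":              (30.8, 33.0),
        "Hispanic":            (15.1, 16.9),
    }

    # Per-measure bias is the survey estimate minus the benchmark; the summary
    # statistic is the mean absolute bias across all measures.
    biases = {k: survey - bench for k, (survey, bench) in estimates.items()}
    mean_abs_bias = sum(abs(b) for b in biases.values()) / len(biases)

    for measure, b in biases.items():
        print(f"{measure:20s} {b:+.1f} pts")
    print(f"mean absolute bias: {mean_abs_bias:.2f} pts")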

REFERENCES

Bradburn, Norman (1992). A response to the nonresponse problem. 1992 AAPOR Presidential Address. Public Opinion Quarterly 56 (3): 391–397.

Brehm, John (1994). Stubbing our toes for a foot in the door? Prior contact, incentives, and survey response. International Journal of Public Opinion Research 6 (1): 45–64.

Brick, J. Michael, and Tourangeau, Roger (2017). Responsive survey designs for reducing nonresponse bias. Journal of Official Statistics 33 (3): 735–752.

Curtin, Richard, Presser, Stanley, and Singer, Eleanor (2000). The effects of response rate changes on the index of consumer sentiment. Public Opinion Quarterly 64: 413–428.

Curtin, Richard, Presser, Stanley, and Singer, Eleanor (2005). Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly 69 (1): 87–98.

Dutwin, David, and Buskirk, Trent D. (2017). Apples to oranges or Gala versus Golden Delicious? Comparing data quality of nonprobability internet samples to low response rate probability samples. Public Opinion Quarterly 81 (S1): 213–239. https://doi.org/10.1093/poq/nfw061

Dutwin, David, and Buskirk, Trent D. (2018). Telephone sample surveys: Dearly beloved or nearly departed? Trends in survey errors in the age of declining response rates. Unpublished manuscript under peer review.

Groves, Robert M. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly 70 (5): 646–675.

Groves, Robert M., and Peytcheva, Emilia (2008). The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly 72 (2): 167–189.

Keeter, Scott, Miller, Carolyn, Kohut, Andrew, Groves, Robert M., and Presser, Stanley (2000). Consequences of reducing nonresponse in a large national telephone survey. Public Opinion Quarterly 64 (2): 125–148.

Keeter, Scott, Kennedy, Courtney, Dimock, Michael, Best, Jonathan, and Craighill, Peyton (2006). Gauging the impact of growing nonresponse on estimates from a national RDD telephone survey. Public Opinion Quarterly 70 (5): 759–779.

Pew Research Center (2012). Assessing the representativeness of public opinion surveys. Accessed at http://www.people-press.org/2012/05/15/assessing-the-representativeness-of-public-opinion-surveys/.

Pickett, Justin (2017). Reanalyzing Tourangeau (2017) and Brick and Tourangeau (2017): A research note on nonresponse bias. September 24, 2017. Available at SSRN: https://ssrn.com/abstract=3042254 or http://dx.doi.org/10.2139/ssrn.3042254.

Silver, Nate (2014). Is the polling industry in stasis or in crisis? FiveThirtyEight, August 25, 2014. Retrieved from http://fivethirtyeight.com/features/is-the-polling-industry-in-stasis-or-in-crisis/.

Singer, Eleanor (2006). Introduction: Nonresponse bias in household surveys. Public Opinion Quarterly 70 (5): 637–645.

Steeh, Charlotte G. (1981). Trends in nonresponse rates, 1952–1979. Public Opinion Quarterly 45: 40–57.

Tourangeau, Roger, and Plewes, Thomas J., eds. (2013). Nonresponse in Social Science Surveys: A Research Agenda. A report by the National Research Council of the National Academies. Washington, DC: National Academies Press.

ABOUT THE AUTHOR

David Dutwin, Ph.D.

SSRS EVP & Chief Methodologist

David Dutwin, Ph.D., is primarily responsible for sampling designs, project management, executive oversight, weighting, and statistical estimation. He is an active member of the survey research community: within the American Association for Public Opinion Research, he has served as a member and chair of special task forces and as a member of the Standards, Communications, and Heritage Committees; taught multiple short courses and webinars; won the 2002 Student Paper award; and chaired the 2016 Conference. He was elected to the AAPOR Executive Council in 2017 and serves as the 2017 Vice President and 2018 President. David is a Senior Fellow with the Program for Opinion Research and Election Studies at the University of Pennsylvania.

He holds a Master's in Communication from the University of Washington and a doctorate in Communication and Public Opinion from the Annenberg School for Communication at the University of Pennsylvania. David earned his Bachelor's in Political Science and Communication from the University of Pittsburgh.

He has taught Research Methods, Rhetorical Theory, Media Effects, and other courses as an Adjunct Professor at West Chester University and the University of Pennsylvania for over a decade. David is also a Research Scholar at the Institute for Jewish and Community Research. His publications are wide-ranging, including a 2008 book on media effects and parenting; methodology articles for Survey Practice, the MRA magazine Alert!, and other publications; and a range of client reports, most recently on Hispanic acceptance of LGBT people, which he presented at a Congressional briefing in 2012.
