The Status of Telephone Interviewing in the US
Is there a future?
Telephone interviewing in the U.S. enjoyed a golden age for several decades.
That age was characterized by efficient samples, modest to at times high participation, and simple, near-universal coverage. Unfortunately, those days are past. Today, efficiency is lower, participation has declined significantly, and coverage can be maintained only by dialing two frames (landline and cell phone). As a result, costs have increased significantly. In addition, many question the accuracy of telephone point estimates given the declines in participation and recent challenges to election polling.
But not all the news is bad news.
Recent metrics suggest that we may have weathered the storm, as the downward trend in key metrics like participation has flattened. Empirical research on the quality of telephone samples finds that despite declines in participation, the data quality of telephone samples remains very high and far superior to lower-cost alternatives such as non-probability online samples.
While it is difficult to predict the future given a recent past that has seen so much change, current metrics suggest that telephone interviewing, despite lower participation rates, remains a robust option for research requiring relatively accurate point estimates and/or assessments of sentiment and opinion.
This white paper will briefly detail telephone research’s past, present, and future, elaborating upon the synopsis provided in the above paragraphs.
In 2016, SSRS’s Chief Methodologist David Dutwin and Paul Lavrakas published results from a study based on data from nine of the largest survey organizations in the U.S.
They found that landline nonworking, refusal, and no-answer rates all increased significantly from 2008 to 2015. While it took about 20 landline sample records to attain a single completion in 2009, by 2015 that had increased to 46 sample records. On cell phones, however, the story was much less dire: refusal rates were flat and nonworking rates were declining, though no-answer rates rose significantly. Overall, a single cell phone interview took 22 sample records in 2009 and 25 in 2015.
The conclusion from this analysis mirrors what others have found as well, which is that participation and efficiency rates on landlines have been steadily declining, but these same metrics on cell phones have declined only marginally, and according to reports by Pew and others, are relatively unchanged since 2014. Given that nowadays it is not uncommon for 70 percent or more of interviews to be gathered by cell phones, and that the share of interviews attained by cell phones is consistently increasing on an annual basis, there is evidence to suggest that costs for telephone interviewing are flattening and that the near future of telephone interviewing will be no worse than the recent past in terms of cost and participation.
Of course, a somewhat common sentiment is that with participation in telephone surveys so much lower than in the past, the data quality, and accuracy specifically, of telephone research must have declined, perhaps to the point that it is not much better than what can be attained from a nonprobability online panel. Empirical evidence suggests otherwise: telephone research, even with lower response rates, has maintained the quality and accuracy it enjoyed when response rates were much higher. This evidence also suggests that telephone data are more similar to rigorous in-person survey data than to alternative modes such as online panels.
The most reputable of these studies are noted in footnote 4 of this white paper. But a few are particularly noteworthy.
Walker and colleagues analyzed the Advertising Research Foundation's study "Foundations of Quality 2," which pitted 17 online panels against a comparison telephone poll on a wide variety of metrics.
One of the most cited examples is smoking prevalence, where the telephone sample attained an 18% incidence (official government statistics put the smoking rate in the U.S. at the time at 17.5%), while the online samples all attained very different estimates, ranging from 19 to 33 percent. Dutwin and Buskirk (2017) attempted to “fix” online samples with a variety of adjustment techniques, including sample matching, propensity weighting, and standard calibration (raking), and found online samples to have about twice the bias and three times the variance of telephone samples, while the telephone samples had only marginally higher bias than in-person samples.
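Raking, the standard calibration technique mentioned above, iteratively scales respondent weights so that the weighted margins of each demographic variable match known population targets. A minimal sketch follows; the toy respondents and the 50/50 and 40/60 target margins are illustrative assumptions, not figures from any of the cited studies.

```python
import numpy as np

def rake(weights, variables, targets, iterations=50, tol=1e-8):
    """Iterative proportional fitting (raking): scale base weights until the
    weighted share of each category matches its population target."""
    w = weights.astype(float).copy()
    for _ in range(iterations):
        max_change = 0.0
        for var, target in zip(variables, targets):
            for level, share in target.items():
                mask = var == level
                current = w[mask].sum() / w.sum()  # weighted share now
                if current > 0:
                    factor = share / current       # adjustment for this cell
                    w[mask] *= factor
                    max_change = max(max_change, abs(factor - 1))
        if max_change < tol:   # all margins effectively matched
            break
    return w

# Toy sample of six respondents; the sample skews 2/3 male.
sex = np.array(["m", "m", "m", "m", "f", "f"])
age = np.array(["young", "old", "young", "old", "young", "old"])
base = np.ones(6)

# Hypothetical population margins: 50/50 sex, 40/60 age.
w = rake(base, [sex, age],
         [{"m": 0.5, "f": 0.5}, {"young": 0.4, "old": 0.6}])
```

After raking, the weighted male share is 0.5 and the weighted young share is 0.4, even though the raw sample matched neither margin. Production systems (e.g., survey packages in R or SAS) add trimming and convergence diagnostics on top of this core loop.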
Another perspective is to look at telephone data over time to see whether it has in fact degraded. Dutwin and Buskirk (2017, under peer review) analyzed every study conducted by CBS, ABC, and Pew from 1996 to 2015. They explored the degree to which each poll was biased across a range of interactive demographics (the percent of African Americans, for example, who have a graduate degree, or the percent of persons who never attained a high school diploma by age group). They found three periods of bias in unweighted telephone samples: bias was flat from 1997 to 2004; it increased from 2004 to 2007 and then flattened; and since 2011 it has declined annually. Present-day estimates of bias are back down to what they were a decade ago and are still falling. Weighted telephone bias is unchanged over time, and at a level similar to far more expensive (unweighted) in-person data. The authors find that the rise in bias from 2004 to 2007 is most likely due to coverage bias: survey researchers were not yet attaining a significant number of interviews by cell phone even as a significant number of Americans were already discarding their landline telephones. As researchers have come to conduct a sufficient number of interviews by cell phone, bias has declined significantly.
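The bias metric described here, the gap between a survey estimate and its census benchmark averaged across demographic cells, reduces to a simple calculation. The cell names and figures below are hypothetical stand-ins, not data from the study:

```python
# Hypothetical benchmark (e.g., census) vs. survey estimates for three
# interactive demographic cells, expressed as population shares.
benchmarks = {"black_grad_degree": 0.08,
              "age_18_24_no_hs":   0.13,
              "age_65_up_no_hs":   0.19}
estimates  = {"black_grad_degree": 0.06,
              "age_18_24_no_hs":   0.16,
              "age_65_up_no_hs":   0.17}

# Signed bias per cell, then mean absolute bias across all cells.
bias = {k: estimates[k] - benchmarks[k] for k in benchmarks}
mean_abs_bias = sum(abs(b) for b in bias.values()) / len(bias)
```

Tracking this mean absolute bias poll by poll, for both unweighted and weighted estimates, is what allows the temporal comparison the study reports.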
One last view is from political pollsters.
Concerned with the potential for greater bias in current-day election polling compared to years past, Nate Silver looked at how inaccurate gubernatorial, senatorial, and presidential polls have been over the past 20 years. He found that errors in political polling have been unchanged over that time. Indeed, despite the knee-jerk headlines just after the 2016 election, it turns out that on a national scale, polls predicted the popular vote with greater accuracy than the historical average. Overall, as others are similarly reporting, telephone surveys are as accurate as ever.
So: Where is telephone interviewing going? Certain telephone metrics have changed quickly in the past. For example, when TCPA legislation forced telephone survey organizations to hand-dial cell phone numbers, costs increased literally overnight. This makes predicting the future a somewhat perilous task. That said, the trends suggest that telephone interviewing is here to stay, at least a little while longer, and perhaps for the long term. What is clear is that while it is true that costs for telephone interviewing have increased, there is no less expensive substitute that can match the quality of telephone surveys in the United States today.
 The advent and universal spread of list-assisted RDD simple random samples in the 1970s greatly improved sample efficiency compared to sampling from all possible working banks of telephone numbers.
 Response rates ranged between 30 and 60 percent from the 1970s to the early 2000s.
 Before the spread of cell phones, over 98% of households had a landline telephone.
 See Chiang and Krosnick, 2009; Dutwin and Buskirk, 2017; Malhotra and Krosnick, 2007; Walker et al., 2010; Yeager et al., 2011.
 David Dutwin and Paul Lavrakas, “Trends in Telephone Outcomes, 2008–2015,” Survey Practice, 2016, 9(3).
 This is not to deny that work is required to improve political polling, particularly at the state level. For more information see the AAPOR report of the 2016 election here: http://www.aapor.org/Education-Resources/Reports/An-Evaluation-of-2016-Election-Polls-in-the-U-S.aspx
 And in the same vein, if the FCC were to provide an exception for telephone research in the future, costs could just as quickly be significantly reduced.
About the Author
David Dutwin, Ph.D.
SSRS EVP & Chief Methodologist
David Dutwin, Ph.D., is primarily responsible for sampling designs, project management, executive oversight, weighting and statistical estimation. He is an active member of the survey research community, having served in the American Association for Public Opinion Research as a member and a chair of special task forces, a member of the Standards, Communications, and Heritage Committees; teaching multiple short courses and webinars; as the Student Paper winner of 2002; and as the 2016 Conference Chair. He was elected to the AAPOR Executive Council in 2017 and serves as the 2017 Vice President/2018 President. David is a Senior Fellow with the Program for Opinion Research and Election Studies at the University of Pennsylvania.
He holds a master's degree in Communication from the University of Washington and a doctorate in Communication and Public Opinion from the Annenberg School for Communication at the University of Pennsylvania. David attained his bachelor's degree in Political Science and Communication from the University of Pittsburgh.
He has taught Research Methods, Rhetorical Theory, Media Effects and other courses as an Adjunct Professor at West Chester University and the University of Pennsylvania for over a decade. David is also a Research Scholar at the Institute for Jewish and Community Research. His publications are wide-ranging, including a 2008 book on media effects and parenting; methodology articles for Survey Practice, the MRA magazine Alert!, and other publications; and a range of client reports, most recently on Hispanic acceptance of LGBT, which he presented to a Congressional briefing in 2012.