Choosing the Right Methodology for Your Survey Research Project

When it comes to determining research methodology, researchers these days have many options.

These choices include multiple telephone frames and subframes, address-based sampling, in-person interviewing, probability Internet panels, nonprobability Internet panels, respondent-driven sampling, and combinations of any and all of these approaches under the catchall term “hybrid” surveys.  Each has strengths and weaknesses relative to the others.  As such, the term “fit for purpose,” for decades a catchphrase of survey research textbooks and the like, is now the pivotal description of the current reality of survey research.

Researchers must choose, and choose wisely.

But far from a negative, the breadth of options available to survey researchers today is a boon, providing flexibility of cost, quality, and timing. So how do researchers choose?

This white paper is meant as a guide, providing general pros and cons for each approach.  We hope it will assist survey researchers in deciding which approach to take for a given research project.

Telephone Survey Research has been the workhorse of survey research for decades, and still is in many circles.  Many interpret the modern age as one in which the Internet is replacing telephone survey research.  While telephone research faces challenges of nonresponse and increasing cost, recent research highlighted in earlier white papers in this series finds that telephone surveys still attain data quality on par with that of the golden years of telephone data collection, from the 1960s through the early 2000s.

Nonresponse has taken a toll in the past decade, leading to significant drops in productivity and thus significant increases in cost.  Separately, the growth of the cell phone frame, coupled with the Telephone Consumer Protection Act's restrictions on automated dialing of cell phones, eliminated the efficiencies once realized with computer-assisted dialing.  Altogether, telephone costs are substantially higher today than they were a decade ago.  But quality, based on all the empirical evidence available, has been only modestly impacted.  As noted in prior white papers, the point estimates attained today are largely as accurate as they were at any point in the history of telephone surveys.  Telephone surveys thus remain a strong solution for those who require high data quality but do not have the budget for in-person surveys.  They can be executed in very short field periods if necessary, are capable of geographic and demographic targeting (though not with the same accuracy on cell phones as on landlines), and are well suited to highly complex studies in terms of skip patterns and other programming features.  In short, there is still much to like about telephone survey research, for those who have been able to absorb the increase in costs.

Address-Based Sampling (ABS) became viable as an alternative to telephone surveys in the late 2000s.

While there were initial concerns about the quality of the frame, particularly in rural areas, improvements have been substantial, and today the Delivery Sequence File (DSF), provided by the U.S. Postal Service to sampling vendors and appended and improved upon by those vendors, is arguably the highest-quality sampling frame in the U.S.  The challenge is how to utilize the frame.  There are many options, the simplest of which is to mail paper surveys.  A second option is to send letters inviting people to participate by web, and a third is to append telephone numbers and call potential respondents.  On their own, the web-only and telephone-only options have serious coverage problems, since approximately 13% of households do not use the Internet and about half of households do not appear in telephone listing databases.  As a result, researchers often utilize two or even three of these modes to gather data.

Despite the strength of the frame and the variety of response modalities, ABS studies have limitations.  The greatest is time.  Given the self-administered nature of the mode, it takes time to design, print, and mail surveys and receive the returns, or to mail letters inviting respondents to the web and send subsequent reminders.  To maximize response rates, it is typically better to conduct the survey in two phases, using the first phase to model response so that the second phase releases just enough sample to get the job done, as sketched below.  In short, it is very difficult to conduct an ABS survey in under two weeks, and only then with a low response rate; researchers should plan for a longer field period, particularly if response rate is a concern.  Another significant concern of ABS designs is nonresponse bias.  While telephone samples have for years tended to skew toward older and more White respondents, telephone samples with sufficient cell phone interviews tend to represent younger and non-White respondents fairly well.  Research by Link, Rapoport, and others finds that ABS respondents can skew even more by age and race/ethnicity than telephone samples ever did.  As a result, many ABS designs incorporate disproportionate sampling by geography and appended listed information in order to oversample young, non-White, and/or low-income respondents.  Finally, ABS surveys are not necessarily cheaper than telephone surveys; that depends on the methodological approach, response rate requirement, and survey incidence of the particular study in question.
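
To make the two-phase approach concrete, the sketch below walks through a minimal version of the phase-two release calculation.  All figures are hypothetical, and a real design would model response at the subgroup level rather than overall.

```python
# Minimal sketch of a two-phase ABS sample release (hypothetical figures).
# Phase 1 returns are used to estimate the response rate, which in turn
# determines how many additional addresses to release in phase 2.

target_completes = 1000   # total completed surveys needed (assumed)
phase1_released = 5000    # addresses mailed in phase 1 (assumed)
phase1_completes = 450    # completed returns from phase 1 (assumed)

est_response_rate = phase1_completes / phase1_released  # 9.0%

remaining = target_completes - phase1_completes  # completes still needed
buffer = 1.10  # small cushion against an optimistic phase 1 estimate
phase2_release = int(remaining / est_response_rate * buffer)

print(f"Estimated response rate: {est_response_rate:.1%}")
print(f"Phase 2 addresses to release: {phase2_release:,}")
```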

Non-probability samples have consistently been shown to hold more inherent bias and variance of bias than probabilistic approaches.

As an example, see the earlier white paper in this series on this topic.  The variance of bias is perhaps most concerning: research shows that while some non-probability surveys have exhibited very little bias, others have exhibited a great deal, and there is really no way to predict which you will get when initially fielding a given project.  Non-probability panels can be quite large, allowing for surveys of very large sample sizes and of low-incidence populations, as well as the ability to conduct long-term tracking studies.  And of course, cost is minimal compared to any type of probability survey.  While a number of approaches have been applied to reduce the bias of non-probability surveys, including sample matching, propensity weighting, and advanced “deep” calibration, none has displayed an ability to consistently reduce substantial bias across all studies.  Many survey researchers are at present attempting to develop highly customized solutions, but the effectiveness of these solutions varies greatly from survey to survey.  At the end of the day, non-probability surveys are fit for the purpose of studies that do not require highly accurate point estimates of populations, and of studies meant to conduct experimentation without a stringent requirement of external validity.  They can produce relatively accurate estimates in some topic domains and in some cases, though again the principal concern is the variance of bias one can attain with such data.
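
As one illustration, the sketch below shows the basic logic of propensity weighting, under strong simplifying assumptions: a probability-based reference sample with the same covariates is available, the covariates are already numerically encoded, and the column names are hypothetical.  Production implementations would also incorporate the reference sample's design weights, richer covariates, and weight trimming.

```python
# Sketch of propensity weighting for a nonprobability sample
# (see the assumptions stated above).
import pandas as pd
from sklearn.linear_model import LogisticRegression

COVARIATES = ["age", "education", "internet_use"]  # hypothetical columns

def propensity_weights(nonprob: pd.DataFrame, ref: pd.DataFrame) -> pd.Series:
    """Return inverse-propensity pseudo-weights for the nonprobability cases."""
    stacked = pd.concat([nonprob[COVARIATES], ref[COVARIATES]])
    z = [1] * len(nonprob) + [0] * len(ref)  # 1 = nonprobability sample member
    model = LogisticRegression().fit(stacked, z)
    p = model.predict_proba(nonprob[COVARIATES])[:, 1]
    # Cases overrepresented in the nonprobability sample (high p) receive
    # smaller weights; underrepresented cases receive larger ones.
    return pd.Series((1 - p) / p, index=nonprob.index)
```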

Probability panels are an attempt to bridge the gap between the low cost of non-probability panels and the higher baseline data quality of probabilistic approaches.

And while it is generally true that they accomplish this task, there are a number of caveats to consider.  First, probability panels tend to be small, with about 50,000 members or fewer.  This limits their ability to conduct studies in small states and limited geographies, of low-incidence populations, or of long-term trackers that require fresh respondents day after day or month after month.  Second, as is the case with any self-administered mode, probability panels face potential data quality issues such as straight-lining and speeding; this is compounded by the fact that most commercial panels expose respondents to multiple surveys every week.  Third, it should be understood that current response rates of most panels are an order of magnitude lower than those of custom telephone research.  This makes logical sense, since a panel member is typically twice removed from a custom telephone respondent: whereas a custom telephone respondent needs to respond only once, panelists must respond to an initial request to be empaneled, then actually go through an empanelment process, and then respond to the specific surveys to which they are invited.  In the end, most panels, properly calculated, attain response rates of well under five percent, as illustrated below.  Perhaps as a result, probability panels typically still retain higher data quality than non-probability panels, but as a recent Pew report has shown[1], this is not always the case.  On balance, probability panels do offer the ability to conduct fast-turnaround, probability-based surveys with modest data quality.  They can do so for national populations but are more limited when surveying small geographies or low-incidence populations.
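
To illustrate why properly calculated panel response rates land so low, the sketch below multiplies hypothetical stage-level rates into a cumulative rate; the specific figures are assumptions, not measurements of any particular panel.

```python
# Illustrative cumulative response rate for a probability panel.
# The overall rate is the product of the rate at each stage a
# panelist must clear (all figures hypothetical).

recruitment_rate = 0.09  # sampled households responding to recruitment
profile_rate = 0.60      # recruits completing the empanelment process
completion_rate = 0.70   # invited panelists completing a given survey

cumulative = recruitment_rate * profile_rate * completion_rate
print(f"Cumulative response rate: {cumulative:.1%}")  # ~3.8%
```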

For decades, Omnibus Surveys have been the choice of market researchers as a low-cost alternative to custom telephone surveys.  An omnibus is a shared-cost survey fielded on a regular basis by a survey research company that includes a standard screener and demographic battery.  Researchers purchase time to field their own questions in one or more waves of the omnibus.  While non-probability panels have taken over some of the omnibus's market share as a low-cost market research platform, more and more social science and public opinion researchers are discovering the omnibus as a valid alternative to the increasing cost of custom telephone research.  Omnibus methodologies often mirror those of custom telephone surveys.  The strength of an omnibus is the ability to field short surveys at low cost with quick turnaround times.  In addition, omnibus surveys can be leveraged for low-incidence studies, accumulating respondents in a true RDD framework if given enough time to reach a sufficient number of interviews, as illustrated below.  The omnibus does, of course, have limitations as well.  Longer surveys are not possible, and while surveys of small geographies or low-incidence populations are feasible, that is only true if one has the time to run multiple waves.
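
A quick, hypothetical calculation shows how a low-incidence study accumulates across waves; the wave size, incidence, and target below are assumptions for illustration only.

```python
# Accumulating a low-incidence population across omnibus waves.
import math

wave_size = 1000         # interviews per omnibus wave (assumed)
incidence = 0.05         # share of respondents who qualify (assumed)
target_completes = 400   # qualified interviews needed (assumed)

qualified_per_wave = wave_size * incidence  # ~50 qualified per wave
waves_needed = math.ceil(target_completes / qualified_per_wave)
print(f"Waves required: {waves_needed}")  # 8 waves
```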

Hybrid Surveys leverage multiple frames for strategic reasons such as cost savings, coverage, or data quality.

Options abound, but an increasingly common hybrid survey is one that uses both nonprobability and probability samples.  The goal of this design is to save costs via the nonprobability panel while retaining data quality from the probability sample.  SSRS is a leader in the field in executing these studies successfully[2].  Other options include studies that use address-based frames in addition to targeted listed telephone samples, or, for low-incidence studies, what might be called “kitchen sink” designs that utilize multiple probability and nonprobability samples in order to use any and all opportunities to reach low-incidence populations.
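
The sketch below illustrates one generic way such a blend can work, calibrating the nonprobability cases to the probability sample's weighted distribution on a single covariate before pooling the two samples.  This is a simplified illustration, not a description of SSRS's approach; the column names are hypothetical, and production designs calibrate on many dimensions at once (e.g., via raking) and control how much each sample contributes to the final estimate.

```python
# Sketch of blending a probability and a nonprobability sample by
# post-stratifying the nonprobability cases to the probability sample's
# weighted distribution on one covariate (assumptions in the text above).
import pandas as pd

def blend(prob: pd.DataFrame, nonprob: pd.DataFrame, var: str) -> pd.DataFrame:
    # Benchmark: weighted distribution of `var` in the probability sample
    # ("weight" is an assumed design-weight column; categories of `var`
    # are assumed to match across the two samples).
    bench = prob.groupby(var)["weight"].sum() / prob["weight"].sum()
    # Observed (unweighted) distribution in the nonprobability sample.
    observed = nonprob[var].value_counts(normalize=True)
    # Each nonprobability case starts at weight 1 and is adjusted by the
    # ratio of the benchmark share to the observed share for its category.
    nonprob = nonprob.assign(weight=nonprob[var].map(bench / observed))
    return pd.concat([prob, nonprob], ignore_index=True)
```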

Overall, there have never been so many choices for fielding survey research.  Each has its own limitations and opportunities.  While having so many choices increases complexity, it allows researchers to increasingly “custom fit” a methodological approach to a particular research endeavor.  SSRS senior research staff have deep experience in each of the methodologies outlined above and work collaboratively with clients to develop the most suitable research approach for their research needs.  We are happy to help you discover which approach is best for your research project.

About the Author

David Dutwin, Ph.D.

SSRS EVP & Chief Methodologist

David Dutwin, Ph.D., is primarily responsible for sampling designs, project management, executive oversight, weighting, and statistical estimation.  He is an active member of the survey research community, having served the American Association for Public Opinion Research (AAPOR) as a member and chair of special task forces and as a member of the Standards, Communications, and Heritage Committees; as an instructor of multiple short courses and webinars; as the 2002 Student Paper Award winner; and as the 2016 Conference Chair.  He was elected to the AAPOR Executive Council in 2017 and serves as the 2017 Vice President/2018 President.  David is a Senior Fellow with the Program for Opinion Research and Election Studies at the University of Pennsylvania.

He holds a master's degree in Communication from the University of Washington and a doctorate in Communication and Public Opinion from the Annenberg School for Communication at the University of Pennsylvania.  David attained his bachelor's degree in Political Science and Communication from the University of Pittsburgh.

He has taught Research Methods, Rhetorical Theory, Media Effects, and other courses as an Adjunct Professor at West Chester University and the University of Pennsylvania for over a decade.  David is also a Research Scholar at the Institute for Jewish and Community Research.  His publications are wide-ranging, including a 2008 book on media effects and parenting; methodology articles for Survey Practice, the MRA magazine Alert!, and other publications; and a range of client reports, most recently on Hispanic acceptance of LGBT individuals, which he presented at a Congressional briefing in 2012.
