Research Objectives

The SSRS Opinion Panel has grown considerably over the past few years, both in size and in the number of surveys that run on it every month. We carefully monitor the number of surveys received by panel members to make sure that no panelists are overburdened with survey requests. This is of particular concern for probability panels, like the SSRS Opinion Panel, which are assembled from random samples of adults. Unlike opt-in panels, probability panels should be less skewed towards people who are interested in taking surveys “just for the fun of it”. There may be a point at which receiving too many surveys would cause a panelist to leave the panel, making them unavailable for sampling into future surveys.

But this raises a question: exactly how many surveys is too many, or, just as important, too few? In particular, what is the relationship between how frequently a panelist is surveyed and their likelihood of remaining an active panelist?

Analysis Method

To answer this question, we looked at a cohort of about 5,000 SSRS Opinion Panel members who were recruited in the summer of 2021. These panelists have seen plenty of surveys: about 340 ran on the Opinion Panel between their recruitment and the time this analysis was conducted.

We used a longitudinal analysis method called “two-way fixed effects regression” to analyze the relationship between survey response behavior and the amount of time between survey requests; a minimal sketch of what such a model can look like follows the list below. Here, we focus on three important outcomes:

  • Survey-level nonresponse: whether a panelist fails to respond to an invitation to an individual survey.
  • Panel attrition: whether a panelist fails to respond to a sequence of several consecutive survey invitations (operationalized in this analysis as five or more).
  • Low satisfaction: whether a panelist indicates that their experience with the SSRS Opinion Panel is “fair” or “poor” when asked about this after completing a survey.
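
For readers who want a concrete picture of the modeling setup, the sketch below shows how a two-way fixed effects regression of this general kind could be specified in Python with the linearmodels package. It is a minimal illustration rather than the code used for this analysis: the input file, the column names (panelist_id, survey_id, nonresponse, days_since_last), and the linear probability specification are all assumptions made for the example.

```python
# Minimal sketch (not SSRS's production code): a two-way fixed effects
# linear probability model relating survey-level nonresponse to the gap
# (in days) since a panelist's previous invitation. Assumes a long-format
# table with one row per panelist-by-invitation and hypothetical column
# names; survey_id is assumed to be a sequential integer.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("panel_invitations.csv")          # hypothetical input file
df = df.set_index(["panelist_id", "survey_id"])    # entity (panelist) x time (survey)

# EntityEffects and TimeEffects absorb the panelist and survey fixed effects
model = PanelOLS.from_formula(
    "nonresponse ~ days_since_last + EntityEffects + TimeEffects",
    data=df,
)
result = model.fit(cov_type="clustered", cluster_entity=True)
print(result.summary)
```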

Principal Findings

In this section, we present graphs derived from the two-way fixed effects regressions, showing how the predicted probability of each outcome changes as the number of days between survey invitations increases for an “average” panelist. It is important to note that all of these graphs have been rescaled into relative terms: each curve is scaled so that its maximum value is always 100%.
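
As a concrete illustration of that rescaling, the short sketch below (continuing from the model sketch above, and using a hypothetical baseline rate rather than a figure from this report) computes predicted margins over a grid of gap lengths and then scales the curve so that its largest value plots as 100%.

```python
# Minimal sketch of the rescaling step. In a linear probability model the
# predicted margin at a given gap is a constant baseline (the averaged
# fixed-effect component, hypothetical here) plus the estimated slope times
# the gap; the curve is then rescaled so its maximum equals 100%.
import numpy as np

gap_days = np.arange(7, 91, 7)              # roughly one to thirteen weeks between invitations
beta = result.params["days_since_last"]     # slope from the model sketched above
baseline = 0.25                             # hypothetical average nonresponse rate

margins = baseline + beta * gap_days        # predicted nonresponse rate at each gap length
relative = 100 * margins / margins.max()    # rescale so the maximum plots as 100%

for gap, value in zip(gap_days, relative):
    print(f"{gap:>2} days since last invitation -> {value:5.1f}% (relative)")
```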

Survey-level nonresponse

Figure 1 shows how, for an average panelist, the predicted nonresponse rate to an individual survey changes as the number of days since the last survey increases. While this plot is upward sloping, and the relationship is statistically significant, it is also very shallow. This means that panelists are more likely to skip their next survey the longer it has been since their last survey, but not by very much.

Figure 1: Predictive margins for relative survey-level nonresponse rate

Panel Attrition

Figure 2 shows the predicted probability of attrition from the Panel at a given survey invitation: specifically, the probability that the panelist begins a sequence of 5 or more consecutive nonresponses at that invitation.
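
To make this outcome definition concrete, the sketch below shows one way such an attrition flag could be derived from an invitation-level nonresponse indicator. The column names and the helper function are hypothetical; this illustrates the definition rather than the exact coding used in the analysis.

```python
# Minimal sketch: flag the invitation at which a panelist begins a run of
# five or more consecutive nonresponses, within one panelist's history
# (rows assumed sorted by invitation date). Column names are hypothetical.
import pandas as pd

def flag_attrition_start(nonresponse: pd.Series, run_length: int = 5) -> pd.Series:
    """Return a 0/1 series marking the first invitation of each run of
    `run_length` or more consecutive nonresponses."""
    run_id = (nonresponse != nonresponse.shift()).cumsum()    # label consecutive runs
    run_size = nonresponse.groupby(run_id).transform("size")  # length of each run
    long_miss_run = (nonresponse == 1) & (run_size >= run_length)
    first_of_run = run_id != run_id.shift()                   # True at the start of each run
    return (long_miss_run & first_of_run).astype(int)

# Usage on a long-format invitation table `df` (hypothetical columns):
# df = df.sort_values(["panelist_id", "invite_date"])
# df["attrition_start"] = (
#     df.groupby("panelist_id")["nonresponse"].transform(flag_attrition_start)
# )
```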

This plot is again upward-sloping, with a statistically significant relationship, and it is steeper than the plot for survey-level nonresponse. In other words, when nonresponse does occur, it is more likely to represent the beginning of an attrition episode (a sequence of multiple nonresponses) the longer the gap between invitations. Note that this relationship appears both when looking at the number of days since the last survey invitation (the x axis) and the number of days until the next survey invitation (the color of the line).

Figure 2: Predictive margins for Panel attrition

Low Satisfaction

Finally, Figure 3 shows the predicted relationship between the gap between survey invitations and the probability that a panelist will report relatively low satisfaction with their experience on the Panel. Unsurprisingly at this point, the relationship is mostly positive: panelists with less frequent survey invitations are generally less satisfied. Interestingly, there is some indication that the relationship may be U-shaped; at more than about one survey per month, the probability of low satisfaction starts to tick back up slightly. However, we were not able to confirm statistical significance for this pattern, so it is something to monitor as we compile more data about panelist satisfaction.

Figure 3: Predictive margins for low panelist satisfaction

Conclusions

Generally, this analysis suggests that for a typical probability panel member, “too few surveys” is a bigger risk than “too many surveys”. When a panelist is invited into a survey, there is generally a higher risk of a negative outcome—nonresponse to that individual survey, the beginning of an attrition episode, or low panelist satisfaction—the longer it has been since their last survey. Of course, there may still be a point at which survey invitations could become too frequent; but if that point exists, it appears to be well outside the range of survey frequencies that a typical SSRS Opinion Panel member experiences.

The takeaway for those who manage probability panels is that it is important to maintain a survey volume high enough to keep panelists engaged and interested. This is one benefit of multi-purpose panels like the SSRS Opinion Panel: because they are open to a wide range of studies, it is easier to sustain a consistent volume of surveys over time, which in turn makes it more likely that panelists will still be available when they are needed.

Of course, there may be times when a more limited-purpose proprietary panel is a better fit for your research needs. If that is the case, SSRS can provide data-driven guidance on the optimal volume of surveys needed to make the investment in a panel worthwhile.