In the 2020 presidential election, pre-election polls substantially overestimated President Biden’s margin of victory. In the wake of this industry-wide polling error, a common hypothesis was that the pre-election polls were subject to non-ignorable partisan nonresponse bias—in other words, supporters of President Trump were less likely to respond even after controlling for demographics on which polls are commonly weighted.

After 2020, significant efforts were made throughout the industry to correct for partisan nonresponse bias. For example, organizations like the Pew Research Center have launched benchmarking surveys that produce population targets for party identification, allowing pollsters to weight on party. Some pollsters have also begun asking respondents who they voted for in the last election and weighting this measure to known election results.

However, this still leaves an outstanding question: what if nonresponse to political surveys is related not just to party or past vote, but also to vote preference in the upcoming election, even after controlling for measures of partisanship? If this were the case, then simply weighting on party and/or past vote would be inadequate to correct for nonresponse bias. Unfortunately, this type of bias is usually very difficult to measure or correct for, since the actual election results are not known at the time pre-election polls are conducted.

Probability panels like the SSRS Opinion Panel can help with this. On a probability panel, a typical panelist is likely to be surveyed multiple times per year. This means that, for panelists who were asked their vote preference at some point during an election cycle, it is possible to track the relationship between vote preference and the likelihood of responding to surveys.

In a recent presentation at the Midwest Association for Public Opinion Research (MAPOR) annual conference, we used the SSRS Opinion Panel to investigate whether vote preference in the 2024 presidential election was related to the likelihood of responding to surveys.

How We Did This Analysis

For most of this analysis, we focused on a subset of SSRS Opinion Panel members who were asked their 2024 vote preference at least once before mid-July 2024. We used mid-July as a convenient cutoff because that is when President Biden dropped out of the presidential race and was replaced by Vice President Harris.

For panelists who were asked their vote preference before this point, we defined their “baseline 2024 vote preference” as follows:

  • If they chose Biden at least once before he dropped out, and never chose Trump, we classified them as a likely Harris voter.
  • If they chose Trump at least once, and never chose Biden, we classified them as a likely Trump voter.
  • If they never chose either candidate when asked, or chose different candidates at different times, we classified them as undecided.
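The classification rules above can be sketched in code. This is a hypothetical illustration; the function name and category labels are ours, not the actual panel codebook:

```python
def classify_baseline(choices):
    """Classify a panelist's baseline 2024 vote preference from the list of
    candidate picks they gave when asked before mid-July 2024."""
    chose_biden = "Biden" in choices
    chose_trump = "Trump" in choices
    if chose_biden and not chose_trump:
        return "likely Harris voter"
    if chose_trump and not chose_biden:
        return "likely Trump voter"
    # Never chose either candidate, or chose different candidates
    # at different times
    return "undecided"
```

Note that a panelist who picked both candidates at different times falls through to "undecided," matching the third rule above.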

Using this “sub-panel” for which we had an observation of baseline 2024 preference, we ran regressions predicting response status to all surveys—on both political and non-political topics—that ran on the SSRS Opinion Panel between January 1 and Election Day 2024.

In addition to baseline 2024 preference as the key predictor, we included controls for common weighting demographics (age, gender, race, education, and region). We also ran versions of the regressions both with and without controls for party, to assess whether weighting on party would be sufficient to control for any relationship between vote preference and response behavior.
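As a rough sketch of what such a regression might look like: the article does not specify the exact model form, so this toy example assumes a linear probability model on simulated data, with likely Harris voters as the reference category. All variable names and values here are illustrative, not actual panel data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Simulated sub-panel: baseline preference (0 = Harris, 1 = Trump,
# 2 = undecided), plus a couple of stand-in weighting demographics.
pref = rng.integers(0, 3, n)
age = rng.integers(18, 80, n)
female = rng.integers(0, 2, n)
# Simulated response indicator, unrelated to preference by construction
responded = rng.random(n) < 0.6

# Linear probability model:
# responded ~ Trump + undecided + age + female
# (likely Harris voters are the omitted reference category)
X = np.column_stack([
    np.ones(n),                      # intercept
    pref == 1,                       # likely Trump voter
    pref == 2,                       # undecided
    (age - age.mean()) / age.std(),  # standardized age
    female,
]).astype(float)
beta, *_ = np.linalg.lstsq(X, responded.astype(float), rcond=None)
trump_coef = beta[1]  # the kind of coefficient plotted in Figure 1
```

Running the same model with and without an additional party dummy would correspond to the two specifications compared in Figure 1.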

What We Found

2024 vote preference was not strongly related to nonresponse in Opinion Panel surveys

Figure 1 shows trends throughout 2024 in the relationship between vote preference and survey response. Specifically, this chart shows the regression coefficient on the Trump category of baseline vote preference. If the coefficient is above 0, it means that Trump voters showed higher-than-average response rates to the survey after controlling for weighting variables; if it is below 0, it means that Trump voters showed lower-than-average response rates. We show the results both including and excluding a control for party.

Figure 1: Results of regressions predicting survey response rates on SSRS Opinion Panel

For most surveys that ran on the SSRS Opinion Panel in 2024—both political and non-political—we have no strong evidence that either likely Trump voters or likely Harris voters were more or less likely to respond. Specifically, there was not a statistically significant relationship between baseline vote preference and survey-level response rates within the Opinion Panel.

Notably, this is true whether or not we include a control for party in the regression. This suggests that political attitudes, in general, were not strongly predictive of response to Opinion Panel surveys in 2024 after controlling for weighting demographics.

Estimates of the presidential race from the Opinion Panel were mostly stable after July, and close to final results

Figure 2 shows weighted estimates of the national popular vote among registered voters from the SSRS Opinion Panel. These estimates come from questions about 2024 vote preference that we periodically included in the twice-monthly SSRS Opinion Panel Omnibus, beginning in mid-2024, for research and evaluation purposes. The final estimate, from an Omnibus wave that ended the day before the election, shows approximately a tied race, very close to the actual national popular vote.
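For concreteness, a weighted two-party share of the kind reported in Figure 2 can be computed along these lines (toy weights and choices, not actual Omnibus data):

```python
import numpy as np

# Illustrative survey weights and vote choices for five respondents
weights = np.array([1.2, 0.8, 1.0, 1.5, 0.5])
choice = np.array(["Trump", "Harris", "Trump", "Harris", "Harris"])

# Restrict to the two major candidates, then take the weighted Trump share
two_party = np.isin(choice, ["Trump", "Harris"])
trump_share = (weights[two_party & (choice == "Trump")].sum()
               / weights[two_party].sum())
```

Respondents who choose neither major candidate are dropped from the denominator, which is what a "two-party" share denotes.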

Figure 2: Estimated two-party Trump share among registered voters, SSRS Opinion Panel Omnibus

The most significant shifts in estimates from the Omnibus occurred in July around the Biden-Trump debate and President Biden’s subsequent departure from the race. Because we can observe in Figure 1 that there were not major changes in response patterns during this period, we can have increased confidence that the Omnibus estimates were picking up true changes in the preferences of the electorate.

What This Means

There are some limitations of using probability panels to assess survey nonresponse patterns. Most importantly, this type of analysis focuses on nonresponse among people who have already agreed to join a panel. In thinking about the impact of nonresponse on probability panel estimates, we also need to consider nonresponse at the recruitment phase—specifically, if people who agree to join a panel hold different views than people who do not, that might introduce some nonresponse bias that this analysis would not capture. Similarly, the subset of panelists for whom we had a measure of baseline vote preference could also be different from the remaining panelists.

Nevertheless, the ability to analyze, even in a partial way, the relationship between non-demographic variables and nonresponse patterns is an important benefit of probability panels. In a world where political attitudes are sometimes related to willingness to respond to surveys, this suggests that probability panels are likely to remain an important part of the election polling “toolbox” moving forward.

We plan to conduct follow-up research into whether a measure of baseline vote preference for panelists could be used to implement a weighting adjustment to correct for any relationship between vote preference and nonresponse, to a greater extent than is possible via weighting on demographics and party. While it does not seem that such an adjustment would have been needed in 2024, it may be worth considering in the future if nonresponse patterns change.