Pollsters in crisis: the future after the unexpected triumph of Donald Trump

Argentine news website Infobae spoke to SSRS EVP David Dutwin about polling and the presidential election

This Infobae article has been translated from Spanish to English using Google Translate.

The vast majority of polls predicted a Hillary Clinton victory. Public opinion researchers have formed a committee to analyze the failures. Some of them previewed their views for Infobae.

By Gabriela Esquivada, December 11, 2016  |  gesquivada@infobae.com

Perhaps it was not the first time the science of measuring opinion got it wrong: it happened with the Brexit vote; it happened in 2015, when polls predicted that Israelis were tired of Benjamin Netanyahu; it happened in 2014, when the Republicans gained control of the Senate and the House of Representatives. But 2016 was the first time in the United States that the polls showed a presidential candidate losing who, caramba, went on to win.

A systematic, or correlated, error affected most of the polls. The American Association for Public Opinion Research (AAPOR) formed a committee to study the results, with a report due in May 2017. The president of AAPOR, Roger Tourangeau, told Infobae that "to talk about a crisis of the polls is an exaggeration."

David Dutwin, executive vice president and head of methodology at the pollster SSRS, agreed with him: "I have seen many articles on the crisis in the polls, and there is no doubt that polls today face more challenges than ever, but I think talk of a crisis is an unreasonable reaction."

Tourangeau, an expert in methodology and a vice president of the polling firm Westat, acknowledged that "most polls predicted a victory for Clinton." However, he noted, "national polls predicted a victory margin of 3% to 4% in the popular vote, which came quite close to the real figure: about 2% in Clinton's favor."

That error "is not, in historical terms, the worst the polls have ever produced," Dutwin added. "It is a bit higher, but not much higher. The main difference is that pollsters have made errors in the past but still predicted the right person." Four years ago, he recalled, the polls also missed by about 2%, "but they predicted Obama." And in context, this has been the history of political polling in the United States: "Since 1984, the polls have been successful, except for this error."

What was different this year? Why did roughly the same margin of error produce the wrong winner?

One presumption is that many undecided voters broke for the Republicans, mostly because that was their origin or their primary sympathy. The likely-voter profiles that polls rely on also omitted many Trump voters, people who are not normally interested in politics and had not voted in previous elections; many of them did not even answer the polls, convinced, as their candidate insisted, that the election would be rigged. There were also voters who would not admit that they gravitated toward the millionaire, ashamed of the negative image created by his supporters: xenophobic, sexist, racist, violent.

But the polls also worked with inaccurate data.

The polls in the individual states were much worse than those of the national population as a whole. Technological change also affected how polls are conducted: cell phones and the internet are transforming the way electoral predictions are made.

With the Electoral College, does the popular vote matter?

In election years, the measurement of public opinion is put to the test as at few other times. "The polls dominate the media, and their accuracy can be confirmed or refuted by the vote," AAPOR said in the statement announcing its analysis of the 2016 failures. "The polls clearly got it wrong, and Donald J. Trump is the winner of the vote cast in the Electoral College. Although Clinton may actually win the popular vote, her margin is much lower than the 3% to 4% advantage shown by the polls. And many of the state polls overestimated the level of support for Clinton."

In a country where the vote is not direct, where delegates from each state in the Electoral College define who becomes president, national polls matter less than state polls.

That is the first problem.

"Trump won by narrow margins in three states: Pennsylvania, Michigan and Wisconsin," noted Tourangeau, co-author of The Psychology of Survey Response. "It is unclear whether there was a last-minute vote shift in those states."

"State polls are very difficult," said Dutwin, also a member of AAPOR and of the Program on Opinion Research and Election Studies at the University of Pennsylvania. "I do not feel too bad about the results of today's national polls, but I cannot say the same about many state polls. It is clear that several state polls went significantly wrong."

What are the specific difficulties of state polls?

They do not have much money; they are not conducted as frequently (the last poll in Wisconsin started in October and ended at the beginning of November: the entire final week passed without a poll in Wisconsin); the sample size is usually smaller; and the likely-voter model is often less sophisticated. For these reasons they are not as accurate as national polls, a truth that has been known for decades, let's be clear: nothing about that changed this election. But now we see the final result: in the state polls there was an error of 5% to 6%, unfortunately.
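
The sample-size point can be made concrete with the standard formula for the margin of error of a simple random sample, MoE = z * sqrt(p(1-p)/n). The sketch below is illustrative only; the sample sizes are hypothetical, not figures from the article.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A large national poll vs. smaller state polls (sizes are hypothetical).
for n in (1500, 600, 400):
    print(f"n = {n:>4}: +/- {margin_of_error(n) * 100:.1f} points")
# n = 1500: +/- 2.5 points
# n =  600: +/- 4.0 points
# n =  400: +/- 4.9 points
```

Halving the sample size widens the error band by roughly 40%, which is part of why state polls start at a disadvantage even before likely-voter modeling enters the picture.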

A central difficulty of election polls is that, unlike general opinion polls, they are taken of a group that is still imaginary. Dutwin explained: "When I want to know what Americans think about Obamacare and I want to interview adults 18 and older, I know exactly who they are. But election polls survey a population that does not yet exist: you never really know who is going to vote. A large number of people who say they are going to vote end up not voting, and conversely, a large number of people who say they will not vote do vote on Election Day."
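
A minimal simulation of that problem, with entirely hypothetical numbers: if a likely-voter screen drops a group of actual voters who lean toward one candidate, the poll drifts away from the eventual result even though the sampling itself is sound.

```python
import random

random.seed(42)

# Hypothetical electorate (1 = supports candidate A, 0 = supports B).
# Habitual voters lean slightly toward A; irregular voters lean toward B.
habitual = [1 if random.random() < 0.52 else 0 for _ in range(80_000)]
irregular = [1 if random.random() < 0.44 else 0 for _ in range(20_000)]

# The likely-voter screen keeps only habitual voters, but on Election Day
# both groups actually turn out.
screened_sample = random.sample(habitual, 1_000)
actual_voters = habitual + irregular

poll = sum(screened_sample) / len(screened_sample)
result = sum(actual_voters) / len(actual_voters)
print(f"poll estimate for A: {poll:.1%}, actual result: {result:.1%}")
```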

In primary elections it is even more complicated, because far fewer people turn out than in a presidential election. "It could be 20% or 30% of registered voters," he added. "You try to figure out which 25% of the people surveyed will actually vote in the primaries."

Cell phone and internet polls: where they can fail

Landlines look more and more like a relic of the past: the growth of cell phone use is overwhelming. Ten years ago, 6% of the United States population relied only on a cell phone; by 2014, 43% were cell-only and another 17% were cell-mostly. Cliff Zukin, professor of public policy and political science at Rutgers University, wrote in his article "What's the Matter With Polling?": "In other words, a sample based solely on landlines would set aside three-fifths of the American public, almost three times more than it would have left out in 2008."

The problem is that Federal Communications Commission regulations prohibit automatic dialing of cell phones (where a system dials and the interviewer only intervenes if someone answers), so real humans must key in the cell numbers of the people in the sample: "To complete a survey of 1,000 people, it is not uncommon to have to dial more than 20,000 random numbers," Zukin added. "Manually dialing phones takes a lot of paid interviewer time, and pollsters also compensate respondents with up to USD 10 for their lost minutes."
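
Zukin's figures imply a completion rate of roughly 5%, one finished interview per twenty dials. A small sketch of that arithmetic (the 5% rate is an assumption chosen only to match his 20,000-dial figure):

```python
def dials_needed(completes: int, completion_rate: float) -> int:
    """Random-digit dials required for a target number of completed interviews."""
    return round(completes / completion_rate)

target_completes = 1_000
completion_rate = 0.05  # ~1 complete per 20 dials, consistent with Zukin's figure

print(dials_needed(target_completes, completion_rate))  # 20000
```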

Faced with the preeminence of cell phones, a cheaper but less accurate option emerged: surveys over the internet. How do you think this industry problem can evolve?

"There is no doubt that the polls face many new challenges," emphasized Tourangeau, who researches public opinion and terrorism at the University of Maryland. "On the other hand, pollsters have tried to adapt to a changing world, and we still have to see how good or bad this year really was. That evaluation is the work of the panel [the AAPOR evaluation committee, whose report is due in May 2017]. If you take everything into account, it seems they did well during the primaries. And they seem to have performed almost as well as ever in the general election, although they were wrong about the result in the Electoral College. Something similar seems to have happened in the last parliamentary elections in Britain: people were much more aware of the errors, which were no larger than usual, because there was an erroneous prediction of the outcome."

Dutwin analyzed the disadvantages and also the advantages of cell phone polling ("we have been dialing cell phones for more than a decade," he said) to exculpate it from the errors. "The more cell phones you dial, the more accurate the poll is, and you reach younger people than before. They may never use the phone to call their friends or their girlfriends or boyfriends, but when it rings, they answer."

Is the cost higher?

Yes, because of the manual dialing and also because of the geographic question: you can call a number whose area code belongs to New York, but the person who answers has moved to San Francisco and never changed the number.

What do you think of internet-based polls?

The strength of public opinion research, and of all good research, rests on one principle: randomization. I can choose 100 people in such a way that they represent the 300,000,000 inhabitants: it is something amazing, but such is the power of this science. What happens with online surveys is that there is no randomization: people decide to join a panel because it pays them something or entertains them. They were not chosen at random: they chose themselves. There are people who respond to these polls periodically.
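
A minimal simulation of that principle, with hypothetical numbers: a modest random sample lands near the true population figure, while a self-selected panel whose members sign up at different rates depending on their opinion does not.

```python
import random

random.seed(7)

# Hypothetical population of 300,000 people; 50% hold opinion X (1 = yes).
population = [1 if random.random() < 0.50 else 0 for _ in range(300_000)]

# Random sample: every person has the same chance of being chosen.
random_sample = random.sample(population, 1_000)

# Self-selected panel: suppose people who hold opinion X are twice as
# likely to sign up, so the panel no longer mirrors the population.
panel = [p for p in population
         if random.random() < (0.02 if p else 0.01)]

print(f"population:          {sum(population) / len(population):.1%}")
print(f"random sample:       {sum(random_sample) / len(random_sample):.1%}")
print(f"self-selected panel: {sum(panel) / len(panel):.1%}")  # near 66%
```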

They are cheap, but they fail?

They are of very, very variable quality. And I say this carefully: research has shown that sometimes an internet poll is highly accurate, and other times highly inaccurate. The problem is that in most cases you cannot know in advance which it will be. There is a wider systematic distortion than in cell phone polls: in 2012, the most successful poll was done on the internet, but in 2016 that was not the case. The challenge is how to work the data to make them more accurate, but for now they are not.

Telephone polls are accurate but expensive, and internet polls are variable but cheap: data collection lies at the heart of the errors of 2016. "There are people working on hybrid designs, advanced techniques that try to combine the data," added the SSRS methodology expert.
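
The article does not specify those techniques, but one standard way to "work the data" is post-stratification weighting; here is a minimal, hypothetical sketch in which each respondent is weighted so that the sample's demographic mix matches assumed census shares.

```python
# Post-stratification weighting, a minimal sketch with hypothetical numbers.
# Each respondent's weight = population share of their group / sample share.

sample = [  # (age_group, supports_A): young people are underrepresented
    ("18-34", 1), ("18-34", 1),
    ("35+", 0), ("35+", 0), ("35+", 1), ("35+", 0), ("35+", 0), ("35+", 0),
]
population_share = {"18-34": 0.30, "35+": 0.70}  # assumed census targets

n = len(sample)
sample_share = {g: sum(1 for a, _ in sample if a == g) / n
                for g in population_share}
weight = {g: population_share[g] / sample_share[g] for g in population_share}

raw = sum(s for _, s in sample) / n
weighted = (sum(weight[a] * s for a, s in sample)
            / sum(weight[a] for a, _ in sample))
print(f"raw estimate: {raw:.1%}, weighted estimate: {weighted:.1%}")
```

Here the weighted estimate moves toward the view of the underrepresented younger respondents, which is exactly what the correction is meant to do.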

Last problem: fewer and fewer people respond

In his article for The New York Times, Zukin pointed to another "disturbing trend": the percentage of people who agree to answer pollsters' questions has plummeted.

"When I started doing telephone surveys in New Jersey in the late 1970s, we considered an 80% response rate acceptable, and we still worried about whether the 20% we missed differed in attitudes and behaviors from the 80% we reached. Then came answering machines and other screening technologies. By 1997, the Pew [Research Center] response rate was 36%, and the decline has accelerated. By 2014 the response rate had fallen to 8%," he wrote.

From 80% to 8%: the fall seems so steep that one could hardly believe it does not affect poll results. But that would be a mistake, argued Dutwin, who will publish a research article on the subject in a few months: "It is important to note that if the people who hang up are the same as those who do not, that is, if they would have answered essentially the same as those who did respond, then the lower response rate really does not matter: you are still going to get reliable data. And research shows that this is the case: the samples are still representative."
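
A minimal simulation of Dutwin's point, with hypothetical numbers: a low response rate by itself leaves an estimate unbiased; bias appears only when responders and non-responders hold different views.

```python
import random

random.seed(1)

# Hypothetical population: 50% hold opinion X (1 = yes, 0 = no).
population = [1 if random.random() < 0.50 else 0 for _ in range(100_000)]

def poll(rate_if_yes: float, rate_if_no: float) -> float:
    """Estimate from responders, whose response rate may depend on opinion."""
    responders = [p for p in population
                  if random.random() < (rate_if_yes if p else rate_if_no)]
    return sum(responders) / len(responders)

# 8% response rate, identical for both groups: estimate stays near 50%.
print(f"uniform nonresponse:      {poll(0.08, 0.08):.1%}")
# Same average rate, but opinion-X holders answer more often: biased.
print(f"differential nonresponse: {poll(0.10, 0.06):.1%}")  # near 62%
```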

He cited another analysis, Nate Silver's "Is The Polling Industry In Stasis Or In Crisis?", which pushed back against the idea of a crisis: "He charted the accuracy of every poll in presidential, congressional and gubernatorial elections over the past 20 years. He plotted the errors on a graph, and they are all boring lines: the error did not increase. It is very attractive to say that the polling industry is in crisis, and there is no doubt that the response rate has fallen, but at most that makes telephone polls more expensive to conduct, and for many media outlets it is difficult to absorb that cost. But as far as data quality is concerned, there is no evidence of more systematic distortion in public opinion research," concluded the SSRS consultant.

How do you imagine the next round of presidential polling in the United States, in 2020?

I think there will be more similarities than differences with 2016. We will still have telephone research, the media will still take polls very seriously, and politicians will fall in love with the numbers when they favor them and hate them when they do not show what they want to see.

Is there any lesson from 2016 that can be applied?

The polling industry is much more introspective today than it has ever been, and that is a really good thing. The results of that learning will probably be modest improvements, but improvements after all. The polls will benefit from this failure in 2016. The likely-voter model can be improved: ask people how much attention they pay to the campaign, whether they voted two years ago, whether they voted four years ago, how much enthusiasm the election generates… And we have to learn a lot about how to work the polls state by state, since we learned the hard way that they matter a great deal. And maybe, instead of predicting the winner of the popular vote, we should predict the winner of the Electoral College.