Guest Post: Shy Trumpers
A guest post by Stephen Russell:
In my previous posts I described how the polls were wrong in 2016. What can be said about the accuracy of polls in 2020? This post addresses the specific issue of “shy Trumpers”.
As Biden’s reported polling lead grew in June, Trump enthusiasts suggested that Trump voters, facing a storm of criticism in the mainstream media, were reluctant to declare their choice to a pollster, and that this reluctance was distorting the results. This is sometimes called a social desirability bias. It would typically take the form of Trump voters being more likely to decline participation in the poll, or to equivocate about their real choice. Since the average response rate for polls in the US these days is only around 33%, clearly not all non-responders are Trump voters. But they might be over-represented.
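To see how much a differential response rate could matter, here is a minimal sketch (the numbers are purely illustrative, not taken from any poll cited in this post) of how a modest gap in willingness to respond shifts a poll’s reported margin:

```python
# Illustrative sketch: differential non-response bias.
# Assumed (hypothetical) inputs: the true Trump share of the electorate,
# and the rates at which each candidate's supporters agree to be polled.

def reported_margin(true_trump_share, resp_trump, resp_biden):
    """Reported Trump-minus-Biden margin when Trump supporters respond
    to polls at rate resp_trump and Biden supporters at rate resp_biden."""
    trump_respondents = true_trump_share * resp_trump
    biden_respondents = (1 - true_trump_share) * resp_biden
    total = trump_respondents + biden_respondents
    return (trump_respondents - biden_respondents) / total

# A genuinely tied race (50/50), but Trump voters respond at 30%
# while Biden voters respond at 36%:
print(f"{reported_margin(0.50, 0.30, 0.36):+.1%}")  # -> -9.1%
```

Under these assumed numbers, a six-point gap in response rates turns a tied race into a reported nine-point Biden lead, which is why pollsters weight their samples to correct for suspected non-response patterns.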
A Monmouth University poll of Pennsylvania voters published on July 15 included a question on this issue. It found that, remarkably, “most voters (57%) believe there are a number of so-called secret voters in their communities who support Trump but won’t tell anyone about it.” And despite that poll giving Biden a 13-point lead in the state, a narrow plurality of voters (46% to 45%) believed that Trump would win the state.
So the shy Trumper theory is not unreasonable. Anecdotal evidence certainly shows that such voters exist. And there are many historical examples of the effect worldwide – or at least cases where this effect is widely believed to account for a difference between polls and results. It is, of course, hard to prove either way.
It is not a new theory either. The research group Morning Consult found in December 2015 (the pre-primary period) that Trump polled six points better in an internet poll than in a live-caller poll. The post-election AAPOR analysis found evidence of this and cited it as a possible factor in the discrepancy between polls and the result – though the AAPOR also noted that some tests for this effect failed to find evidence.
Pollsters are not blind to the danger. Bradley Honan of Democratic pollster Honan Strategy Group addresses the issue directly here, and explains how pollsters can compensate for it. It is also addressed in a July 22 FiveThirtyEight podcast here – though the participants in that discussion largely dismiss the idea.
Philip Bump, of the Washington Post, wrote about the issue in mid-May, and observed that if the theory held water there would – as in the Morning Consult research – be a divergence between the results of live-calling polls and online polls. But, he claims, “There’s no significant difference between the two methodologies in general election polls conducted this year.”
Unfortunately for Bump, his own data (presented in a very revealing graph) partly undermines his claim. At the start of the year, the online polls were actually more favourable to Biden. Since then, both methodologies have seen Biden grow his lead. But it has grown more in live-caller polls, and at the time Bump wrote, those were giving Biden results about two points better than online polls. Growing shyness among Trump voters?
Trafalgar Group pollster Robert Cahaly rejects the idea that pollsters have learned their lesson, and says that he has noticed a significant upsurge in shy Trumpers in his polling. Trafalgar Group is a Republican polling organisation and, according to FiveThirtyEight, has a small pro-Republican bias (compared to actual election results). It gained big kudos for correctly picking Trump’s 2016 victories in Michigan and Pennsylvania (though by bigger margins than actually eventuated).
Cahaly insists that people are more hesitant to tell pollsters who they’re voting for today than they were in 2016 because we live in a society in which “people get penalized for their opinions.” Trafalgar has various mechanisms to tease out real preferences from the shy – although he is reluctant to give specifics. “I can tell you looking internally I think there is a significant amount of people for Trump” in the undecided or third-party category, he said.
As a result, Trafalgar’s state polls are producing the most Trump-friendly results of any public pollster – about five points better than the average. But even this is not enough to call the election for Trump. Trafalgar’s July 5 Pennsylvania poll, for example, gave Biden a 5-point lead, as did its July 27 Minnesota poll. Trafalgar reported a tie in Florida on July 2 and a Trump lead of 1.4% in Arizona on August 10 – the only poll to give Trump a lead there since June. But neither of those two states is a must-win for Biden. Cahaly said on June 29 that the general election was “too close to call”.
Cahaly is not the only one to think the social desirability effect is in play: Politico reports a number of other pollsters thinking on similar lines.
“I would say that most, if not all, of the concerns that we expressed still hold — some to a lesser degree,” said Courtney Kennedy, director of research at the Pew Research Center and lead author of the polling industry’s post-2016 autopsy. “But I think some of the fundamental, structural challenges that came to a head in 2016 are still in place in 2020.”
With all that said, the fact is that the final 2016 poll averages were only 1 to 1.5 points off in picking the national popular vote. That suggests that the pollsters did, for the most part, know what they were doing. And probably still do.
Also, it is hard to reconcile a large shy-Trump effect with results of congressional generic ballot polls and Senate polls. These have been giving results that are often worse for Republicans than the presidential ballot polls, despite not involving Trump himself.
Despite this, some Trump enthusiasts, reacting to polls showing Biden ahead, claim that many pollsters are in fact not trying to get an accurate read of voter sentiment and are manufacturing propaganda. Trump himself has insisted that he is not losing, and denounced the polls as “fake”.
But all the polls have him behind Biden – even those conducted by Republican pollsters, and those with a consistent record of over-predicting the Republican vote. The last published poll to put him ahead nationally was in February. And unless this is an entirely new phenomenon, that does not jibe with the fact that – overall – there has been no consistent historical polling bias in favour of either party.
As recently as 2018, gubernatorial and Senate polls in the three weeks before the vote significantly overestimated Republican support (House polls were a lot better); and in 2012 an average of polls in the week before the election had Barack Obama winning by 1.2 percentage points. He actually beat Mitt Romney by 3.9 points – a much bigger error than in 2016.
Some say that polls are growing less and less accurate and point to some high-profile failures. But a study by Will Jennings (University of Southampton) and Christopher Wlezien (University of Texas at Austin) of 26,000 polls from 338 elections in 45 countries since 1942 found that “…recent performance of polls has not been outside the ordinary, and that if anything polling errors are getting smaller not bigger.”
NOTE: The next post in this series will consider the “enthusiasm gap”.