The Washington Post | Democracy Dies in Darkness

Which 2020 election polls were most — and least — accurate?

Some were on the nose. Overall, though, they did worse than in 2016.

Analysis by Costas Panagopoulos and Kyle Endres
November 25, 2020 at 7:45 a.m. EST
People wait in line to vote at a polling place on Election Day in Las Vegas. (John Locher/AP)

Each campaign season, pollsters conduct hundreds of pre-election surveys, feeding the apparently endless public and news media appetite for agonizing over the poll results. When the polls don’t accurately forecast the final election results, many are disillusioned or even angry. That was especially true in 2016, when most national polls projected that Hillary Clinton would win the presidency.

So how did pollsters do in 2020?

After 2016, pollsters worked to fix problems

After the 2016 election, we worked with political scientist Aaron Weinschenk to release analyses showing that 2016’s final national pre-election polls were actually more accurate than those in 2012. They closely forecast the popular vote, even though Donald Trump won the electoral college. We found a slight pro-Democratic bias that was mostly not statistically significant.

That suggests the 2016 national pre-election polls were, overall, generally accurate and unbiased. That year’s state-level polls likewise underestimated Republican support, but those biases, too, were generally statistically insignificant. The larger problem, at least for those who wanted to know the outcome in advance, was that key battleground states had fewer quality statewide polls than in previous years.

Nevertheless, the discrepancy between poll projections and the eventual outcome pushed many pollsters to reconsider their methods. Survey researchers scrutinized the 2016 polls and considered an array of factors that potentially contributed to underestimating President Trump’s support. These included a failure to adjust weighting procedures for elevated survey participation among college graduates, who disproportionately backed Clinton; possible “shy” Trump voters; voters who decided which candidate to support late in the campaign; and disproportionate increases in turnout among Republicans. As a result, many polling firms changed their weighting procedures.

Many hoped these changes would improve accuracy in the 2020 presidential election. Certainly, pollsters accurately took Democratic primary voters’ temperatures; most primary election polls correctly predicted the winner. But that didn’t translate into improved accuracy in the 2020 general election.


How we did our research

We analyzed the accuracy and bias of 14 polls conducted with national samples and released between Oct. 27 and Nov. 3, Election Day. We limited our sample to the final poll each firm released during the last week before Election Day, among the polls featured by RealClearPolitics. To estimate accuracy and bias, we used a measure developed by Elizabeth Martin, Michael Traugott and Courtney Kennedy, calculated as the natural logarithm of the odds ratio between the outcome in a poll and the popular vote. We used the current standings, which have Biden at 51 percent and Trump at 47.2 percent, but accuracy scores could change slightly once states certify final vote counts. Positive accuracy scores indicate a pro-Republican bias; negative scores indicate a pro-Democratic bias.
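For readers curious about the mechanics, the Martin-Traugott-Kennedy measure can be sketched in a few lines of Python. The poll numbers below are hypothetical, chosen only to illustrate the calculation against the current popular-vote standings cited above:

```python
import math

def accuracy_score(poll_rep, poll_dem, vote_rep, vote_dem):
    """Martin-Traugott-Kennedy accuracy measure: the natural log of the
    ratio between the poll's Republican/Democratic odds and the actual
    vote's odds. Positive values indicate a pro-Republican bias;
    negative values indicate a pro-Democratic bias."""
    poll_odds = poll_rep / poll_dem
    vote_odds = vote_rep / vote_dem
    return math.log(poll_odds / vote_odds)

# Hypothetical poll with Biden ahead by 4 points (50 to 46), scored
# against the current standings (Biden 51 percent, Trump 47.2 percent).
score = accuracy_score(poll_rep=46, poll_dem=50, vote_rep=47.2, vote_dem=51)
```

A score near zero indicates an accurate poll; the sign shows the direction of any bias, which is how the rankings in the figure below were produced.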

The figure below shows our accuracy rankings. As you can see, two of the 14 polls were highly accurate. Both the Investor’s Business Daily/TIPP and The Hill/HarrisX polls had Joe Biden ahead by 4 percentage points — and Biden is currently 3.8 percentage points ahead of Trump in the national popular vote.

Almost all of the remaining polls, except the Rasmussen poll released Nov. 1, overestimated support for Biden. Taken as a group, the average bias in the 2020 polls is -0.085, which is not statistically significant. However, five polls showed a statistically significant pro-Democratic bias: Economist/YouGov, CNBC/Change Research, NBC News/Wall Street Journal, USC Dornsife, and Quinnipiac.

We also compared accuracy over time using available accuracy scores for election cycles since 1996. The figure below shows the mean accuracy score for the final, national pre-election polls in the 2020 presidential election in historical context. As you can see, this cycle’s polls were, as a group, among the least accurate since 1996.

What went wrong?

Pollsters and academics are already trying to figure out what went wrong. Some observers again suggest that “shy” Trump voters failed to give honest answers. Others argue that pollsters failed to account for late deciders, who disproportionately voted for Trump.


Other possibilities are more technical, including differential nonresponse between Trump and Biden voters and challenges with likely voter models. Correctly predicting which voters will actually cast ballots has perhaps grown more complicated as both parties have doubled down on mobilizing their bases in recent elections. Election polling is further complicated by the reality that both voters’ candidate intentions and their decisions on whether to vote at all can change. Sometimes voters switch at the last minute upon learning they disagree with a candidate on a wedge issue such as fracking, which Trump vigorously touted during visits to battleground states in the closing days of the campaign.

A handful of national polls navigated this complicated terrain with great success. Overall, however, 2020’s presidential pre-election polls were not quite triumphant.


Costas Panagopoulos is professor of political science and chair in the Department of Political Science at Northeastern University.

Kyle Endres is assistant professor of political science at the University of Northern Iowa.