The Polls Weren’t Great. But That’s Pretty Normal.

I’m not a pollster, although I’m often misidentified as one on TV. I wanted to get that out of the way because while, in practice, our lives probably get easier in a year where the polls are spot-on, FiveThirtyEight’s mission is really to take the polls as they are — for better or worse — and understand the sources of error and uncertainty behind them. This is true for both the probabilistic forecasts that we build and the reporting that we do. We’re also interested in how polls are perceived by the media and the public and how that sometimes conflicts with the way we think polls should be viewed.


From that vantage point, the story of how polls did in 2020 is complicated — more complicated in some ways than in 2016, which was also complicated. And since the election was called for Joe Biden on Saturday, I’ve had lots of somewhat conflicting thoughts pinging around in my head.

Here are a few of them:

  1. On the one hand, I don’t entirely understand the polls-were-wrong storyline. This year was definitely a little weird, given that the vote share margins were often fairly far off from the polls (including in some high-profile examples such as Wisconsin and Florida). But at the same time, a high percentage of states (likely 48 out of 50) were “called” correctly, as was the overall Electoral College and popular vote winner (Biden). And that’s usually how polls are judged: Did they identify the right winner?
  2. On the other hand, evaluating how close the polls came to the actual vote share margins is a better way to judge polls, so I’m glad that people are doing that.
  3. And yet, the margins by which the polls missed — underestimating President Trump by what will likely end up being 3 to 4 percentage points in national and swing state polls — are actually pretty normal by historical standards.
  4. However, there are nevertheless reasons to be concerned about the polls going forward, especially if it’s hard to get a truly representative sample of people on the phone.
  5. Finally, there’s a slightly meta point here: Voters and the media need to recalibrate their expectations around polls — not necessarily because anything’s changed, but because those expectations demanded an unrealistic level of precision — while simultaneously resisting the urge to “throw all the polls out.”

So, yeah, it’s complicated.

In this story, I’m going to tackle a little bit of everything from this list apart from No. 4. The question of why the polls were off — and what that means for the polls going forward — is a vital one, but it’s something that deserves a longer and fuller analysis, which we’ll do in the days and weeks ahead. I’m also going to set aside the question of how probabilistic forecasts like FiveThirtyEight’s did. If you want a quick answer, we think our forecasts did very well. (Biden was a reasonably heavy favorite in our forecast precisely because he could withstand a normal-sized or slightly larger polling error and still come out narrowly ahead — and that’s pretty much what happened.) Our forecasts should also do well by the more rigorous methods we use to evaluate them. But that’s a different question than how the polls themselves performed.


First, some good news for the polls: Assuming current results hold, the only states where presidential polling averages got the winners wrong will be Florida, North Carolina and the 2nd Congressional District in Maine. And because Biden’s leads in those states were narrow to begin with, they weren’t huge upsets. (There were also two mild upsets in the Senate, where Maine’s Susan Collins and North Carolina’s Thom Tillis both won reelection despite narrowly trailing in polls.)

[Related: Democrats Aren’t Sure Why They Didn’t Do Better In The House And Senate. It Might Not Be Clear For Weeks.]

I’m starting with this point because “calling” the outcome right is usually the way that polls are evaluated, even if it isn’t the best way to evaluate their actual performance. Polls were pilloried for “missing” Brexit, for instance, even though “Remain” had, at best, a tiny advantage in the polling average and “Leave” won by a narrow margin (52 percent to 48 percent).

So while I’m not a pollster, I’m going to stick up for them on this point. If you’re going to criticize polls for getting the outcome wrong even when the result was close (within the margin of error) — not our preference, but how the rest of the media usually covers polls — then, to be consistent, you ought to praise them when they identify the right winners, even if the margins are fairly far off.

The better way to evaluate polls is how close they come to the actual vote margins, and this year, polling margins will be fairly far off in several swing states as well as in national polls. As of this writing, Biden leads by 3.4 percentage points in the national popular vote. We expect him to improve on that some, as there are still a number of votes to be counted in blue states such as New York, California and Illinois, so he’ll probably finish with a lead closer to 4 points or possibly even a bit larger; 5 points wouldn’t shock me. In any event, given Biden’s final margin of 8.4 points in national polls, we should expect a miss of around 4 points, though as I said at the outset, that’s actually pretty normal by historical standards.
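To make that arithmetic explicit, here is a minimal sketch of the error calculation. The projected final result is an assumption on my part, since the count isn’t finished as of this writing, so treat the output as illustrative:

```python
# Polling error, as used throughout this piece: the polled margin minus the
# actual margin, where positive margins favor Biden.
final_polling_average = 8.4  # Biden's lead in the final national polling average
projected_result = 4.5       # assumed final popular-vote margin (votes still being counted)

polling_error = final_polling_average - projected_result
print(f"Implied national polling error: {polling_error:.1f} points")  # ≈ 4
```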

What about in the swing states? It’s actually a pretty similar story, at least among the competitive states. (There were some fairly bad polling errors in noncompetitive states, especially red states, which I’ll touch on more in a moment.) Among the 18 states and congressional districts that we considered competitive this cycle, it looks as though the final FiveThirtyEight polling averages will have underestimated Trump by around 3.7 percentage points, on average. As you can see in the table below, though, the error was much bigger in some places than in others:

There were big misses in some swing states

Joe Biden’s final FiveThirtyEight polling average in each battleground race compared to his vote share margin in each race

Biden’s lead or deficit

Race                Polling average   Actual result   Diff.
ME-2                      +3               -8          -11
Wisconsin                 +8               +1           -7
Iowa                      -1               -8           -7
Florida                   +3               -3           -6
Michigan                  +8               +3           -5
Ohio                      -1               -6*          -5
Texas                     -1               -6           -5
New Hampshire            +11               +7           -4
Maine (statewide)        +13               +9           -4
Pennsylvania              +5               +2*          -3
Arizona                   +3               +0           -3
North Carolina            +2               -1           -3
Virginia                 +12              +10           -2
Minnesota                 +9               +7           -2
Nevada                    +5               +3           -2
Georgia                   +1               +0           -1
Colorado                 +13              +14           +1
NE-2                      +4               +7           +3

* In Ohio and Pennsylvania, actual results reflect expected changes once all votes are counted.

Sources: Polls, ABC News, the Cook Political Report, state websites

Note that this analysis is preliminary. In the estimates above, I’m guesstimating, for instance, that Biden gains about 2 additional points in Ohio and 1 further point in Pennsylvania based on ballots that remain to be counted in those states. There may be a point or so to be had for Biden in some other states, too, given that provisional ballots and other late-counted ballots generally help Democrats. Still, it looks like we’ll end up with swing state polls having underestimated Trump by 3 or 4 points, overall.
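Here’s a quick sketch that reproduces the roughly 3.7-point average from the table above, using its rounded “Diff.” column (negative numbers mean the polls underestimated Trump):

```python
from statistics import mean

# Rounded poll-minus-result differences from the table above; negative means
# the polling average underestimated Trump.
diffs = {
    "ME-2": -11, "Wisconsin": -7, "Iowa": -7, "Florida": -6, "Michigan": -5,
    "Ohio": -5, "Texas": -5, "New Hampshire": -4, "Maine (statewide)": -4,
    "Pennsylvania": -3, "Arizona": -3, "North Carolina": -3, "Virginia": -2,
    "Minnesota": -2, "Nevada": -2, "Georgia": -1, "Colorado": 1, "NE-2": 3,
}

avg = mean(diffs.values())
print(f"Average miss across {len(diffs)} competitive races: {avg:+.1f} points")  # ≈ -3.7
```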

The errors were a little weird, though. Trump beat his polls by considerably more in Wisconsin than in neighboring Minnesota, for instance. Meanwhile, polls slightly overestimated Trump in Maine’s 1st Congressional District but considerably underestimated him in the more rural 2nd Congressional District. This isn’t entirely unexpected, though. State polling errors are often correlated with one another, but the correlations can get really complicated. Sometimes, there’s a “uniform swing” that affects nearly all states. Sometimes, there’s a group of states that share geographic and demographic similarities in which the polls are out of line. And sometimes, states behave idiosyncratically. The 2020 election looks as though it featured some combination of all three types of errors. Trump clearly outperformed his polls overall, but he did so more in some regions (say, the Midwest) than others (say, the Southwest). Even within these regions, though, there were quirks, like Biden winning Georgia but losing Florida.
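To make those three layers of error concrete, here’s a toy simulation (my illustration, not FiveThirtyEight’s actual model) in which each state’s polling error is the sum of a shared national shock, a regional shock and state-specific noise. The region assignments and standard deviations are assumptions chosen purely for the example:

```python
import random

# Hypothetical region assignments, just for illustration.
REGIONS = {"Wisconsin": "Midwest", "Minnesota": "Midwest",
           "Arizona": "Southwest", "Nevada": "Southwest"}

def simulate_polling_errors(seed=0):
    """Draw one simulated set of correlated state polling errors."""
    rng = random.Random(seed)
    national = rng.gauss(0, 2.0)  # "uniform swing" shared by every state
    regional = {region: rng.gauss(0, 1.5) for region in set(REGIONS.values())}
    return {state: round(national + regional[region] + rng.gauss(0, 1.5), 1)
            for state, region in REGIONS.items()}  # plus idiosyncratic noise

# States in the same region tend to miss together; across regions, less so.
print(simulate_polling_errors())
```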


The error may be somewhat larger in congressional races, too, as Republicans are running slightly ahead of Trump (Democrats’ lead in the U.S. House popular vote is 1 or 2 points smaller than Biden’s). And in deeply red states with supposedly competitive Senate races, such as Montana, Kansas and South Carolina, the polls look to have considerably overestimated how well Democrats would do. Then again, down-ballot races typically feature more polling error than presidential races.


Next question: Are polls becoming less accurate over time?

The answer is basically no, although it depends on what cycle you start measuring from and how your expectations around polls are set. Polls were quite accurate in the 2004 and 2008 presidential elections, for instance, and this was also a time when polling received increased media attention. The 2012 election was undoubtedly also good for polling’s reputation since polls identified the winner correctly in almost every state, although they did underestimate then-President Barack Obama’s margin of victory by a few points.

Polls have had a rougher go in the past, however. Take what the final FiveThirtyEight national polling average (calculated retroactively) would have said in each past election since 1972:

The polls aren’t getting more inaccurate

Polling error in the FiveThirtyEight national polling average compared to the national popular vote margin, 1972 to 2020

Year   Final average   Result   Error
1972       R+24          R+23     1
1976       D+1           D+2      1
1980       R+2           R+10     8
1984       R+18          R+18     0
1988       R+10          R+8      2
1992       D+7           D+6      1
1996       D+13          D+9      4
2000       R+4           D+1      5
2004       R+2           R+2      0
2008       D+7           D+7      0
2012       D+0           D+4      4
2016       D+4           D+2      2
2020       D+8           D+4*     4

*The 2020 result reflects Biden’s projected margin once all votes are counted.

FiveThirtyEight’s polling averages are calculated retroactively for years prior to 2020.

On average, the final national polls were off by 2.3 points. That’s pretty close, but there are also several examples of polling errors comparable to this year’s. There were 4-point misses in 1996 and 2012 (when, as mentioned, national polls underestimated Obama’s margin), a 5-point error in 2000 and a whopping 8-point miss in 1980, when Ronald Reagan beat Jimmy Carter by far more than polls predicted.

There were also a lot of fairly wild polling errors in the mid-20th century. There aren’t enough polls to calculate a FiveThirtyEight-style average prior to 1972, but we can look at what Gallup’s final poll said dating back to 1936. Oftentimes, it wasn’t much good:

The Gallup era was no golden era

Polling error in the final Gallup presidential poll compared to the national popular vote margin, 1936 to 1968

Year   Final Gallup poll   Result   Error
1936         D+12           D+24     12
1940         D+4            D+10      6
1944         D+3            D+7       4
1948         R+5            D+4       9
1952         R+2            R+11      9
1956         R+19           R+15      4
1960         D+2            D+0       2
1964         D+28           D+23      5
1968         R+1            R+1       0

Source: Gallup

The final Gallup poll missed by 5.6 percentage points on average between 1936 and 1968. That includes a somewhat infamous 9-point miss in 1948, when Thomas Dewey didn’t actually defeat Harry Truman. Gallup also lowballed FDR’s margin of victory by 12 points in 1936.
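As a sanity check, here is the same averaging applied to the rounded error columns of the two tables above. (The rounded values come out a tick higher than the 2.3- and 5.6-point figures in the text, which are presumably computed from unrounded margins.)

```python
from statistics import mean

fte_era = [1, 1, 8, 0, 2, 1, 4, 5, 0, 0, 4, 2, 4]  # 1972-2020, from the first table
gallup_era = [12, 6, 4, 9, 9, 4, 2, 5, 0]          # 1936-1968, from the Gallup table

print(f"1972-2020 average miss: {mean(fte_era):.1f} points")     # ≈ 2.5
print(f"1936-1968 average miss: {mean(gallup_era):.1f} points")  # ≈ 5.7
```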

Evaluating all this data, our model estimates that the final national polls will miss by about 3 percentage points in an average year. (That’s why we call a 3-point miss a “normal-sized polling error.”) In other words, a 4-point polling error is somewhat par for the course, although maybe you could call it a bogey.


So if all of this is fairly normal, why has there been so much consternation about the polls this year? To put my media critic hat on for a second, I see a few factors here. One is the “blue shift” that occurred in states such as Pennsylvania, where mail-in ballots were counted after in-person ballots. As predictable as the blue shift was, until it became clear that Biden would eventually win Pennsylvania, Wisconsin, Michigan and Georgia once those mail-in ballots were factored in, the polls looked much worse than they wound up being.

But narratives about what happened in an election tend to form early on election night, even though we knew going in that this election could take weeks to resolve and that there was a real possibility that neither Trump nor Biden would reach 270 electoral votes on election night. These narratives also usually aren’t formulated by the more data-driven, empirically minded reporters, who know to wait until more votes are counted.

Next, there’s the fact that the polls have missed in the same direction in the last two presidential cycles. The conventional wisdom tends to treat whatever happened in the most recent election as a new Iron Law of Politics. So if something happens two times in a row … oh boy, you’re never going to hear the end of it.

[Related: Politics Podcast: Why Polls Were Off In 2020, And Why They Weren’t That Bad]

But what’s important to stress here is that even though the polls have now missed in the same direction twice in a row, this isn’t necessarily indicative of how polls will behave going forward. In the long run, while polls can be biased, the bias isn’t predictable from cycle to cycle. Sometimes, they do miss in the same direction for a couple of cycles in a row. Other times, the bias flips around — polls underestimated Obama and Democrats in 2012 before underestimating Trump and Republicans in 2016. It’s probably worth mentioning, too, that polls had a very good and unbiased year in the 2018 midterms before their problems this year.

The reason there’s no long-running polling bias is that pollsters try to correct for their mistakes. That means there’s always the risk of undercorrecting (which apparently happened this time) or overcorrecting (see the 2017 U.K. general election, where pollsters did all sorts of dodgy things in an effort to not underestimate Conservatives … and wound up underestimating the Labour Party instead). This sets up an especially big challenge for pollsters moving forward, because some of the plausible explanations for the polling miss have to do with the COVID-19 pandemic or other factors specific to 2020, while others do not.

Imagine that, in an effort to correct for the bias it showed in 2020, a pollster adopts a new technique that shifts its margins toward Republicans by 3 points — but it turns out that the bias was caused by something 2020-specific. That pollster may wind up underestimating Democrats in 2022 and 2024. But if the pollster chalks the error up to COVID-19 and doesn’t change anything when the cause was something lasting, it may underestimate Republicans once again.

Finally, there’s the fact that the election comes at a time of exceptionally high anxiety for the country. Between the pandemic and the election — and in an era when the media and other forms of expertise are constantly being challenged in both constructive and unconstructive ways — there’s not a lot to feel certain about.

On that front, I’m afraid I have some bad news. If you want certainty about election outcomes, polls aren’t going to give you that — at least, not most of the time.

It’s not because the polls are bad. On the contrary, I’m amazed that polls are as good as they are. Given that response rates to polls are in the low single digits and that there are so many other things that can go wrong — from voters changing their minds after you poll them to pollsters guessing wrong about which voters will turn out — plus the unavoidable issue of sampling error, it’s astonishing that polls get within a couple of points of the actual result the large majority of the time. And yet, if a poll projects the outcome at 53-47 and it winds up being 51-49 (a 4-point miss), it will probably receive a lot of criticism — even, as we’ve seen this year, if it “calls” the winner right. It’s a fairly thankless task.

Nor, necessarily, is the reason to recalibrate your expectations that polls are getting worse. Empirically, after a somewhat worse than average but still fairly normal year in 2016, a very good year in 2018, and a 2020 polling error that will likely be a bit worse than 2016’s but is still well within the realm of historical precedent, there isn’t really a basis for that conclusion … yet. Theoretically, there may be reasons to expect polls to be worse going forward, but that depends on identifying the causes of the 2020 error first. And remember, everybody is just getting started on that process.

The main reason polls aren’t going to provide you with the certitude you might desire is that polls have always come with a degree of uncertainty. In a highly polarized era, most elections are going to be close — close enough to exceed the ability of polls to provide a definitive answer. Say the final polling averages miss by a bit more than 3 points on average, as our forecast assumes. That implies a margin of error closer to 7 or 8 points. And every presidential election so far this century has fallen within that range.1 So if you’re coming to the polls for strong probabilistic hints of what is going to happen, they can provide those — and the hints will usually lead you in roughly the right direction, as they did this year. But if you’re looking for certainty, you’ll have to look elsewhere.
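For the curious, here’s the back-of-the-envelope version of that conversion from a 3-point average miss to a 7- or 8-point margin of error. It assumes polling errors are roughly normally distributed, which is a simplification (the real error distribution may well have fatter tails), so treat it as a sketch:

```python
import math

# For a normal distribution with mean zero, the mean absolute error equals
# sigma * sqrt(2 / pi), so a ~3-point average miss implies sigma ≈ 3.8 and a
# 95 percent interval of roughly ±7.4 points.
mean_abs_error = 3.0
sigma = mean_abs_error / math.sqrt(2 / math.pi)  # ≈ 3.76
margin_of_error_95 = 1.96 * sigma                # ≈ 7.4

print(f"Implied 95% margin of error: ±{margin_of_error_95:.1f} points")
```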


Footnotes

  1. Though only barely so in the case of Obama’s win in 2008.

Nate Silver founded and was the editor in chief of FiveThirtyEight.
