
For much of this election cycle, Democrats complained the polls were biased against them. They said the polls were failing to represent enough minority voters and applying overly restrictive likely-voter screens. They claimed early-voting data was proving the polls wrong. They cited the fact that polls were biased against Democrats in 2012.

The Democrats’ complaints may have been more sophisticated-seeming than the “skewed polls” arguments made by Republicans in 2012. But in the end, they were just as wrong. The polls did have a strong bias this year — but it was toward Democrats and not against them.

Based on results as reported through early Wednesday morning — I’ll detail our method for calculating this in a moment — the average Senate poll conducted in the final three weeks of this year’s campaign overestimated the Democrat’s performance by 4 percentage points. The average gubernatorial poll was nearly as bad, overestimating the Democrat’s performance by 3.4 points.

[Chart: silver-feature-pollbias-1]

The problem with Democrats’ claims is that they were one-sided. I don’t mean that in the typical false equivalence way. I mean that they were ignoring some important empirical evidence.

This evidence suggests that polling bias has been largely unpredictable from election to election. Yes, the polling was biased against Democrats in 1998, 2006 and 2012. But it was biased against Republicans in 1994, 2002 and now 2014. It can be dangerous to apply the “lessons” from one election cycle to the next one.

Our estimates of polling bias are evaluated in the same way we’ve assessed them in the past. They rely on a simple average of all polls1 released in the final three weeks of the campaign. Bias is calculated as the difference between the polled margin and the actual result. (For instance, a poll that projects the Democrat to win by 7 points when she actually wins by just 3 has a 4-point Democratic bias.) We exclude races in which one of the top two finishers was an independent candidate.
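To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. It is not FiveThirtyEight’s actual code, and the polls and margins in the example are hypothetical; it simply applies the definition above, averaging the polled margins and subtracting the actual result.

```python
# A minimal sketch of the bias calculation described above (not FiveThirtyEight's
# actual code). Margins are expressed as Democrat minus Republican, in points;
# a positive return value indicates a Democratic bias.

def polling_bias(polled_margins, actual_margin):
    """Average the final-three-weeks polls, then subtract the actual margin."""
    average_polled = sum(polled_margins) / len(polled_margins)
    return average_polled - actual_margin

# Hypothetical example: polls showed the Democrat up 7, 6 and 5 points
# (average +6), but she won by only 3 -- a 3-point Democratic bias.
print(polling_bias([7, 6, 5], 3))  # 3.0
```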

We’ll publish 2014 data in full soon,2 but here are the preliminary results — first, for this year’s Senate polls:

[Table: silver-feature-pollbias-table-1]

As you can see, the polls were biased toward the Democratic candidate in almost all key races. In some cases, they were fortunate to have “called” the winner correctly: Some of the worst misses came in states like Kentucky and Arkansas, where the Republican won but by a considerably larger margin than the polls projected. There was also a near-disaster in Virginia. It looks like Democratic incumbent Mark Warner will pull out the race, but the polls had him up by 9 points; instead, the race came down to a photo finish.3

Meanwhile, the average bias in gubernatorial polls was 3.4 points in favor of Democrats.

[Table: silver-feature-pollbias-table-2]

In the governor’s races there were more missed “calls” — including in Illinois and Kansas, and especially in Maryland, where Republican Larry Hogan wound up winning by 9 percentage points despite trailing in every nonpartisan poll released all year.

This type of error is not unprecedented — instead it’s rather common. As I mentioned, a similar error occurred in 1994, 1998, 2002, 2006 and 2012. It’s been about as likely as not, historically. That the polls had relatively little bias in a number of recent election years — including 2004, 2008 and 2010 — may have lulled some analysts into a false sense of security about the polls.

Interestingly, this year’s polls were not especially inaccurate. Between gubernatorial and Senate races, the average poll missed the final result by about 5 percentage points — well in line with the recent average. The problem is that almost all of the misses were in the same direction. That reduces the benefit of aggregating or averaging different polls together. It’s crucially important for polling analysts to recognize that the error in polls is often correlated. It’s correlated both within states (literally every nonpartisan poll called the Maryland governor’s race wrong, for example) and among them (misses often come in the same direction in most or all close races across the country).
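A rough simulation helps show why that correlation matters. The sketch below is an illustration, not the FiveThirtyEight model, and the error sizes are assumptions: each simulated poll is the true margin plus a shared, cycle-wide error and its own independent noise. Averaging 10 polls shrinks the independent noise but does nothing about the shared error.

```python
# Illustrative simulation (not FiveThirtyEight's model) of correlated polling error.
# Each poll = true margin + shared cycle-wide error + independent noise.
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_polls = 10_000, 10
shared = rng.normal(0, 3, size=(n_sims, 1))              # assumed ~3-point shared error
independent = rng.normal(0, 4, size=(n_sims, n_polls))   # assumed ~4-point per-poll noise

avg_independent_only = independent.mean(axis=1)
avg_with_shared = (shared + independent).mean(axis=1)

print(round(avg_independent_only.std(), 1))  # ~1.3 points: averaging works well
print(round(avg_with_shared.std(), 1))       # ~3.3 points: the shared error remains
```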

This is something we’ve studied a lot in constructing the FiveThirtyEight model, and it’s something we’ll take another look at before 2016. It may be that pollster “herding” — the tendency of polls to mirror one another’s results rather than being independent — has become a more pronounced problem. Polling aggregators, including FiveThirtyEight, may be contributing to it. A fly-by-night pollster using a dubious methodology can look up the FiveThirtyEight or Upshot or HuffPost Pollster or Real Clear Politics polling consensus and tweak their assumptions so as to match it — but sometimes the polling consensus is wrong.

It’s equally important for polling analysts to recognize that this bias can just as easily run in either direction. It probably isn’t predictable ahead of time.

To the extent polling bias is predictable, it may call for assessing nonpolling factors — the so-called “fundamentals” — along with the polls in each race. One simple factor is the overall partisanship of a state as measured by its past voting history. In the past, Republicans have tended to outperform their polls in red states while Democrats have done so in blue states.
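As a purely illustrative sketch, and not a description of how any forecast actually combines these inputs, blending a poll average with a partisan-lean prior might look something like this; the 25 percent weight and the example numbers are arbitrary assumptions.

```python
# Hypothetical blend of a poll average with a "fundamentals" prior based on a
# state's partisan lean. The 25 percent weight is an arbitrary assumption.

def blended_estimate(poll_average, partisan_lean, prior_weight=0.25):
    """Both inputs are Democratic margins in points; lean comes from past elections."""
    return (1 - prior_weight) * poll_average + prior_weight * partisan_lean

# Example: polls show the Democrat down 2 in a state that leans 10 points
# Republican; the blend pulls the estimate further toward the Republican.
print(blended_estimate(-2, -10))  # -4.0
```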

Many of the key races were held in red states this year. That won’t be true in 2016, and we’ll undoubtedly see some pollsters and polling aggregators draw the wrong lessons from 2014. The polls may be biased again in 2016; we just won’t know much about the direction of it until votes have been cast and counted.

CORRECTION (Nov. 5, 2:52 p.m.): An earlier version of this article misstated how much the average gubernatorial poll overestimated the Democrat’s performance. It was 3.4 points, not 4 points.

Footnotes

  1. Including the handful of polls that aren’t used in FiveThirtyEight’s forecast models.
  2. For raw data from 1998 through 2012, see here, and for 1990 through 1996, see here.
  3. Our methodology is also giving a mulligan to the polls in Kansas, since the independent Greg Orman was the second-place finisher. But that was a bad miss too: Orman was slightly ahead in the polling average but wound up losing to Republican incumbent Pat Roberts by roughly 10 percentage points.
