Editorial: A consumer's guide to polls


Now that Labor Day is past, we are sure to be inundated by one thing: presidential polls.

That makes today a good day to offer a consumer advisory on polls. They are no different from any consumer product: They are not all equal — and no, their quality is not determined by how much you like the results.

First, let’s dispense with the myth that the polls were wrong in 2016. Actually, the polls were almost dead-on; we just remember the wrong ones.

The final polls in the 2016 presidential campaign showed Hillary Clinton topping Donald Trump by an average of 3.1%. She ended up 2.1% ahead in the popular vote, well within the margin of error. That’s also closer to the mark than the polls were in 2012, when the final average showed Barack Obama ahead of Mitt Romney by 0.7% and he won by 3.9%, a 3.2% miss that was still within the margin of error. In 2008, the final polls were off by just 0.3%. In 2004, they were off by 0.9%. Take all those together and there are two lessons: Polls are pretty accurate, and the year they were off the most was not 2016 but 2012.

The catch, of course, is that’s not how we elect presidents. The Electoral College is what matters, not the popular vote. With current demographic trends (the nation’s increasingly diverse population is concentrated in a relative handful of states), it’s only going to become more likely that the popular vote winner and the electoral vote winner are different. Is it good for American democracy if small, overwhelmingly white states get a disproportionate say in choosing our president? That’s not our question today. We’re dealing with the rules as they are, not how some might wish they were.
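For readers who want to check the poll-versus-result arithmetic above, here is a minimal sketch in Python using only the figures cited in this editorial (the underlying 2008 and 2004 numbers aren’t broken out here, so only their misses are noted):

    # Final national poll average vs. actual popular-vote margin,
    # Democrat minus Republican, as cited in this editorial.
    races = {
        2016: {"poll_avg": 3.1, "result": 2.1},  # Clinton over Trump
        2012: {"poll_avg": 0.7, "result": 3.9},  # Obama over Romney
    }
    for year, r in sorted(races.items(), reverse=True):
        miss = abs(r["result"] - r["poll_avg"])
        print(f"{year}: polls {r['poll_avg']:+.1f}, result {r['result']:+.1f}, miss {miss:.1f} pts")
    # The 2008 (0.3) and 2004 (0.9) misses are cited directly above.
    # Output: 2016 missed by 1.0 points; 2012 missed by 3.2, the larger error.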

So here’s our first piece of advice: Ignore the national polls. They’re measuring something that doesn’t matter. Instead, pay attention to the state-by-state polls — and, realistically, you only need to pay attention to about a dozen of those, the so-called “battleground states.” (In a popular vote election, the whole nation would be a battleground. Under the Electoral College, it’s not.)

The three states that were the biggest surprise in 2016 were Pennsylvania, Michigan and Wisconsin. Those had been Democratic strongholds in recent elections; Trump carried all three. His wins there weren’t a big surprise, though, to those who really paid attention to the polls. Yes, through the fall the polls had consistently shown Clinton ahead in all three states. But here’s what we forget: The final poll in each state showed Trump ahead. It was a narrow lead, well within the margin of error, so it was easy to dismiss as an aberration. Instead, those final polls were right — they caught the first sign of a rising Trump tide.

What lessons can we learn from that? As Yogi Berra famously said, “It ain’t over ’til it’s over.” Or, as every politician everywhere has always said, the most important poll is the one on Election Day. Campaigns are about persuasion — persuading more of your people to turn out, and persuading undecided voters to come your way. In 2016, late-deciding voters broke heavily for Trump. One exit poll in Wisconsin found that 14% of the voters made up their minds in the final week — and they went for Trump 59% to 30%, an astounding 29-point margin. He likewise won late deciders by 17% in Pennsylvania and 11% in Michigan.

That’s why Clinton led in every poll in those states, except for the last one.
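To see how much a late break like that can move a race, here is a quick back-of-the-envelope calculation using the Wisconsin exit-poll figures cited above:

    # What late deciders alone did to the margin in Wisconsin.
    late_share = 0.14    # exit poll: 14% decided in the final week
    late_trump = 0.59    # of those, 59% voted for Trump...
    late_clinton = 0.30  # ...and 30% for Clinton

    net_swing = late_share * (late_trump - late_clinton) * 100
    print(f"Late deciders added {net_swing:.1f} points to Trump's margin")
    # About 4.1 points -- more than enough to erase a lead that was itself
    # within the margin of error.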

The other lesson we can learn from that is to pay attention to who the undecided voters are. Do they match a group that already has a clear preference for a particular candidate? Let’s take two hypothetical candidates — Flugelhorn and Hornblower. Let’s say that polls show most of the undecided voters are white men over 50 with less than a college education. Let’s also say that among the white men over 50 with less than a college education who have made up their minds, there’s a clear preference for Flugelhorn. Does that mean these undecided voters are likely to eventually break Flugelhorn’s way? Or does it mean there’s some reason Flugelhorn has yet to win their votes, which means there’s a chance they may yet go for Hornblower? That’s why campaign consultants get paid the big bucks, and why simply saying polls show Hornblower with a slight lead over Flugelhorn may not tell the full story. Remember that polls are not really predictive tools; they are merely “snapshots in time.” We just try to use them to make predictions.
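To make that concrete, here is a hypothetical sketch (Flugelhorn and Hornblower are the invented candidates above; every number is made up for illustration) showing how the same poll can yield opposite outcomes depending on which way the undecideds break:

    # A hypothetical 47-45 race with 8% undecided, broken two ways.
    hornblower, flugelhorn, undecided = 47.0, 45.0, 8.0

    scenarios = {
        "break like their demographic (70/30 for Flugelhorn)": 0.70,
        "have been resisting Flugelhorn (30/70 for Hornblower)": 0.30,
    }
    for label, to_flugelhorn in scenarios.items():
        f_total = flugelhorn + undecided * to_flugelhorn
        h_total = hornblower + undecided * (1 - to_flugelhorn)
        print(f"If undecideds {label}: "
              f"Flugelhorn {f_total:.1f}%, Hornblower {h_total:.1f}%")
    # The same "Hornblower +2" poll flips in one scenario and widens in the other.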

We all pay far more attention to polls than we should, but we’re not here to change reality. If you really want to keep up with the polls, the two best places to look are the websites Real Clear Politics and Five Thirty Eight. Both run updated lists of all the polls, so you can see the variation (or lack thereof) between them — although you may have to dig deeper into the internet to find the crosstabs that tell you things like just who the undecided voters are. Don’t have time for that? Here’s a shortcut that usually works: Is a particular candidate over 50% in the poll? If so, it doesn’t matter how the undecided voters break.
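That shortcut is simple arithmetic: once a candidate clears a majority, even an undecided bloc that breaks unanimously the other way cannot catch up. A minimal sketch, with poll numbers invented for illustration:

    # The over-50% shortcut: can the undecideds flip this race?
    leader, trailer = 51.0, 44.0          # hypothetical poll shares, in percent
    undecided = 100.0 - leader - trailer  # 5% still undecided

    # Worst case for the leader: every undecided voter breaks the other way.
    trailer_ceiling = trailer + undecided
    print(f"Trailer's ceiling: {trailer_ceiling:.0f}% vs. leader's {leader:.0f}%")
    # 49% vs. 51% -- once a candidate is over 50%, how the undecideds break
    # no longer changes the outcome (assuming the poll itself is accurate).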

Five Thirty Eight has an additional feature in its “Latest Polls” section: It grades the quality of each pollster, based on historical accuracy and methodology. That’s not to say a low-graded pollster is wrong. The pollster who fielded those final 2016 polls in Pennsylvania, Michigan and Wisconsin is actually graded C-minus — but was lucky enough to have the final sample. If you’re a baseball fan, think of that as a .250 hitter batting ninth who gets a grooved fastball and parks it in the left-field bleachers to win the game. (Or, in 2016’s case, the right-field bleachers.)

If you really want to be a poll junkie, you want to understand things like: When was the poll conducted? (A more recent survey window is always better.) Is this a poll of registered voters or likely voters? (At this stage of the campaign, you want the latter.) Is the pollster using live questioners or an automated call? (Pollsters will argue over this. Some think live questioners do a better job. Others say that automated calls make it less likely that respondents will, um, lie — particularly if the caller and respondent are of different races or genders.) What assumptions is the pollster making about the relative size of various voting groups — e.g., the percentage of voters who are Black, or under age 30? All those things can influence the results.
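That last point, the pollster’s turnout assumptions, is easy to see with a toy example. Here is a sketch with invented numbers showing how the very same interviews produce different toplines when one group is weighted to 15% versus 25% of the electorate:

    # Same interviews, different assumed electorate. Invented numbers:
    # voters under 30 favor candidate A 60/40; everyone else splits 48/52.
    support_a = {"under_30": 0.60, "other": 0.48}

    for under_30_share in (0.15, 0.25):  # pollster's turnout assumption
        topline = (under_30_share * support_a["under_30"]
                   + (1 - under_30_share) * support_a["other"])
        print(f"Under-30s weighted to {under_30_share:.0%}: candidate A at {topline:.1%}")
    # 49.8% vs. 51.0% -- a full point of swing in the headline number,
    # driven entirely by the assumption about who will turn out.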

Is that a lot to take in? Yes, it is. But before you talk up that poll that’s favorable to your side, you might want to understand what it’s really saying, and what it’s not.
