Based on clear and consistent polling data, environmental advocates went into Election Day expecting sweeping victories in North Carolina. It didn’t work out that way. Why were the polls so wrong again after misreading the electorate just four years ago?
While late votes were still being tabulated in some states, the picture in North Carolina was already clear: pre-election polls were off in key contests across the state. They missed despite most reputable pollsters taking steps to address the problems they believed they had identified after their big miss in the 2016 presidential contest.
This problem of polling reliability matters to environmental advocates in several ways. It affects our analysis of which contests we can influence, and therefore where we should direct limited resources. It shapes which arguments we need to make to persuade voters. It can even affect which issues we choose to tackle now, versus which good ideas must wait for another day because a critical mass of public support is not yet within reach.
Early post-election analysis is always subject to mistakes, and 2020 is likely to be no exception on that front. However, what we know so far suggests some likely culprits for the polling failures.
First, as in 2016, most polls here were within their estimated margin of error. But the margin of error is supposed to account for potential sampling error within a single poll, not to excuse a consistent miss in one direction across many polls. Thinking they had figured out what went wrong in 2016 and how to fix it, polling firms adjusted their models of who would vote this time. They understood that a consistent pattern had developed of white voters without a college degree voting differently, on average, from those with a college degree, and they took extra pains to poll voters from both groups.
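To see why being "within the margin of error" is little comfort when every poll misses in the same direction, here is a minimal sketch in Python. All the numbers are hypothetical and for illustration only; they are not drawn from any actual North Carolina poll.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of sampling error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical statewide poll of 800 likely voters carries roughly a +/-3.5 point margin.
moe = margin_of_error(0.5, 800)

# Hypothetical illustration: five polls, each overstating true support of 48%
# by 2 to 3.4 points -- all in the SAME direction.
true_support = 0.48
polls = [0.51, 0.505, 0.514, 0.51, 0.50]
errors = [p - true_support for p in polls]

# Each individual poll is "within the margin of error"...
all_within_moe = all(abs(e) <= moe for e in errors)

# ...yet the polls collectively miss by nearly 3 points, a systematic bias
# the margin of error was never designed to capture.
average_miss = sum(errors) / len(errors)
```

The margin of error describes random sampling noise around an unbiased estimate; when the errors all lean one way, averaging more polls does not wash the bias out.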
However, a broader error may well be involved. Across demographic groups, there appears to be a difference between those who are willing to respond to a poll and those who are not. Voters who are more skeptical of “mainstream media” reporting, less respectful of scientific expertise, and more distrustful of social institutions in general seem less likely to answer polls at all. It’s not that they’re misleading pollsters; they’re refusing to talk with them in the first place.
This means fewer answers from those who reject climate science and public health advice. It means fewer responses from those more inclined to accept wild conspiracy theories from the dark reaches of the internet than fact-checked reporting from reputable journalists.
And when certain demographic groups are harder to reach, the unrepresentative opinions of those who are reached get magnified by the polls’ weighting, skewing the ultimate results to suggest more favorable outcomes for environmental advocates.
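This weighting mechanism can be sketched with a toy example. The group shares and support levels below are invented purely for illustration; they do not describe any real poll or electorate.

```python
# Hypothetical sketch of post-stratification weighting: suppose non-college
# voters are 50% of the electorate but only 25% of poll respondents.
population_share = {"college": 0.50, "non_college": 0.50}  # assumed electorate
sample_share = {"college": 0.75, "non_college": 0.25}      # who actually answered

# Each non-college respondent gets a weight of 0.50 / 0.25 = 2.0, so any
# unrepresentativeness among the few who DID answer counts double.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Illustrative numbers: the non-college voters willing to answer skew 5 points
# friendlier to our side (0.40) than non-college voters overall (0.35).
support_in_sample = {"college": 0.55, "non_college": 0.40}
true_support = {"college": 0.55, "non_college": 0.35}

weighted_estimate = sum(
    sample_share[g] * weights[g] * support_in_sample[g] for g in weights
)
true_value = sum(population_share[g] * true_support[g] for g in true_support)
```

In this sketch the weighted poll reads 47.5% while true support is 45%: the weighting correctly restores the group's share of the electorate, but it cannot fix the fact that the respondents within that group were unrepresentative, and it amplifies their tilt.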
This doesn’t mean polling should be abandoned. It does mean that for polls to provide the useful guidance we need to inform our advocacy for a clean environment and public health, polling methodology will have to adapt to a shifting public information environment.