Nate Cohn at the New York Times has good reason to be confident. The Times’ polling partner, Siena College, is ranked the best of all pollsters by 538. But we have seen very good pollsters make big misses in the past, and Siena’s February presidential poll suggested it might be in the process of making one this year. Or maybe there are significant shifts in support for Democrats among important demographic blocs. Or it could just be a lot of noise, and Siena will course-correct as the fall election approaches and voters start really paying attention. There is a lot of time left before the election, and 538 has traditionally judged pollster accuracy by polling taken very close to an election. Notably, two weeks ago Siena issued a new national poll that looks a lot more like what we might expect at this point in the cycle. Maybe that’s why Siena is at the top of the rankings.
It’s a cliché to say that the only poll that matters is the one on election day. We usually hear that from losing campaigns trying to deflect attention from poor polling, which many organizers assume depresses turnout for their side. But election results are important, and when we try to understand what is likely to happen in a regular national election, polling is not the only resource we have. There have been dozens of special and off-year elections since the last national election in 2022. The results of those elections are evidence of how actual voters are actually voting, and we have seen Democrats overperforming everywhere. Prior performance is not a predictor of future performance, particularly in different electoral districts. But it does give us a sense of where the electorate is when we see consistent trends across districts throughout the country – which we have seen since 2022, and in fact since 2018.
It’s even more persuasive when election results and polling tell the same story. Here is the problem with Siena’s polling – and the Times’ reporting. It is not unusual – but it is problematic – that media outlets that sponsor polls treat those polls as definitive statements of public opinion rather than as one of a number of competing snapshots in time, each with its own methodological challenges. The Times handles this better than most, in my opinion, but it still centers its own polling. That is what media outlets operating on a profit motive do.
In a series of Twitter posts with polling analyst Adam Carlson, Cohn would have us believe that polling is more persuasive than election results. He criticizes those who prioritize election results over polling as choosing the data that supports their political narrative. But he is committing the same sin in reverse: the Times has consistently privileged its polling over the story election results are telling us1 – even when there is very good reason to believe that something may be wrong with a poll’s methodology or that the poll is an outlier. When election results and polling support the same narrative, that is when we can be most confident something is true.
The problem here is the incentive the media has to play up the polling it sponsors. Nate Cohn is no idiot – in fact, he is one of the smartest polling analysts out there. When you read his analysis you will find appropriate caveats, but they are buried in text that, my guess is, very few readers ever reach. That is likely an editorial decision, not Cohn’s.2 Most readers – including most readers interested in politics – will only see the headline and graphics associated with the survey. They might read the first couple of paragraphs, but they are not going to get into the weeds of the crosstabs and methods.
Look at the implausible results last weekend in two polls sponsored by competing media outlets that were in the field at the same time. CNN reports that Trump is beating Biden by nine points! That result is a clear outlier among other polling, which has not only shown the race to be close but has shown Biden slowly and consistently gaining support over the past several weeks. If you only listen to CNN, you will think Trump is blowing Biden out of the water. Contrast that with the viewer who watches CBS News. Its polling – again, in the field during the same period as CNN’s – finds that the race is essentially tied in a number of battleground states. Both polls cannot be correct. There is no way Trump is beating Biden by nine points nationally while being tied with him in Michigan, Pennsylvania, and Wisconsin. Trump could be winning if the latter is true, but not by nine points. Those states were essentially tied in 2020, and Biden won the national vote by 4.5 points.
Perhaps we should listen to the estimable political analyst Larry Sabato of UVA’s Center for Politics: there is no sense in paying attention to this polling until we get past the conventions this summer. Few people follow politics as closely as those of us who do tend to imagine, and we know that most voters do not really tune in to an election year until after Labor Day.
It seems clear right now that pollsters are having a hard time modeling the likely electorate this fall. The strange results we are seeing have in part to do with how pollsters weight the responses they get to fit what they believe will be the demographic makeup of the electorate. If we want a random sample of the population of Oakland, California, we can compare our sample to the demographic data we have about the city. That makeup may have changed since the latest data, but probably not so significantly that it would render our weights useless. Sampling an electorate, by contrast, involves guessing what is going to happen in the future – and we know that electorates do not look like the general population, and not always for the same reasons.
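To make the weighting idea concrete, here is a minimal sketch with entirely made-up numbers: the age groups, target shares, and responses below are hypothetical, not from any actual poll. It shows how a pollster inflates under-represented groups and deflates over-represented ones to match an assumed population.

```python
# Post-stratification weighting sketch (all numbers are invented for illustration).
# Suppose our sample over-represents voters 65+ relative to our population targets.

population_share = {"18-29": 0.20, "30-64": 0.55, "65+": 0.25}  # assumed targets
sample = [
    # (age_group, supports_candidate_A)
    ("18-29", True), ("18-29", False),
    ("30-64", True), ("30-64", True), ("30-64", False),
    ("65+", False), ("65+", False), ("65+", True), ("65+", False), ("65+", False),
]

n = len(sample)
sample_share = {g: sum(1 for a, _ in sample if a == g) / n for g in population_share}

# Weight = target share / observed share: boosts under-sampled groups,
# shrinks over-sampled ones.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

raw = sum(1 for _, s in sample if s) / n
weighted = sum(weights[a] * s for a, s in sample) / sum(weights[a] for a, _ in sample)

print(f"raw support: {raw:.1%}, weighted support: {weighted:.1%}")
```

The catch the paragraph above describes is the `population_share` dictionary: for a city we can take it from census data, but for an electorate it is a forecast of who will turn out, and if that forecast is wrong, the weighting bakes the error into every topline number.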
Additionally, we know there are serious challenges in getting people to respond to surveys – which may be why more pollsters are moving toward non-probability sampling. This has been an ongoing challenge for many years, but there are signs that the cost of getting enough responses for a usable sample size is becoming prohibitive. Pollsters are finding that panels in which voters opt in to participate are cheaper and more reliable. However, those panels typically cannot be used in the same way random samples can. I will be at the American Association for Public Opinion Research (AAPOR) conference in a couple of weeks, where I hope to learn more about how pollsters are using these panels and non-probability surveys more generally.
One final possibility we have to consider is a non-response bias that favors Trump, or Republicans generally. This would mean Republican-leaning voters are more likely to participate in surveys than Democratic-leaning voters (this is not about party affiliation, it is about how people vote). In some recent elections we noticed a non-response bias that favored Democrats, so the phenomenon is not unheard of. That bias was thought to be the result of Republican voters distrusting institutions such as polling and refusing to participate. We do not really know why it happened, but that is as good a hypothesis as any; it certainly sounds plausible. Now we are seeing some weird things, like Trump running almost even with Biden among young voters and significant movement of Black and Latino voters toward Trump. Maybe pollsters are only reaching the members of these demographic blocs who support Republicans, for any number of reasons. We have to at least consider that possibility. Maybe we will find out in November that those shifts are all real. But for right now, they seem strikingly implausible.
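The mechanics of differential non-response can be shown with a toy calculation. The response rates below are invented for illustration; the point is only that even a modest gap in willingness to answer the phone shifts the observed partisan mix well away from the true one.

```python
# Toy illustration of differential non-response (all rates are made up).
# True electorate: 50% Republican-leaning, 50% Democratic-leaning.
true_rep_share = 0.50
resp_rate_rep = 0.012   # suppose Republican leaners respond at 1.2%
resp_rate_dem = 0.008   # and Democratic leaners respond at only 0.8%

# Among the people who actually complete the survey, Republican leaners
# are over-represented relative to the true electorate.
rep_responses = true_rep_share * resp_rate_rep
dem_responses = (1 - true_rep_share) * resp_rate_dem
observed_rep_share = rep_responses / (rep_responses + dem_responses)

print(f"observed Republican-leaning share: {observed_rep_share:.1%}")
```

With these numbers a perfectly even electorate shows up as 60/40 in the raw responses – and unless the pollster knows the response-rate gap exists and weights for it, the topline inherits the skew.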
1. However, to be fair, in February 2024 the Times did publish an interesting analysis, written by Cohn, of turnout in special elections versus other elections.
2. Although it could be considered consistent with the inverted-pyramid style journalists are trained to follow.