One of the biggest sources of frustration I have is with how the media interprets polling results. I think it comes down less to journalists not understanding polling - although that is certainly a problem at times - than to a desire to have a story about the horse-race. And there needs to be tension for the horse-race narrative to take hold. Blow-outs don’t make for a lot of good stories, but really close races do.
Now, to be clear, I am not suggesting there will be a blow-out in this year’s presidential election (although I do think there are encouraging signs that Harris could win by a bigger margin than the polling currently suggests). Harris looks like the better bet to win the election, but she could also lose, so let’s not get too confident that this election is anything but close at this point. The best available data we have right now suggests that’s what it is.
I want to turn to something a bit more obscure to illustrate my point. There are five measures on the ballot in Massachusetts this year. Question 2 - sponsored by the Massachusetts Teachers Association, the larger of the Bay State’s two teacher union federations - would eliminate the use of a standardized test called the MCAS as a high school graduation requirement. The University of New Hampshire released a poll last week that has the MCAS question losing 40-38 with 22% undecided. The other questions have majority or plurality support.
If the media reports are to be believed, the race is even. But it’s not. In fact, it is not even close. This measure is going to lose by at least ten points unless the MTA can dramatically change the narrative in its favor. There are some political reasons why I doubt MTA can do that, but let’s put that aside for now. The reason I am so confident that this result does not show an even race is the nature of polling on issues and ballot measures, and what we know about how voters approach voting on them - things that seem to be missing from the media analysis of the results on Question 2.
Boston.com said this: “many are divided on Question 2, which would eliminate MCAS as a high school graduation requirement.”
Politico said this: “The closest contest, the UNH poll found, was on the question that would remove MCAS as one of the requirements for high school graduation (Question 2). The “no” side polled slightly higher, notching 40 percent to the “yes” side’s 38 percent — though within the poll's margin of error.”
The first thing we have to do when interpreting the results of ballot measure polling is to remember that ballot measures are not candidates, and that voters approach them differently than they do candidates. The reason we see voters say they support gun control and then elect people who oppose gun control is that voters are not usually voting for a candidate based on one or even a few issues. It’s usually much more complex than that - or it can even be simpler: voting strictly for partisan purposes, for instance. But few voters seem to use a single issue as the determinative factor in deciding to cast a vote for a candidate (although it is possible that this year, abortion rights could be just such an issue).
It is also because ballot measures are not people being elected to represent voters in enacting legislation; they are legislation. Voters are lawmakers when they cast a vote for a ballot measure. It is a form of direct democracy in which the entire point is to focus on a single issue. We have seen time and again that voters are much more wary of changing existing law through ballot measures than they are of making new law. Not that it doesn’t happen - it does - it’s just a lot more difficult. And nearly impossible if you need to convince nearly a quarter of the electorate that is undecided six weeks out from the election.
There are two things that are typically misunderstood about ballot measures. The first is the connection between issue polling conducted before a measure reaches the voters and the support a ballot measure dealing with that issue eventually gets. It’s not unusual to see activists point to polls showing that majorities favor their position on an issue, only to be disappointed to find out, after spending a lot of time, effort, and money, that they lose at the ballot box.
The blame is often placed on an opposition that outspends them (even when they don’t actually outspend them) or misrepresents the issues, or on something unfair about the system. Those complaints have merit from time to time and from place to place, but the bigger problem is misunderstanding the difference between what people will say they are for when there are no real-life consequences to consider and what they will do when there are. It is also sometimes not true that the issue polling and the ballot measure are asking the same thing. For instance, polls that may have shown that people in Massachusetts do not like “high-stakes testing” and that they “trust their child’s teacher” are not in fact asking the same question as “do you support eliminating the MCAS requirement for high school graduation and not replacing it with an alternative?” (Which, with apologies to MTA, is in fact what this measure would do.)
And that leads us to the second problem: most undecided voters and a not-insignificant number of people who previously supported a ballot measure will, in the end, vote no on any ballot measure that would change the status quo.
A Nevada ballot measure to expand background checks for purchasing guns in 2016 helps illustrate these points. Issue polling a year or so out from the election suggested overwhelming support for gun control measures. From this, gun control activists deduced that they could pass the most popular provision – expanded background checks – easily in a ballot measure. As the election drew closer, the polling showed a clear victory ahead for the measure, but in the end it barely passed with slightly over 50% support. A marginally different voter turnout might have resulted in the measure’s defeat. I have a more detailed analysis of what happened in Nevada, and about ballot measure polling more generally here.
The gun control measure in Nevada went from nearly 90% support in the issue polling to a bare majority on election day. The reasons were clear. While the campaign the opponents conducted made an impact, it did so precisely because it made voters feel more uneasy about changing the law – something a lot of them were uneasy with from the outset. Many people might not admit it (although in my experience, a lot actually do), but voters often have no idea how they should vote on ballot measures. Some are really complex; others are not, but many voters do not feel like they know enough to enact legislation (that is why we vote for legislators, isn’t it?).
And then there are the campaigns that make something clear look insidious – like the disingenuous campaign being conducted against California’s Proposition 33 to make it look like rent control will hurt renters by using spokespeople who allege they are in favor of rent control but “when they looked into it” discovered it was going to hurt people like them. Yes, it is a campaign funded by deep-pocketed builders and investors, not housing activists. Voters can figure all that out themselves, but what this ad campaign really does is tell voters that they are not smart enough to solve the housing crisis so they are only likely to make it worse by changing the law. This is something research has shown us that voters are already worried about.
What’s interesting about the Question 2 poll is that we have a recent Massachusetts ballot measure that provides further illustration of the phenomenon. In 2022, Massachusetts voters passed Question 1, which was known as the Fair Share Amendment or the Millionaire’s Tax. This measure was also supported by the MTA, but as part of a broader coalition. Two years before the election, a MassINC poll found that 72% of respondents said that they would support an additional 4% tax on annual income over $1 million. A year later, MassINC found 69% support for the same question. The members of what became the Fair Share Coalition were certain that the measure would be easily passed.
No one, of course, expected that there would be no strong opposition or that the margin would not narrow. After all, Massachusetts voters had multiple times voted against creating a progressive income tax structure for the state. But with 70% support, they had a big cushion. Or so it was believed. In October 2022, polling showed less support for Question 1 but still nearly 60% support. The measure passed with just over 52% support.
MTA put a lot of resources into fieldwork in the last month of the campaign, and that might have helped make sure the measure won. Campaigns still matter, even in ballot measure races. But with this year’s Question 2 at 38% support five weeks out from the election, MTA will need to see better results in its internal polling if it believes it has a chance to win this one. Question 1 was in much better shape at this point in 2022 than Question 2 is today. Media outlets should understand this rather than report the race as even (which people translate into “toss-up”). It is not a toss-up. Question 2 is headed for near-certain defeat.
That’s nice, but how does this help me understand the presidential polls?
I am sure you are thinking that by now. How does this help me understand the polling I am most interested in this year: the presidential election? The presidential election is a candidate election, not a ballot measure election. The phenomena discussed above are not directly transferable to candidate polling, but the idea of understanding how voters behave is important to understanding candidate polling as well.
Okay, so UNH has a poll out, and the press says Question 2 is even. In a sense it is: the result is 40-38 against. That looks even. But we see from how ballot measure polling works that this is a terrible result for the MTA. They need to convince over half of the undecideds to vote yes while making sure none of their current support wavers. And they have about 45 days to do so (or less when people start voting early). So, maybe it’s an outlier? Maybe Question 2 is really winning. Here’s why that is probably not true. And it is this method that helps us better understand any election polling, including for president.
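To make the undecided arithmetic above explicit, here is a minimal back-of-envelope sketch. It takes the UNH toplines at face value and assumes, purely for illustration, that current supporters stay put and that every undecided respondent ultimately votes on the question:

```python
# Back-of-envelope sketch of the Question 2 arithmetic (illustrative
# assumptions: decided voters stay put, all undecideds eventually vote).
no_pct, yes_pct, undecided_pct = 40.0, 38.0, 22.0

# To win, "yes" must get past 50% of the vote.
needed_points = 50.0 - yes_pct                    # 12 more points needed
share_of_undecided = needed_points / undecided_pct

print(f"'Yes' must win {share_of_undecided:.0%} of undecideds just to reach a tie")
```

Under those assumptions the yes side needs roughly 55% of undecideds just to pull even, and more than that to win - while also holding every current supporter. That is what makes a 40-38 result with 22% undecided a bad position, not a toss-up.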
The UNH poll was not just about Question 2. It was about all five questions and the reelection campaign of Sen. Elizabeth Warren (D-MA). Those results all look plausible. Because the same voters were asked the same questions, the result for one of the questions cannot be an outlier. The entire poll would have to be an outlier, and the responses to the other questions are about what we would expect to see. That does not mean the poll is correct, but it does mean that if the results for Question 2 are wrong then the rest probably are too.
This is a way to gut-check a poll. Does it seem plausible? Are the other results and data in the poll consistent with what we would expect? If the answer is no (as it is in today’s New York Times/Siena Arizona poll), then maybe there’s little reason to concern yourself about bad results. If the answer is yes (as it is in the UNH poll), then pay attention. In either case, the more important thing over time is seeing whether the results of the poll in question are replicated in other polling. It’s always important to remember that polls do not forecast election results; they provide the proverbial snapshot in time. It is possible that everything about the UNH poll is correct for the sample surveyed during the field period, but that the results on Question 2 are not the same for the larger population. It is possible. The question to ask yourself is whether it is plausible.
The results of the latest NYT/Siena poll are helpful to this discussion. Let’s look at the Arizona results. Overall, Trump has gained five points there since the last Siena poll of the state a month ago. This means Trump’s support has increased significantly since the debate. That is the first thing that should strike you as implausible. Not impossible, but not likely. It is possible that the overall result (Trump +5) is correct, but that the change is not, and the earlier poll was incorrect. But the direction and magnitude of the change seems unlikely. Possible? Yes, absolutely. Plausible? I don’t think so.
One thing we have seen across polling this year is that Harris has enjoyed a significant advantage among women in this election while Trump has led among men. The gender gap may be the largest ever seen in a US presidential election. However, in Arizona - a state that voted for Biden in 2020, voted for Democratic women as governor and attorney general in 2022, and that has an abortion rights measure on the ballot - Harris is up among women by just five points. Does that sound right? It doesn’t to me. Especially coming after the debate. I don’t see how it is plausible that Trump gained support from women in the past two weeks.
Demographics are not the same as issue questions, but they both tell us something about the kind of people in the sample, and therefore both can give us an idea of whether the results we are seeing can be trusted to be representative of the larger population. Remember the AtlasIntel poll that had Trump up nationally by three points? That poll had Harris favored as best on abortion rights by two points and Trump best on healthcare by five points. Those two data points alone do not pass the smell test. They tell us that there is probably something wrong with that sample, and the overall results are thus unreliable.
Is it plausible that Harris is only polling 50% among women in Arizona? If it’s not, then the overall results are not plausible. If Harris is really winning 55% of women, then she would be even with Trump. Yesterday, NBC News released a national poll showing Harris beating Trump among women 58-37. That’s a 21-point margin! While we might assume that a larger proportion of women in Arizona support Trump than at large, is there really a sixteen-point swing there? Maybe Trump won over Arizonan women in the last few weeks? It’s certainly possible, but I doubt it.
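The swing arithmetic above can be spelled out the same way - a hypothetical back-of-envelope check using nothing but the toplines from the two polls cited:

```python
# Comparing the Arizona crosstab to the national picture among women
# (toplines from the NBC national and NYT/Siena Arizona polls cited above).
national_harris_women, national_trump_women = 58, 37  # NBC national poll
arizona_margin_women = 5                              # NYT/Siena Arizona poll

national_margin_women = national_harris_women - national_trump_women  # 21 points
implied_swing = national_margin_women - arizona_margin_women          # 16 points

print(f"Implied Arizona-vs-national gap among women: {implied_swing} points")
```

A sixteen-point state-versus-national difference in one demographic is the kind of number the plausibility check is meant to flag: not impossible, but large enough that the sample, not the electorate, is the more likely explanation.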
Again the question for Siena’s Arizona poll is: are these plausible results? If not, then the overall result is probably not plausible. This is because it suggests that the make-up of the sample is not actually representative of the electorate.
That brings us to another problem: what is the electorate? It’s not something that actually exists right now, and will not until all the votes have been cast. In fact, it is not really complete until all votes cast have been counted and certified because some votes will be rejected. This is a tricky problem for pollsters and the source of so much confusion for the public when polling misses the mark. It deserves more discussion than I have room for today, but I will leave you with a few observations.
Voter registration trends have been looking good for Democrats. A lot of young people and especially women of color have registered to vote in recent months. That gives us an idea of what the electorate might look like, especially since about 80% of people who register to vote in a presidential year actually vote.
When the early and mail vote data starts becoming available we will have an even better sense of what the electorate will look like.1
It is difficult to believe that after overperforming in just about every election held since the Dobbs decision, the Democrats will underperform in November. But it is absolutely possible. AtlasIntel might have it right and the others wrong; perhaps Harris is only slightly better on abortion rights among the electorate than Trump is. Perhaps the race is a lot closer among women than seems likely. It’s hard to believe, but we’ll know for sure soon. In the meantime: campaigns matter!
Organize, mobilize, and vote!
Tom Bonier does an excellent job tracking and reporting on voter registration and early vote data. I highly recommend you follow him.