Did Trump’s indictments benefit him politically?
There has been some concern that the prosecutions of Trump only help him politically. When you hear that from Trump supporters, it tells you everything you need to know: if they thought the indictments really helped him, they wouldn’t be so upset about them. But when pundits and Democrats say the same thing, does that mean it’s a real phenomenon? The short answer is no. Democrats have a history of “bedwetting” around election time, and pundits so often repeat talking points until they harden into “conventional wisdom,” so it would be nice to have some actual research on the question. At AAPOR last week, just such research was presented on whether the classified documents indictment made a difference in voter support for Trump.
The problem here is that a randomized controlled trial (RCT) cannot be used for a sudden and unforeseeable event like the announcement of an indictment. An RCT is the classic methodology for comparing a treatment group against a control group to determine whether an event actually produces an effect; once the event has already happened to everyone, an RCT is not an option. With an RCT unavailable, another approach would be simply to ask voters the before/after question: “how did you feel about Trump before the indictment, and how do you feel now?” But that runs into a problem called “response substitution,” in which both supporters and opponents tend to exaggerate the influence of an event. So that wouldn’t be helpful either. Instead, the researchers who presented at the conference last week tried a different method: the counterfactual.
With the counterfactual method, voters were first asked “how likely are you to support Trump?” and then asked a follow-up: “how would you have answered that question if you did not know about the indictment?” This method comes closer to an RCT than the before/after question does. It’s not perfect, but it forces respondents to consider what they would have thought about Trump absent the indictment only after they have expressed their current level of support. The results of the survey showed that the indictment hurt Trump a little among Republicans and not much among independents. Consequently, there appears to be no real impact on the election. While Trump supporters might get louder on social media as a result of Trump’s legal problems, the research so far shows it is not changing anyone’s level of support.1
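To make the mechanics concrete, here is a toy sketch of how an effect estimate can be computed from paired actual/counterfactual answers. This is my own illustration with invented numbers, not the researchers’ data or code; the 0–10 support scale and the party groupings are assumptions.

```python
# Hypothetical sketch of the counterfactual approach: each respondent gives
# their current support for Trump and the support they would have reported
# absent the indictment. The within-respondent difference, averaged by
# party, stands in for the effect estimate. All data below are made up.
from statistics import mean

respondents = [
    # (party, support now on a 0-10 scale, support if no indictment)
    ("Republican", 8, 9),
    ("Republican", 9, 9),
    ("Independent", 5, 5),
    ("Independent", 4, 5),
    ("Democrat", 1, 1),
]

def effect_by_party(data):
    """Average (actual - counterfactual) support within each party."""
    parties = {party for party, _, _ in data}
    return {
        p: mean(now - cf for party, now, cf in data if party == p)
        for p in parties
    }

print(effect_by_party(respondents))
# Negative values mean the indictment lowered that group's support.
```

The point is simply that the within-respondent difference, averaged by group, substitutes for the treatment-versus-control comparison an RCT would have given.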
How important is the nonresponse impact for polling?
We have heard a lot about partisan nonresponse since the 2016 election. There does appear to be a phenomenon in which people choose not to take surveys for partisan reasons, and that was never the case before 2016. There was always a portion of the population that did not respond to surveys, but the nonrespondents looked about as representative of the population as the respondents did, so nonresponse did not damage the representativeness of the sample.
We still are not sure what is going on now, because when we can’t reach people to ask them, we don’t really know. In 2016 and again in 2020, there appeared to be partisan nonresponse among certain Republican voters, which made the polling look better for Democrats than the results turned out to be. However, it might be more accurate to say that the electorate modeling was wrong in 2020. Most polling used a turnout model based on 2016, but actual turnout was about 11 million voters higher than that model would predict. To be fair, there could be a relationship here: partisan nonresponse in the polling could trick election modelers into thinking turnout would be lower and more like 2016.
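To see why differential nonresponse matters, here is a minimal, hypothetical illustration (all numbers invented, not drawn from any real poll) of how a slightly lower response rate on one side skews an unadjusted topline even when the electorate is evenly split:

```python
# Hypothetical illustration of partisan nonresponse (all numbers invented).
# The electorate is split 50/50, but if one side responds to surveys at a
# lower rate and the sample is not adjusted for it, the topline estimate
# shifts toward the side that responds more readily.
electorate = {"Dem": 0.50, "Rep": 0.50}      # true shares of the electorate
response_rate = {"Dem": 0.06, "Rep": 0.05}   # Rep voters respond a bit less

responding = {p: share * response_rate[p] for p, share in electorate.items()}
total = sum(responding.values())
observed = {p: v / total for p, v in responding.items()}

print(observed)  # Dem ~0.545, Rep ~0.455: a ~9-point gap from a tied race
```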
In 2018, the polling was pretty good, which suggests that Trump’s presence on the ballot is somehow related to this partisan nonresponse. People seem to find this hard to believe, but I am not convinced we aren’t seeing a Democratic partisan nonresponse in the polling this year. Democrats have been overperforming in every election since 2020, and there is reason to believe that Biden’s low approval numbers may be more a register of disagreement with certain policies than an indicator of voting intent. We don’t really know. I can speculate about why partisan nonresponse might have flipped this year, but speculation is all it would be.
The interesting thing here is that some pollsters at AAPOR last week do not think nonresponse is having a big impact on the polling, but that does not mean it is not a problem. Whether it is a problem depends on (1) what the purpose of the polling is and (2) how close the election might be.
Polling is conducted largely for two types of sponsors: media and campaigns. The media want to know how the horserace is going. Campaigns want to know where there are persuadable voters, how to appeal to them, and where to deploy resources. Getting the head-to-head numbers right is much more important for the media than it is for campaigns. I know this might sound counterintuitive, but campaign polling is in some ways more akin to marketing surveys. If you know that as much as ten percent of a given population is open to buying your product, and you know who they are, you can target your message and perhaps get them to buy it. It’s as true for a candidate as it is for toothpaste. Whether that ten percent is actually eight percent or twelve percent is not that important. But it is for media-sponsored polling.
Media-sponsored or public polling wants to get the election right at any given point. That means Biden winning the election 51-49 when the final polling said Trump was winning 52-48 is a bad outcome for the media and with the public (and, consequently, for pollster reputations), even if both results were statistically consistent with the final polling (say, within a margin of error of +/- 4 points). Sure, campaigns want to know whether they are winning, but that is not what their operations need out of the polling data. And campaigns are more likely than the public (and even some reporters) to appreciate the error margins in the polling.
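For anyone who wants the arithmetic behind that “+/- 4 points” example, here is a quick sketch using the standard margin-of-error formula for a proportion; the sample size of 600 is a hypothetical, typical poll size, not a figure from any specific survey.

```python
# Standard 95% margin of error for a proportion: 1.96 * sqrt(p*(1-p)/n).
# A sample of ~600 (a hypothetical but common state-poll size) gives
# roughly +/- 4 points, so a 52-48 poll and a 51-49 outcome overlap.
from math import sqrt

def margin_of_error(p, n, z=1.96):
    return z * sqrt(p * (1 - p) / n)

n = 600
moe = margin_of_error(0.52, n)
print(f"MOE at n={n}: +/- {moe * 100:.1f} points")  # about +/- 4.0 points
print(f"The poll's 52% could plausibly be anywhere from "
      f"{(0.52 - moe) * 100:.0f}% to {(0.52 + moe) * 100:.0f}%")
```

With the poll’s 52% plausibly anywhere from about 48% to 56%, a 51-49 result falls comfortably inside that range.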
So a small nonresponse impact, which is what some pollsters think we have right now, is not a problem for polling used for persuasion, but it is a big problem for predicting the outcome of a close race, and thus for pollster credibility.
An interesting coda to this was a question to a panel of pollsters about where the public can draw the line between quality and non-quality pollsters. The simple answer everyone agrees on is transparency. AAPOR has a really good set of standards, and any pollster that adheres to them is likely a quality pollster. However, transparency is tough for some private firms whose clients are not interested in, or refuse to agree to, having methods published publicly, and some campaigns worry about revealing strategy if they provide too much transparency about what they are doing. One pollster, who works with independent expenditure committees, said that even with those concerns we should all be skeptical of any polling that withholds the demographic and partisan composition of its samples; even a campaign can provide this without risking any strategic disadvantage. If transparency doesn’t settle whether a pollster is a quality one, consider the track record. A good pollster will have hits and misses, but will not be wildly wrong very often and will have plenty of polls to consider.
1. However, the indictments do appear to have benefited Trump’s fundraising, or at least produced short-term bumps in it; considered over the long term, the impact may be negligible. The research presented at AAPOR did not address fundraising, although the researchers did acknowledge that it might have been a benefit.