Election polls are one of the main currencies of U.S. presidential elections, the slivers of information that form the foundation of political strategies, the wellspring of campaign optimism and the global handicapping of who will be next to occupy the White House.
But the Donald Trump era has tarnished the business of assessing public opinion. In 2016, polls did well at gauging the national vote percentages, but roughly 90 per cent of polls wrongly predicted that Mr. Trump would lose. In 2020, polls correctly predicted the outcome, but still underestimated Mr. Trump’s national performance so badly that they produced the worst polling error of any presidential election in the past 40 years. Polling at the state level had the worst rate of error in 20 years.
Four years later, the surveys that underlie today’s best estimates of the state of the election are the product of techniques and statistical computations modified to correct those previous errors.
One of the key questions in this year’s U.S. election is whether those changes have brought the polls into line with the leanings of the electorate – one that, today, looks evenly divided – or whether they have introduced a new set of mistakes that could once again lead to unexpected election-night results.
After the 2016 election, pollsters generally agreed that one of the main problems lay in the under-representation in surveys of white Americans without college degrees. To adjust, many made educational status a critical element of the population samples they polled, alongside demographic indicators such as ethnicity and age.
“That’s what we did in 2020. But that clearly was not sufficient,” said Chris Jackson, who leads the Ipsos public polling practice in the U.S. “The real understanding since 2020 is that there’s some other factors, and some politically related factors, that we need to adjust for.”
Polling in the last two weeks of that election put Joe Biden a full 3.9 points ahead of his actual national finish. State-level polls saw an even greater gap.
An American Association for Public Opinion Research review pointed to problems in reaching sufficient numbers of supporters of Mr. Trump, who had told them that polls were “fake.”
“These statements by Trump could have transformed survey participation into a political act whereby his strongest supporters chose not to respond to polls,” the review found. (Midterm polls in 2018 and 2022, when Mr. Trump was not on the ballot, performed better, although for reasons that are not perfectly understood.)
Four years later, the polling industry has taken seriously the need to ensure a poll sample does not merely reflect the demographic makeup of the country, but also its political makeup. Many pollsters now use statistical and data-collection methods to ensure that polls reflect the partisan breakdown of the areas being surveyed.
To achieve that, pollsters have asked those they contact to disclose who they voted for in 2020. They have also used voter files that list registered party affiliation in order to call the correct ratio of people.
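In rough terms, the adjustment works by weighting: if a group is scarcer in the sample than in the benchmark, each of its respondents counts for more. Below is a minimal sketch of that idea, using invented numbers and recalled 2020 vote as the benchmark; it is an illustration of the general technique, not any pollster’s actual procedure.

```python
from collections import Counter

# Invented raw sample: each respondent's recalled 2020 vote.
sample = ["biden"] * 520 + ["trump"] * 400 + ["other"] * 80

# Assumed benchmark: the partisan breakdown the pollster believes
# describes the population being surveyed (hypothetical figures).
target = {"biden": 0.51, "trump": 0.47, "other": 0.02}

n = len(sample)
observed = {k: c / n for k, c in Counter(sample).items()}

# Each respondent is weighted by target share / sample share, so the
# weighted sample matches the assumed partisan breakdown.
weights = {k: target[k] / observed[k] for k in target}

for group in target:
    print(f"{group}: sample {observed[group]:.1%}, "
          f"target {target[group]:.1%}, weight {weights[group]:.2f}")
```

Real polls typically weight on several variables at once – education, age, race, recalled vote – but the core move is the same: scarce groups count more heavily, which is also why the choice of benchmark matters so much.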
“That’s driving a lot of what we understand about what’s going on,” Mr. Jackson said.
But, he added, “those benchmarks themselves are not necessarily perfect.”
Take the voter files. It’s only possible to locate phone numbers for about half of those who are registered to vote, he said. Voters who are less wealthy or move frequently – such as college students – may be under-represented.
Relying on recollections of past votes is also fraught, since people have a tendency to misremember.
Making political adjustments “does help to tamp down any bias that you might have in over-representing one side or the other,” said Marjorie Connelly, a senior fellow at The Associated Press-NORC Center for Public Affairs Research.
The concern, however, is “does it over-compensate, and does it eliminate some of what you’re trying to get at in the poll? Does it make it difficult to see real change that’s actually happened in the electorate?”
She likens it to “fighting the last war.”
“If you fix it for what happened last time, it may not happen this time,” she said.
In an article published this week, Vanderbilt University political scientist Josh Clinton illustrated this point by taking a survey collected in mid-October and subjecting its results to various adjustments. He reweighted the same data to the demographics of each of the past few elections, to the differing party-affiliation percentages identified by various research groups, to varying assumptions about how many new voters will cast ballots this year, and to changed expectations about which party’s loyalists are more likely to vote.
Those adjustments, each made to the same survey data, produced wildly different outcomes, ranging from a near-even split to a nine-point advantage for Kamala Harris, the Democratic candidate.
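A toy version of that sensitivity, with invented numbers, shows the mechanism: the same within-group support rates produce different toplines depending on which party-identification benchmark the weights are pinned to.

```python
# Invented numbers: Harris support within each party-ID group,
# estimated from a single set of raw survey responses.
harris_support = {"dem": 0.95, "rep": 0.04, "ind": 0.48}

def weighted_topline(party_shares):
    """Weight each party-ID group to an assumed share of the
    electorate and return the overall Harris estimate."""
    return sum(share * harris_support[g] for g, share in party_shares.items())

# Two defensible-looking assumptions about the electorate's party-ID split.
benchmark_a = {"dem": 0.33, "rep": 0.33, "ind": 0.34}  # near-even electorate
benchmark_b = {"dem": 0.37, "rep": 0.29, "ind": 0.34}  # Dem-leaning electorate

print(f"Topline under A: {weighted_topline(benchmark_a):.1%}")  # about 49%
print(f"Topline under B: {weighted_topline(benchmark_b):.1%}")  # about 53%
```

More than three points of movement comes from the benchmark choice alone, before any sampling error enters.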
“There’s no way to know what is ‘wrong’: the poll itself, or the assumptions about the electorate that are built into the weighting scheme,” said Prof. Clinton, who chaired the American Association for Public Opinion Research review of 2020 polling.
Some pollsters argue that the solution lies in outright rejecting the adjustments made in response to past election misses. Attempting to “make your data look like something you think it should look like – I think that’s treacherous,” said Ann Selzer, a celebrated Iowa pollster.
She prefers what she calls “the elegant approach – less is more.” She does not weight for education or political preference. She does not use databases of registered voters. Instead, she uses phone banks that dial numbers at random, a method that can reveal changes in who is likely to vote.
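In its simplest form, random-digit dialling constructs numbers instead of drawing them from a list, which is how it reaches people who are absent from voter files. A bare-bones sketch, with made-up area codes and exchanges; real designs draw working prefixes from telephone databases and dial far more numbers than they ever complete.

```python
import random

# Made-up working area-code/exchange pairs for the region being surveyed.
known_prefixes = [("515", "244"), ("515", "280"), ("319", "351")]

def random_digit_number():
    """Build a candidate phone number by attaching four random digits
    to a known working prefix."""
    area, exchange = random.choice(known_prefixes)
    return f"{area}-{exchange}-{random.randint(0, 9999):04d}"

# Dial attempts are drawn at random, reaching listed and unlisted
# numbers alike.
print([random_digit_number() for _ in range(5)])
```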
But Ms. Selzer is an outlier.
Random-digit dialling once formed the bedrock of polling. As recently as 2016, half of American adults had working landlines. Today, nearly three-quarters use cellphones alone. In 2020, just 6 per cent of polls used random dialling (this year, the number is likely lower still).
Nearly two-thirds of 2020 polls were conducted on the internet.
But some methods of online opinion gathering can produce deeply questionable results. In 2022, an experimental online poll by the Pew Research Center found that 12 per cent of respondents under 30 claimed to be licensed operators of nuclear submarines.
Pew itself has shown the considerable cost and time it takes to conduct more rigorous sampling. Its National Public Opinion Reference Survey, conducted annually, is a foundational benchmark used by many polling organizations to ground their assumptions about the political leanings of U.S. voters.
To collect it, Pew randomly selects addresses, then mails them surveys in which two $1 bills are visibly tucked inside the envelope. If that’s not enough to produce a response, a follow-up letter is sent with a visible $5 bill. Those who fill out the survey are paid a further $10. The entire process of collecting responses takes months.
Using that method, “we can see that we’re getting a better cross-section of the public,” said Scott Keeter, a senior survey adviser at Pew.
But for most election-related polling, that kind of survey “is probably not feasible because of the cost,” Mr. Keeter said.
That means, he said, that in an electorate whose political loyalties are divided almost evenly, it is tough to estimate how the actual vote will break.
“It becomes very difficult to have confidence that you can really nail these close elections,” he said.