The Guardian - AU
Josh Nicholas and Tory Shepherd

The polls were off in Australia’s election – but it’s the uniformity that has experts really asking questions

Some of Queensland’s new Labor MPs elected in a huge win that few pollsters saw coming. Photograph: Jono Searle/AAP

Pollsters correctly had Labor in front going into Saturday’s election, but every poll underestimated Labor’s support on both the primary and two-party-preferred measures.

While the final results are within the stated margin of error for some of the polls, experts are worried about something else: across all of the polls, the results are too uniform.
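
For reference, the “margin of error” a poll states is usually the sampling error for an estimated proportion at 95% confidence. A minimal sketch of that calculation, assuming a hypothetical poll of 1,500 respondents:

    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        """95% margin of error for a proportion p estimated from n respondents."""
        return z * math.sqrt(p * (1 - p) / n)

    # Hypothetical example: a poll of 1,500 people putting Labor's
    # two-party-preferred vote at 52% carries roughly a 2.5-point margin.
    print(f"±{margin_of_error(0.52, 1500) * 100:.1f} points")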

“They all exaggerated the Coalition. They all underestimated Labor. They all exaggerated One Nation and so on. All of them,” says Murray Goot, an emeritus professor of politics at Macquarie University.

“And there are 10 of these polls. Roughly [taken] at the same time.”

How the polls played out

A simple average of the final polls put Labor’s two-party-preferred vote at 52.3% and its primary vote at 31.6%. On the current count, these are smaller misses than at the 2019 election, when pollsters incorrectly showed Labor ahead.
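
The “simple average” here is just the unweighted mean of each pollster’s final published estimate; a sketch with invented two-party-preferred numbers (not the real final polls):

    # Invented final two-party-preferred estimates for Labor from several
    # pollsters, for illustration only.
    final_tpp = [52.0, 52.5, 52.0, 53.0, 52.0]

    average = sum(final_tpp) / len(final_tpp)
    print(f"Labor two-party-preferred average: {average:.1f}%")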

But all of the polls this election were wrong in the same direction.

Several pollsters have told Guardian Australia the record number of votes for third parties and the record level of soft and undecided voters made their jobs more difficult – but say most were not too far off the mark in the end.

“Essential, along with most polls, accurately picked up that the trend was moving towards Labor during the course of the campaign,” says Peter Lewis, the executive director of Essential, who runs Guardian Australia’s Essential poll.

“Additionally our methodology of including undecideds means that with the final 4.8% that declared ‘unsure’ the week before the poll leaning Labor, the polling captured the momentum, if not the final ‘landslide’.”

The RedBridge director, Kos Samaras, says there was “a lot of heavy backgrounding” that the public polls were wrong. RedBridge’s final poll had Labor at 53% to 47%, and it was showing Labor was doing well in key seats and in Queensland.

“We were recording pretty big numbers for Labor and I thought maybe that was an aberration,” he says.

“Clearly it was not.”

The ‘herding’ effect

There should be some “noise” in polling, with estimates bouncing around. A month before election day, some on social media were already questioning whether the polls were too stable. Election watchers raised similar concerns in 2019.

Adrian Beaumont, an election analyst at the Conversation, suspects “herding”, when pollsters consciously or unconsciously adjust their results to match those published by their competitors, so they won’t be singled out if they are wrong.

Guardian Australia is not accusing any of the pollsters of herding in this campaign.

In the 2019 election, there was such a small spread among the polls that the Nobel prize-winning astrophysicist Brian Schmidt calculated the odds of it being chance at greater than 100,000 to one.
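
That kind of check can be approximated with a simulation: if several polls independently sample the same true vote share, sampling noise alone sets a floor on how much their results should spread. A rough Monte Carlo sketch, with assumed sample sizes and a one-point definition of “too uniform” (an illustration, not Schmidt’s actual calculation):

    import numpy as np

    rng = np.random.default_rng(0)

    TRUE_SHARE = 0.52    # assumed true two-party-preferred vote share
    N_POLLS = 10         # ten final polls, per the article
    SAMPLE_SIZE = 1500   # assumed respondents per poll
    TRIALS = 100_000

    # Simulate TRIALS campaigns, each with N_POLLS independent polls.
    hits = rng.binomial(SAMPLE_SIZE, TRUE_SHARE, size=(TRIALS, N_POLLS))
    estimates = hits / SAMPLE_SIZE

    # Spread between the highest and lowest poll in each simulated campaign.
    spreads = estimates.max(axis=1) - estimates.min(axis=1)

    # How often does pure sampling noise make all ten polls land within
    # a single point of one another?
    print(f"P(spread < 1 point) = {(spreads < 0.01).mean():.5f}")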

“The polls were afraid of showing a Labor victory by a landslide margin,” Beaumont says of the polls this year. “That’s why they were out – the polls understated Labor’s vote.

“If you go back a couple of weeks, Roy Morgan had Labor winning 55.5% of the two-party vote. But then in the week before the election they came back down to 53%. They stayed at 53 in the final poll, which was published on the Friday before the election. If they hadn’t herded they may well have been the most accurate of pollsters.”

In response to questions, Roy Morgan says there were “no changes in sampling and methodology” over the final weeks of the campaign, except to stop survey respondents nominating candidates who were not running in their electorates after the final candidate list was announced.

“This is standard practice for pollsters during election campaigns,” Roy Morgan’s poll manager, Julian McCrann, says. “If there was any ‘herding’ it was towards us – we led the pack and picked up the swing to the ALP well before any other pollster.”

McCrann also points out that several projections now show the final result will be about 54-46, “which is closer to a 53-47 result [final published Roy Morgan Poll] than a 55.5-44.5 result”.

Method mixes

Goot dismisses the idea there could have been a “late swing” towards the Labor party in the days after the polls finished collecting data. There were five polls conducted in the final day or two before the election, he says, and they were no more accurate than earlier polls.

“I can’t speak for other pollsters, but as far as Essential is concerned there is no herding by us,” Lewis says.

“We ran double samples through the campaign but stuck to the methodology which we disclose through the Australian Polling Council.”

There’s a lot of art to polling, including assumptions about how preferences will flow and choices about how to “weight” survey samples so they match the population at large.

Throughout the campaign, some election analysts showed how much these methodological assumptions matter by recalculating published polls using preference flows from the last election rather than respondents’ stated preferences.

“There was a fair discrepancy [among polls] depending on what you did,” Goot says.

“In one case it was a difference of at least two percentage points between going with one [method] or another.”

But there is no consensus on which method is more accurate.
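
The two approaches can be made concrete: a poll’s primary votes are converted to a two-party-preferred figure either by applying the preference flows recorded at the previous election or by applying the flows respondents themselves report. A minimal sketch with invented primary votes and flow rates:

    # Invented primary votes and preference-flow rates, for illustration
    # only; each flow is the share of that minor party's preferences that
    # ends up with Labor.
    primaries = {"ALP": 0.32, "LNP": 0.33, "GRN": 0.13, "ONP": 0.08, "OTH": 0.14}
    flows_last_election = {"GRN": 0.85, "ONP": 0.35, "OTH": 0.50}
    flows_respondent_stated = {"GRN": 0.80, "ONP": 0.30, "OTH": 0.45}

    def labor_tpp(primaries: dict, flows: dict) -> float:
        """Labor's two-party preferred: primary vote plus minor-party flows."""
        return primaries["ALP"] + sum(primaries[p] * f for p, f in flows.items())

    print(f"Last-election flows: {labor_tpp(primaries, flows_last_election):.1%}")
    print(f"Respondent-stated:   {labor_tpp(primaries, flows_respondent_stated):.1%}")

With these invented inputs the two methods land almost two percentage points apart, the size of gap Goot describes.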

Sample sourcing

Goot thinks herding was possible in this election – but “without transparency it’s difficult to know”. Either way, he says, the industry has more fundamental problems – such as whether survey samples cover everyone who should be included, and how pollsters handle “nonresponse”: the people unable or unwilling to participate.

Modern polling samples aren’t randomly drawn from the population. Rather, companies get access to “panels” of people from online databases. These databases are put together from a variety of sources, including loyalty programs, but we know very little about them.

“What we suspect is they contain a very small percentage of all possible people that could be in it, and that should be in it,” Goot says.

“[Pollsters] all say that they’ve got the best selection to draw on, but one possibility is that most of them go to the same source, and that doesn’t have all that many people in it. Some of the people answering the polls may be in more than one [poll].”

Lewis says sourcing samples is a “challenge”, but that Essential’s outreach team “work hard to minimise the need for weighting”.

McCrann says Roy Morgan interviews about 1,500 Australians each week. “And that is via multi-mode interviewing including online, telephone and face-to-face interviewing and we aren’t using the same databases of any rival pollster,” he says.

How does weighting work?

Polling companies request a spread of genders, ages and locations for their panels, but not everyone responds, so pollsters proportionately scale up or down the answers of those who do – a process called weighting.

But weighting relies on the assumption that those who do and don’t respond to surveys are roughly similar. If this isn’t true, or the number of responses is very small, it can introduce other issues.

“If, for example, the young people who respond are going to vote Green in reasonably large numbers, and the young people that don’t respond are going to vote Green in much smaller numbers, then if you weight you’re going to exaggerate the Green vote,” Goot says.
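
Goot’s example can be put in numbers: weighting scales each group’s respondents up to that group’s true population share, which only recovers the right answer if respondents and non-respondents in the group vote alike. A toy sketch, with all figures invented:

    # Toy electorate: young voters are 30% of the population but only 10%
    # of the poll's sample, so their answers are weighted up threefold.
    # All numbers invented for illustration.
    population_share = {"young": 0.30, "older": 0.70}

    # Green support among those who RESPOND to polls versus each group as
    # a whole (respondents plus non-respondents).
    green_among_respondents = {"young": 0.40, "older": 0.10}
    green_in_population = {"young": 0.25, "older": 0.10}

    weighted_estimate = sum(
        population_share[g] * green_among_respondents[g] for g in population_share
    )
    true_vote = sum(
        population_share[g] * green_in_population[g] for g in population_share
    )

    print(f"Weighted poll estimate: {weighted_estimate:.1%}")  # 19.0%
    print(f"Actual Green vote:      {true_vote:.1%}")          # 14.5%

Weighting can only rescale the people who answered, so it amplifies whatever bias the young respondents carry.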

Even after the 2019 polling failure there is little transparency, but Goot believes the Australian Polling Council, formally established in 2020, was a “good step” towards it.

Still, he notes, “not all the pollsters are members. And members don’t have to disclose very much.

“They have to tell us what factors they weight by, but not how they do this. They have to put up their questions. They don’t tell us anything much about sampling, response rates or any of the other things that can go wrong with the sample.”
