It was, by a mile, the most polled election in British history. It was also, thanks to constituency surveys and whizzy spreadsheet models, the political event that gave rise to more precise predictions than any before. Precision, however, is not the same thing as accuracy. By the early hours of 8 May this year – as the results rolled in – it was clear that the ubiquitous hung parliament forecasts were all wrong: David Cameron had won a majority.
A plea in mitigation can be made for the pollsters. They had, after all, correctly predicted the collapse of the Lib Dem share, the historic SNP surge and the new strength of Ukip, three developments outwith the ordinary range which nobody would have seen coming without the number-crunchers. There is, however, no appealing against the guilty verdict. The single most important statistic in any election, the number that determines who gets to govern, is the gap between the first- and second-placed party. And here the pollsters were not merely out, but out by a wide margin, and to an eerily uniform degree.
The pollsters have since been trying to rekindle interest in their tainted product by tinkering on every front – experimenting with filtering more aggressively for likelihood to turn out, say, or increasing the quotient of shy Tories. Through such tweaks they may be able to retrofit the right result on to their final surveys in May, but it is all ad hoc. Until the mechanism that produced the bias is pinned down, there can be no guarantee that fixes dreamed up to produce the right answer from this year’s data won’t make things worse next time around. Until now, however, the pollsters have had one obvious rejoinder to the charge of recooking their data in the light of the results – namely, in the absence of any fresh evidence, what else are we supposed to do?
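To make the first of those tweaks concrete, here is a minimal sketch in Python of the two usual ways of handling likelihood to vote – a hard filter and a soft weighting. The 0–10 scale, the threshold of 8 and the toy responses are illustrative assumptions, not any pollster’s published scheme.

```python
# Illustrative only: two common ways of handling likelihood to vote.
# The 0-10 scale, the threshold of 8 and the toy data are assumptions,
# not any named pollster's actual methodology.

respondents = [
    {"vote": "con", "ltv": 9},   # ltv: self-reported likelihood to vote, 0-10
    {"vote": "lab", "ltv": 5},
    {"vote": "con", "ltv": 10},
    {"vote": "lab", "ltv": 8},
    {"vote": "other", "ltv": 3},
]

def filtered_share(sample, party, threshold=8):
    """Hard filter: count only those at or above the turnout threshold."""
    likely = [r for r in sample if r["ltv"] >= threshold]
    return 100 * sum(r["vote"] == party for r in likely) / len(likely)

def weighted_share(sample, party):
    """Soft alternative: weight every respondent by likelihood to vote."""
    total = sum(r["ltv"] for r in sample)
    return 100 * sum(r["ltv"] for r in sample if r["vote"] == party) / total

print(f"Con, filtered: {filtered_share(respondents, 'con'):.0f}%")
print(f"Con, weighted: {weighted_share(respondents, 'con'):.0f}%")
```

The filter discards the doubtful entirely; the weighting keeps them but shrinks their influence. On the same raw responses the two can pull a headline figure several points apart, which is why the choice of tweak matters so much.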
But now at last – and only six months after the event – a new poll has emerged which got this year’s election right. The British Election Study interviewed almost 3,000 adults face to face soon after the vote, and correctly gauged the all-important margin of Conservative victory. That is a very different result from what the pollsters got when they recontacted their own pre-election samples after polling day, only to find, in most cases, that a decisive Tory victory margin continued to elude them. The most plausible reading is that something went right in the face-to-face BES sampling that had gone wrong in all the pre-election internet and phone polls. It looks as if a large slice of the population – comprising the quietly Conservative, and those too apathetic to vote at all – are disinclined to join internet panels that regularly ask for their opinions, and probably disinclined, too, to speak to pollsters who ring them at home.
It might seem an argument for doing away with all the hi-tech techniques and returning to the era of the man with the clipboard. That, however, may not be practical, since harvesting data online is now so easy compared with the vast effort of knocking on thousands of doors. But the BES provides a better basis than we’ve had until now for deciding between the various tweaks. It also provides a powerful reminder of the value of random sampling, a principle that refined online methods might be able to mimic. There may be cost implications, but so be it. For if there is one lesson from May, it is surely that the quantity of data counts for nothing at all until the quality is assured.
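That closing claim is easy to demonstrate in a few lines of code. What follows is a purely illustrative simulation – not the BES’s method, nor any pollster’s – assuming a notional electorate with a true seven-point Conservative lead, in which Conservative supporters are only slightly less inclined than everyone else to join an online panel. Every number in it is an assumption made for the sake of the demonstration.

```python
# Purely illustrative: why a small random sample can beat a huge opt-in panel.
# All figures below are assumptions for the demonstration, not real polling data.
import random

random.seed(2015)

POPULATION = 1_000_000
TRUE_CON, TRUE_LAB = 0.37, 0.30          # assumed true vote shares (7-point lead)
JOIN_PROB = {"con": 0.04, "lab": 0.05, "other": 0.05}  # assumed panel-joining rates

# Build the notional electorate.
electorate = (["con"] * int(POPULATION * TRUE_CON)
              + ["lab"] * int(POPULATION * TRUE_LAB)
              + ["other"] * int(POPULATION * (1 - TRUE_CON - TRUE_LAB)))

def lead(sample):
    """Conservative lead over Labour, in percentage points."""
    return 100 * (sample.count("con") - sample.count("lab")) / len(sample)

# 1. A large self-selected panel: each voter opts in with a probability that
#    depends on their politics, so the panel stays biased however big it grows.
panel = [v for v in electorate if random.random() < JOIN_PROB[v]]
print(f"Opt-in panel  (n={len(panel):>6}): lead {lead(panel):+.1f} points")

# 2. A small simple random sample, BES-sized: every voter is equally likely
#    to be chosen, so the estimate is unbiased, give or take sampling error.
srs = random.sample(electorate, 3_000)
print(f"Random sample (n={len(srs):>6}): lead {lead(srs):+.1f} points")

print(f"True lead:               {100 * (TRUE_CON - TRUE_LAB):+.1f} points")
```

Under these assumed numbers, the forty-odd-thousand-strong panel reports something close to a dead heat – the hung parliament of the pre-election forecasts – while the 3,000-strong random sample lands within a few points of the true lead. It is the differential willingness to be polled, not the size of the sample, that decides the answer.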