Performance measurement in government is unlikely to make the hearts of the electorate beat faster. But it is a means to an important end: knowing whether the government has achieved what it set out to do.
In 2010, the coalition introduced a new system of performance measurement, with each department agreeing a list of actions (such as legislation it would pass and policies it would implement) and a set of impact indicators. The prime minister said these indicators were part of “the basic information that the public needs to hold government to account”. And the communities secretary, Eric Pickles, said he wanted to mobilise an army of armchair auditors as he opened the Department for Communities and Local Government’s (DCLG) books to public scrutiny. But our new Institute for Government report shows why the troops have not enlisted.
The 207 indicators – across 17 government departments – include how many people are on particular types of benefit (Department for Work and Pensions), the number of new housing starts (DCLG) and emergency admissions (Department of Health). Our report shows that since 2010, all of the Department of Energy and Climate Change’s indicators have moved in the right direction, while only two of the Ministry of Defence’s have. DWP and DCLG were both in the top five, while – with 50% of its indicators moving in the right direction – the DH was bang in the middle. Overall, more than half of impact indicators have moved in the right direction since 2010.
But some departments have not published comparable data for a significant number of their indicators. And using the Number 10 Transparency website – designed as a portal for the public to access and use the data – it would be difficult, if not impossible, to find the scores in a usable, open format.
Our report assesses how useful the indicators are under three headings: as data (the raw material), as information (explaining what the numbers mean) and as evidence (whether the system is actually being used). On the data front, there were problems with both availability and quality.
Some departments are very good at explaining and presenting what the numbers actually mean. DCLG, for example, has a dashboard for each indicator, explaining where the data comes from, what it shows and what increases and decreases actually mean. Many others, however, offer no explanation at all. The lack of benchmarks or baselines in nearly all cases also makes understanding and comparison difficult.
In many cases, there is only a tenuous link between the impact indicators and the actions departments committed to implementing. For example, many of the DH's actions focused on reorganising the health service – while this might have an overall beneficial effect on health, few actions directly targeted specific impacts (such as increasing life expectancy).
As for evidence, we examined whether different audiences – the public, departments themselves, and ministers – could use, or were using, the impact indicators. We concluded that the plans appeared to have lost any political link, and that many departments were using other measures instead. As for the public, the lack of availability and the poor quality of the data pose a barrier.
If the government wants to recruit an army of auditors, it needs to improve the accessibility and quality of the data it publishes. For some of the data we sought for this report, publishing it at all would be a start; publishing it in a timely way, with details of sources and caveats, would be better still.
Better information about what these impact indicators mean – how they relate to the actions departments are undertaking, what baselines and benchmarks performance should be judged against, and why these measures matter – would also greatly help public understanding and use of the data.
Measuring outcomes is difficult. But whatever government is formed after the election will need to be clear about what it wants to achieve, how it is going to achieve it, and how it will measure progress.