If there's one set of people busier than those producing university rankings, it's the academics poring over the university rankings and drawing conclusions about the conclusions. The latest to cross our bows is from Canada's Educational Policy Institute.
There's plenty of interest in there - some things we'd disagree with in their characterisation of the data we use (they get it wrong in places, but let's not get peevish about this) and much we're happy with. In the end, it's not that different to any academic report on these things - the same rather strong feeling that they don't much like rankings.
Most in the university sector don't, and most find it easy to criticise and mock. The report quotes Marc Chun's assertion of a few years ago that compilers using available data are like a drunk looking for his house keys under the streetlight, because that's where the light is best. That's fair enough (though to think I bought that man a coffee), but looking for something in the dark is not terribly productive either. And in the end, it's the universities who've switched the lights off - if there's no universally agreed measurement system produced from within, or with the cooperation of, the sector, then there's going to be some disagreement. It should hardly be beyond the wit of universities to produce more meaningful data, something they would prefer us to use as a measuring tool.
Of the report's conclusions, it's hard to disagree with one thought - the future is digital. Online data is likely to lead to the aggregation of data and to different kinds of rankings. The only trouble is that this will mean data even less fit for purpose being chucked into the rankings machinery. And if universities are worried now, at least they are dealing with the devil they know - and they know where to find us ...