Rankings of anything seem very good at attracting attention, and the simpler they are, the more effectively they do so. If you have ever told a clever joke and then been called upon to explain it, you will understand what I am referring to: by the time your audience has understood the joke, it has ceased to fulfil its primary purpose – to make people laugh.
There is a great deal of chatter online at the moment – speculation about what forthcoming rankings might look like, what will be included and what won’t. The new THE/Thomson exercise and the CHERPA project being run through the European Commission are generating particular speculation. The premise on which both of these projects are being discussed is that existing rankings do not fairly measure every aspect of university quality, nor do they recognise the differing nature and structure of different institutions.
Any ranking operating at a global level will be constrained by the quality and quantity of the data available and by the opinions of its designers and contributors. The worrying trend at the moment is that two underlying assumptions seem to be taking hold in this discussion:
- There is a “perfect solution” – or at least one that will meet with dramatically higher acceptance than those already put forward; and
- The stakeholders in rankings are like lemmings and will automatically accept the conclusions of any one ranking, or the average of all the rankings they consider respectable.
The CHE is at the opposite end of the scale from the Shanghai and QS methodologies – it gathers masses of data from Germany and the surrounding countries but doesn’t actually rank institutions or aggregate indicators. Its argument, and perhaps it is a valid one, is that it is not for the ranker to decide what represents quality in the mind of the average stakeholder – particularly students. Fair enough, but, broadly speaking, the more prescriptive rankings are not making this assertion either. To my knowledge, neither Shanghai Jiao Tong nor QS has ever asserted that their results should be used as the only input to important decisions – responsibility for such decisions remains with the individual making them.
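For readers unfamiliar with what “aggregating indicators” involves, the sketch below (in Python, with invented institutions, indicator names and weights – it is not the actual Shanghai Jiao Tong or QS methodology) shows the basic mechanics: each indicator is normalised to a common scale and then combined as a weighted sum, and it is the choice of weights that embodies someone’s judgement about what quality means. The CHE approach, by contrast, stops at the individual indicators and leaves the weighting to the reader.

```python
# Minimal sketch of composite-score aggregation.
# Hypothetical institutions, indicators and weights, for illustration only.
universities = {
    "University A": {"reputation": 72.0, "citations_per_staff": 4.1, "staff_student_ratio": 0.09},
    "University B": {"reputation": 55.0, "citations_per_staff": 6.3, "staff_student_ratio": 0.12},
    "University C": {"reputation": 88.0, "citations_per_staff": 2.8, "staff_student_ratio": 0.07},
}

# The weights are the ranker's judgement about what matters, not a measured fact.
weights = {"reputation": 0.5, "citations_per_staff": 0.3, "staff_student_ratio": 0.2}

def normalise(scores):
    """Rescale raw indicator values to a 0-100 scale (min-max normalisation)."""
    lo, hi = min(scores.values()), max(scores.values())
    return {name: 100 * (value - lo) / (hi - lo) for name, value in scores.items()}

# Normalise each indicator across institutions, then combine as a weighted sum.
normalised = {
    indicator: normalise({uni: vals[indicator] for uni, vals in universities.items()})
    for indicator in weights
}
composite = {
    uni: sum(weights[ind] * normalised[ind][uni] for ind in weights)
    for uni in universities
}

# Print the resulting league table, highest composite score first.
for rank, (uni, score) in enumerate(sorted(composite.items(), key=lambda x: -x[1]), start=1):
    print(f"{rank}. {uni}: {score:.1f}")
```

Change the weights and the order changes – which is precisely why the question of who sets them, and for whom, matters.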
The focus of new developments seems to be on working to the needs and demands of the institutions being evaluated, rather than addressing the needs of the people using the rankings. Developing a completely fair and even-handed evaluation that only compares like with like – if such a thing is even possible – will become exponentially complex. It would involve tens, perhaps even hundreds, of distinct indicators, each engineered in deep technical ways to correct for discipline bias, cultural variety, financial environment, response rate, institutional typology, focus and age. However transparent the approach is intended to be, its complexity will serve to cloud understanding, and the time involved in retrieving and understanding the results may be off-putting.
So the assumptions above seem flawed. It would be irresponsible of any ranking or evaluation to suggest that it is sufficiently complete to be the sole source of data for effective decision making, and the increased complexity will promote that illusion rather than allay it. Frankly, the vast majority of people referring to existing rankings are shrewd enough not to treat them as more than a single input to their decision-making process.
I am, personally, looking forward to seeing what emerges from some of these new projects, but in order to achieve some of their boldly stated objectives they are likely to have to sacrifice simplicity – which is not necessarily in the interests of the user.