by Ben Sowter
In a recent article in Inside Higher Ed, Philip Altbach, commenting on the latest set of rankings from THE, said: “Why do Bilkent University in Turkey and the Hong Kong Baptist University rank ahead of Michigan State University, the University of Stockholm, or Leiden University in Holland? Why is Alexandria University ranked at all in the top 200? These anomalies, and others, simply do not pass the ‘smell test.’ Let it be hoped that these, and no doubt other, problems can be worked out.”
I would like to explore this notion of a “smell test” a little further because, in reality, it seems to be the single factor that defines the global credibility of any of these evaluations in the eyes of their many observers worldwide.
Having just returned from the QS-APPLE conference in Singapore, I have had the good fortune to hold a broad range of interesting discussions with representatives of Asian universities – many of whom are amongst the world’s hungriest for the results of rankings when they come around each year. Certainly there is a proliferation of studies beyond the four international exercises to which Altbach alludes in his post. An approximate and, I’m sure, incomplete chronology is as follows:
- 1998 – CHE Excellence Ranking (CHE Centre for Higher Education Development)
- 2003 – Academic Ranking of World Universities (ARWU) (initially Shanghai Jiao Tong University now ShanghaiRanking Consultancy)
- 2004 – QS World University Rankings® (Quacquarelli Symonds)
- 2004 – Webometrics Ranking Web of World Universities (Cybermetrics Lab)
- 2005 – 4icu.org University Web Ranking
- 2007 – HEEACT (Higher Education Evaluation and Accreditation Council of Taiwan) – Performance Ranking of Scientific Papers for World Universities
- 2007 – Mines ParisTech Professional Ranking of World Universities
- 2007 – Leiden Ranking (Centre for Science and Technology Studies, Leiden University)
- 2009 – Global Universities Ranking (Independent Rating Agency RatER)
- 2009 – SCImago Institutions Rankings (SIR)
- 2010 – Times Higher Education World University Rankings
- 2010 – High Impact Universities (developed by faculty members at the University of Western Australia)
- 2011 – U-Multirank feasibility study results (CHERPA Alliance, funded by the European Commission)
Naturally all of these exercises yield different, if overlapping, results, and some get closer to passing the “smell test” than others. The trouble with this smell test, however, is that it is personal to the tester, which explains why different rankings enjoy different levels of acceptance and prestige in different places. The truth is that, for most observers, the validity of a methodology has very little to do with its philosophy, transparency, purpose or accuracy, and everything to do with whether the universities in the observer’s domain appear in an order that makes some sort of sense.
In some cases this has prompted the emergence of new studies – it seems that the Mines ParisTech and RatER exercises may have set out specifically to highlight aspects in which universities from their originating countries perform well – aspects on which prior exercises may not have placed as much emphasis.
Even the most sophisticated observer will tend to take the results in their specific context, rather than the methodology, as a signal of validity. Take (South) Korea as an example. The latest rankings from QS present a top three of SNU, KAIST and POSTECH, where the Shanghai interpretation shows SNU, KAIST and Korea U, and the inaugural THE results say POSTECH, KAIST and SNU. A Korean observer may assume that the most valid methodology is the one that best reflects domestic perception – if my information is up to date, the Joong Ang Daily newspaper’s ranking (the most widely referenced domestic ranking in Korea) yields a top three of KAIST, POSTECH and SNU. This domestically oriented “smell test” would perhaps then encourage the observer to take the closest matching ranking as the best yardstick in other locations. In this case, the best match would be THE.
If we choose another context, though – Hong Kong, perhaps – the picture is different. The domestically held viewpoint, based on anecdotal conversations with representatives of the top six institutions, seems to see HKU in the top spot, HKUST and Chinese U vying for second and third, City U and Poly U vying for fourth and fifth, then Baptist U, then others. Here, our latest results show a top five of HKU, HKUST, Chinese U, City U and Poly U, where Shanghai shows Chinese U, Poly U, HKUST, HKU and City U, and THE says HKU, HKUST, Baptist U and Poly U, with no mention of Chinese U or City U. So here the domestic “smell test” brings out QS as the closest match.
We could do this for a dozen rankings in a dozen countries, and in each we would likely get a different result. International ranking organisations find themselves in a difficult position, trying to defend their methodological philosophy when the face validity of their results in a given context is such a strong influence on an observer’s perception.
I once had a conversation with a student who expressed astonishment that Berkeley could be ranked ahead of Macquarie University in Australia, as his experience as an international student at the latter had been so much more enriching and positive than the one he had at the former. Were the positions reversed, many people would express equal astonishment.
My advice to all observers – be they students, academics, university leaders or employers – is to study what is being measured and how that maps to what you consider important; and if you must use face validity as a key input to your choice, ensure you cast the net wider than the context with which you are personally familiar. Ultimately, all of these systems use different measures for different purposes, and a combination of these with other research may be the best way to develop a picture of comparative quality that is personal to you.