The twelfth edition of the QS World University Rankings is now online.
We pride ourselves on keeping the Rankings methodology as stable as possible, so that the results provide a genuine year-on-year comparison of the world’s top universities. But this year we have made a few improvements to our methodology, one of them especially important.
The most significant change concerns our measure of academic paper citations per faculty member, which accounts for 20 per cent of each university’s possible Rankings score. As before, we have used five years of publications data from the Scopus database as the foundation for this figure. However, we have long recognised that this approach favours institutions with a substantial commitment to the Life Sciences and Medicine, which account for 49 per cent of the citations in Scopus.
We have now addressed this problem by equalising the effect of citations across the five main areas of academic life: Life Sciences and Medicine, the Natural Sciences, Engineering and Technology, the Arts and Humanities, and the Social Sciences. We term this “normalisation”. We have also introduced a correction factor to allow for the fact that some important research in the Arts, Humanities and Social Sciences is not published in English and is therefore less likely to be cited. This change was warmly backed by the prestigious Global Advisory Board for the Rankings.
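The idea of equalising each area’s contribution can be illustrated with a small sketch. This is not QS’s actual formula — the area names, world totals, and the equal-share averaging are all assumptions for illustration — but it shows how normalisation stops one citation-heavy area from dominating the indicator:

```python
# Illustrative sketch only (assumed details, not QS's actual method):
# express a university's citations in each broad area as a share of that
# area's world total, then average the shares so every area counts equally.

AREAS = [
    "Life Sciences and Medicine",
    "Natural Sciences",
    "Engineering and Technology",
    "Arts and Humanities",
    "Social Sciences",
]

def normalised_citation_score(citations_by_area, world_totals_by_area):
    """Each area contributes equally to the final score, regardless of
    how many citations that area generates worldwide."""
    shares = [citations_by_area[a] / world_totals_by_area[a] for a in AREAS]
    return sum(shares) / len(shares)

# Hypothetical figures: medicine dominates raw world citations, but a
# university performing equally well (1% of each area's total) scores
# the same in every area after normalisation.
world = {"Life Sciences and Medicine": 4900, "Natural Sciences": 2000,
         "Engineering and Technology": 1500, "Arts and Humanities": 600,
         "Social Sciences": 1000}
uni = {"Life Sciences and Medicine": 49, "Natural Sciences": 20,
       "Engineering and Technology": 15, "Arts and Humanities": 6,
       "Social Sciences": 10}

print(normalised_citation_score(uni, world))  # 0.01 — 1% in every area
```

Without normalisation, the same university’s raw citation count would be driven mostly by its medical output, since that area supplies nearly half of all citations.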
QS Intelligence Unit director Ben Sowter says that for top universities with the research capacity to invest across most or all faculty areas, this new approach will not have a massive effect. But those with a heavy emphasis on medicine stand to lose out, while others with strengths in the Arts and Social Sciences are likely to do well from this change. See the accompanying article for a quick look at the results, or examine the Rankings for yourself to see how this new approach has worked out in practice.
A further improvement involves our two annual surveys, one of active academics around the world and one of recruiters, which together account for half of each university’s possible Rankings score. This year, 76,798 academics and 44,226 recruiters around the world took part. In the past, we have counted only the latest response from any one respondent within the previous three years. If you responded both a year ago and two years ago, for example, only last year’s response would be used. We still follow this rule. But we now also use data that is four or five years old, weighting these responses at half or a quarter, respectively, of more recent ones. Again, this older material is used only if the same person has not responded more recently.
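The rule above — keep each respondent’s most recent response within five years, at full weight up to three years old, half weight at four, and a quarter at five — can be sketched as follows. The data layout and function names are assumptions for illustration, not QS’s actual code:

```python
# Illustrative sketch (assumed details): de-duplicate survey responses
# per respondent, keeping the newest, then weight by age as described.

def weight_for_age(years_old):
    """Full weight for responses up to 3 years old, half at 4 years,
    a quarter at 5 years; anything older is discarded."""
    if years_old <= 3:
        return 1.0
    if years_old == 4:
        return 0.5
    if years_old == 5:
        return 0.25
    return 0.0

def weighted_responses(responses):
    """responses: list of (respondent_id, years_old, vote) tuples.
    Returns {respondent_id: (vote, weight)} using only each person's
    most recent response, dropping responses older than five years."""
    latest = {}
    for rid, age, vote in responses:
        if rid not in latest or age < latest[rid][0]:
            latest[rid] = (age, vote)
    return {rid: (vote, weight_for_age(age))
            for rid, (age, vote) in latest.items()
            if weight_for_age(age) > 0.0}
```

For example, someone who responded both one and two years ago contributes only last year’s response at full weight, while a respondent whose only response is four years old contributes it at half weight.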
As well as adding stability to the ranking, this change improves its consistency. It means that we are using five years of data both for our surveys and for our citations measure. In addition, we have always normalised responses from the different areas of university life in our academic survey, which provides a further rationale for normalising the citations data.
by Martin Ince