The methodology behind the World University Rankings draws on six robust measures that encapsulate the principal activities of global higher education.
These measures are unchanged for the new 2015/16 Rankings. But as we explain here, the use we make of the data we collect has improved markedly this year.
The first two of these measures involve asking informed people to identify the high points of the world university system. We do this by means of two annual surveys: one of active academics around the world, and one of graduate recruiters. The academics are asked to name their subject and to identify up to 30 of the world's top universities in that field, although they tend to nominate a median of about 20. They cannot vote for their own institution. The employers are asked to name the subject or subjects in which they recruit graduates, and where they like to recruit them. These two measures account for 40 percent and 10 percent respectively of each institution's possible score in this ranking.
These are the largest surveys of their kind; this year they involved 76,798 academics and 44,226 recruiters around the world who completed the survey in enough detail to provide valid and usable data. The academic survey covers all subjects, and the recruiter survey takes in a wide range of public and private sector employers.
For 2015/16 we have improved the depth of these surveys by making more use of historic data. In the past, we counted only the latest response from any one respondent within the previous three years: if you responded a year ago and two years ago, for example, only last year's response would be used. We are still following this rule. But in addition, we now also use data that is four or five years old, weighting these votes at half or a quarter, respectively, of more recent ones. Again, this material is only used if the same person has not voted more recently. As well as adding stability to the ranking, this change improves its consistency: it means we are using five years of data both for our surveys and for our citations measure, described below.
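A minimal sketch of this rule, assuming survey responses arrive as (respondent, age-in-years, votes) records; the data layout and function name are illustrative, not QS's actual pipeline:

```python
def weighted_survey_votes(responses):
    """Collapse survey responses to one weighted response per person.

    `responses` is a list of (respondent_id, age_in_years, votes)
    tuples -- a hypothetical layout, not QS's actual data model.
    """
    latest = {}  # respondent_id -> (age, votes) of their most recent response
    for respondent_id, age, votes in responses:
        if age > 5:
            continue  # responses older than five years are discarded
        if respondent_id not in latest or age < latest[respondent_id][0]:
            latest[respondent_id] = (age, votes)  # keep only the newest

    weighted = []
    for age, votes in latest.values():
        if age <= 3:
            weight = 1.0    # within three years: full weight
        elif age == 4:
            weight = 0.5    # four years old: half weight
        else:
            weight = 0.25   # five years old: quarter weight
        weighted.append((votes, weight))
    return weighted
```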
The next measure we use is intended to find out whether universities have enough staff to teach their students. It is the ratio of faculty members to students and accounts for 20 percent of each institution’s possible score. This measure is unchanged from previous years.
Two indicators to which we apply a lower weighting are also unaltered from 2014, and indeed from the entire history of the QS rankings. Worth five percent each of a university's possible score, these are our measures of internationalization, gauged from each university's percentages of international faculty and international students. These measures show how serious a university is about being global. But in addition, they are an indirect indicator of quality: if a university is attracting staff and students from across the world, it is probably doing something right in terms of its research and teaching.
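Taken together with the citations measure described in the next section, the six indicators combine into a single weighted score. A minimal sketch of that arithmetic, assuming each indicator has already been scaled to a common 0-100 range (that scaling assumption, and the names used, are illustrative, not QS's specification):

```python
# Fixed indicator weights as stated in this article.
WEIGHTS = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,  # covered in the next section
    "international_faculty": 0.05,
    "international_students": 0.05,
}

def overall_score(indicator_scores):
    """Weighted sum of the six indicator scores (each assumed 0-100)."""
    return sum(WEIGHTS[name] * indicator_scores[name] for name in WEIGHTS)
```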
Rationalising citations
The biggest change to this year's Rankings applies to the measure that makes up the final 20 percent of each institution's possible score: citations per academic faculty member. This indicator looks radically different this year because we have introduced a system to compensate for the large volume of citations generated by researchers in the life sciences and, to a lesser degree, in the natural sciences. The need for this process, which we term normalization, is apparent when one considers that the medical sciences account for 49 percent of the citations in the Scopus database used in these rankings but only 14 percent of university students (the student figure is for the UK). By contrast, the arts and humanities make up nearly 30 percent of students but only one percent of citations, because of their very different publishing culture.
We believe that it is right to correct for this bias at the faculty level, in other words in terms of the arts and humanities; the social sciences, including management; the natural sciences; engineering and technology; and the biomedical sciences. We have normalized the weight of these five areas in our academic survey since its creation in 2004.
The normalization process works by weighting the citations from each of these areas at 20 percent of the total.
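A minimal sketch of one way to apply that 20 percent rule, assuming a university's citation counts can be split across the five faculty areas (the data layout and function are illustrative; the article specifies only the equal weighting itself):

```python
FACULTY_AREAS = [
    "arts_humanities", "social_sciences", "natural_sciences",
    "engineering_technology", "biomedical_sciences",
]

def normalized_citation_score(uni_citations, global_citations):
    """Give each faculty area a fixed 20 percent of the citation weight.

    `uni_citations` maps area -> this university's citations in it;
    `global_citations` maps area -> total citations in that area.
    Both inputs are hypothetical; the rescaling is the point.
    """
    score = 0.0
    for area in FACULTY_AREAS:
        total = global_citations[area]
        if total:
            # the university's share of the area, worth 20% of the score,
            # so citation-heavy fields cannot dominate
            score += 0.20 * uni_citations[area] / total
    return score
```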
But even this reform does not recognize the full variation in academic publishing patterns around the world. In the arts and humanities and in the social sciences, a large amount of research is not published in English and does not appear in journals, reducing its chance of appearing in Scopus's citations database. We allow for this by further adjusting the citations in these two areas, but not the other three, in line with the publishing pattern of each university's home country, as reflected in the percentage of that country's Scopus papers that fall in these two fields.
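The exact adjustment formula is not published here, so the sketch below is purely illustrative: it scales up citations in the two affected areas where the home country's share of Scopus papers in that area falls below the world-wide share, leaving the other three areas untouched.

```python
ADJUSTED_AREAS = ("arts_humanities", "social_sciences")

def country_adjusted_citations(uni_citations, country_share, world_share):
    """Illustrative country-level adjustment, not QS's published formula.

    `country_share` and `world_share` map each area to the fraction of
    Scopus papers falling in it for the home country and world-wide
    respectively -- hypothetical inputs for illustration.
    """
    adjusted = dict(uni_citations)
    for area in ADJUSTED_AREAS:
        if country_share[area] > 0:
            adjusted[area] *= world_share[area] / country_share[area]
    return adjusted
```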
Finally, the data we use will continue to cover five years of the Scopus database. But it will no longer credit citations where the paper has more than ten affiliated institutions. We feel that this thin level of participation is not worth acknowledging. This change cuts out only 0.34 percent of Scopus papers.
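A minimal sketch of this filter, assuming each Scopus record carries a list of affiliated institutions (the record layout is hypothetical):

```python
MAX_AFFILIATIONS = 10

def citable_papers(papers):
    """Drop papers listing more than ten affiliated institutions.

    `papers` is a list of dicts with an "affiliations" list -- an
    assumed record layout, used here only for illustration.
    """
    return [p for p in papers if len(p["affiliations"]) <= MAX_AFFILIATIONS]
```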
by Martin Ince