From a certain perspective, the work we do at a discipline level ought to be easy. After all, we don’t seek data directly from institutions to compile our rankings by subject, which removes a major data collection and validation overhead. However, the scale of the output is, in our terms, vast. Our main ranking aggregates performance across six indicators for just over 800 institutions and thus comprises around 5,000 individual processed data points; by contrast, our rankings by subject use up to four indicators in 36 subjects for up to 400 published results. All in all, the full analysis involves well over 40,000 processed data points.
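For those who like to check the arithmetic, here is a rough back-of-envelope in Python, treating the figures above as upper bounds rather than exact counts:

```python
# Back-of-envelope check of the data-point figures quoted above.
# The exact indicator counts vary by subject; these are the upper
# bounds mentioned in the text, so real totals sit below the products.

main_ranking = 6 * 800             # 6 indicators x just over 800 institutions
print(main_ranking)                # 4800 -> "around 5,000" processed data points

# Rankings by subject: up to 4 indicators, 36 subjects, up to 400 results each
subject_upper_bound = 4 * 36 * 400
print(subject_upper_bound)         # 57600 -> consistent with "well over 40,000"
```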
Picking out trends, calibrating the approach, and identifying issues is a major effort, and one which, I must confess, we underestimated in 2015.
In the coming days we will be releasing fact file information for the new version of the results prior to publication on April 29, and we expect to be similarly beset by questions as to how the results have been formed, what has changed since the previous fact files we distributed, what can be inferred from year-on-year performance, and so forth. We’re aiming to give ourselves a little more time to get back to institutions with answers to their specific questions, but the most frequently asked question is likely to be: what has changed since the previous version?
A substantial majority of institutions have been remarkably constructive and supportive, despite previous results appearing, in some cases, to be a dramatic downward departure from the previous year. The feedback has been precise, intelligent and constructive, with many very specific observations which have been invaluable in our process rebuild. The international forum we ran in Sydney last month was one of the most engaging events I have had the pleasure to attend. I personally experienced a surprising degree of empathy; there seemed to be a genuine understanding that this is, and has been, pioneering work, and that it is deeply complex. It also provided us with an invaluable opportunity to listen to genuine experts in their fields about what we are doing and how it could be improved, above and beyond any observed concerns about this edition.
We are committed to maintaining an active dialogue with as many stakeholders as possible and deeply appreciate the volume and nature of the feedback we have received. We have listened, and we have taken the opportunity not only to identify and address some issues with this year’s edition but also to introduce some further refinements based on that feedback, which I feel genuinely improve the work.
Our advisory board have also been supportive of the refinements.
The five key changes since the previously distributed, but unpublished, version have been:
- The reintroduction of a regional weighting component in our survey analysis which had been inadvertently omitted
- The refinement of our analysis of the Scopus bibliometric database to address an issue where, in some instances, we had been counting articles only under the first subject to which they were assigned (a minimal sketch of the corrected counting appears after this list)
- The adjustment of weightings in a further six subjects, making a total of nine subjects with modified weightings in 2015, typically in favour of the citations and H measures; these changes are supported by the higher volumes of content we have been able to retrieve from Scopus in 2015
- The reinstatement of a threshold of 10 papers for English, and the elevation of paper thresholds in Politics and History, reflecting the higher volumes of research we are now taking into account
- The extension of our academic and employer survey samples to five years, with the two earlier years weighted at 25% and 50% respectively (one possible reading of this scheme is sketched below). This stabilizes some of the subjects with lower levels of response and increases our total survey samples for this exercise to 85,062 academics and 41,910 employers
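To make the Scopus fix above concrete, here is a minimal sketch in Python of counting an article towards every subject its ASJC codes map to, rather than only the first. The ASJC-to-subject fragment and the paper records are illustrative placeholders, not drawn from our actual map or data:

```python
from collections import defaultdict

# Hypothetical fragment of an ASJC-code-to-subject map; Scopus assigns
# one or more ASJC codes per article.
ASJC_TO_SUBJECT = {
    1702: "Computer Science",
    2604: "Mathematics",
    3312: "Sociology",
}

papers = [
    {"id": "p1", "asjc": [1702, 2604]},   # cross-categorized article
    {"id": "p2", "asjc": [3312]},
]

counts = defaultdict(int)
for paper in papers:
    # Count the paper under EVERY subject its ASJC codes map to,
    # not just the first one (the behaviour being corrected).
    subjects = {ASJC_TO_SUBJECT[c] for c in paper["asjc"] if c in ASJC_TO_SUBJECT}
    for subject in subjects:
        counts[subject] += 1

print(dict(counts))  # e.g. {'Computer Science': 1, 'Mathematics': 1, 'Sociology': 1}
```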
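And a minimal sketch of one reading of the five-year survey window, assuming the three most recent years of responses count in full while the two oldest are down-weighted (oldest at 25%, second-oldest at 50%). The year labels and response counts here are hypothetical:

```python
# Assumed per-year weights under the five-year window described above;
# the actual scheme may differ in detail.
YEAR_WEIGHTS = {2011: 0.25, 2012: 0.50, 2013: 1.0, 2014: 1.0, 2015: 1.0}

def weighted_responses(responses_by_year: dict[int, int]) -> float:
    """Effective sample size after applying the per-year weights."""
    return sum(YEAR_WEIGHTS.get(year, 0.0) * n
               for year, n in responses_by_year.items())

# A subject with thin recent coverage still retains a usable sample:
print(weighted_responses({2011: 400, 2012: 300, 2013: 200, 2014: 150, 2015: 100}))
# 0.25*400 + 0.5*300 + 200 + 150 + 100 = 700.0
```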
Once the fact files are distributed we will make ourselves available to answer specific enquiries, and we are currently scheduling some dedicated webinars to explain the developments in more detail; these will be announced soon. We have already made some changes to our methodology pages, updated response levels, weightings and paper thresholds, and published our map of the ASJC codes used to allocate Scopus content to subjects. Read more here.