
Rankings, data, tables and spin

Nothing brings out the creative instincts of universities like a new set of research assessments. After 24 hours of rankings, tables and spin, John O'Leary looks at the new data from HESA, which allow us to measure intensity and reflect on the true state of the research rankings.

John O’Leary edits The Times and Sunday Times Good University Guide and has covered every Research Assessment Exercise, either for The Times or The Times Higher Education Supplement.

Nothing brings out the creative instincts of universities like a new set of research assessments. Already we know that Oxford came out top in the first Research Excellence Framework…and Cambridge…and UCL…and Imperial.

The free-for-all is encouraged by the refusal of the funding councils to produce any institutional summaries, and by the inexplicable gap of 10 hours between the publication of the REF results and the release of the official figures for the number of academics who were eligible for assessment.

As a result, the first tables could take no account of intensity. Only today, when interest has cooled among the non-specialist media, are all the data available to make the kind of comparisons that were inherent in every Research Assessment Exercise, until the last one.

When the percentage of eligible academics entered is factored in to reflect research intensity, the Institute of Cancer Research has the best Grade Point Average (GPA), with Cambridge the top university. Throughout the table, the order is quite different from the rankings calculated on research power alone or on a GPA that takes no account of intensity.
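For readers who want to see how such an adjustment works, the sketch below assumes the common approach of multiplying an institution's raw GPA by the proportion of eligible staff it submitted; the compilers' exact weighting may differ, and the names and figures are purely illustrative, not the actual REF returns.

```python
# A minimal sketch of an intensity-weighted GPA. The adjustment assumed here,
# multiplying the raw GPA by the proportion of eligible staff submitted, is one
# common approach; the names and figures below are illustrative only.

def intensity_weighted_gpa(gpa, staff_submitted, staff_eligible):
    """Scale a raw GPA by research intensity (share of eligible staff entered)."""
    intensity = staff_submitted / staff_eligible
    return gpa * intensity

# Hypothetical institutions: (raw GPA, FTE staff submitted, FTE staff eligible).
institutions = {
    "University A": (3.33, 2050, 2160),  # enters about 95% of eligible staff
    "University B": (3.36, 1250, 1560),  # higher raw GPA, lower intensity
}

for name, (gpa, submitted, eligible) in institutions.items():
    adjusted = intensity_weighted_gpa(gpa, submitted, eligible)
    print(f"{name}: raw GPA {gpa:.2f}, intensity {submitted / eligible:.0%}, "
          f"weighted GPA {adjusted:.2f}")
```

As the hypothetical figures show, a university with the higher raw GPA can fall behind once a lower submission rate is taken into account.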

There is no suggestion that the other claims to have achieved the best results are fraudulent. Oxford says that it has the “largest volume of world-leading research”. UCL says that it is “Number 1 for research strength”, multiplying its GPA by its number of entrants. And Imperial claims the highest proportion of world-leading or internationally excellent (4* and 3*) work, as well as the “greatest concentration” of high-impact research.

Previous league tables have factored in intensity to avoid the distortions of size in comparisons of research power, and the sometimes misleading effect that different approaches to selectivity of entry have on the raw data published by the funding councils. Yesterday’s cause célèbre was the nine researchers in English at the University of Bedfordshire, who produced a higher proportion of 4* research than the 88 entered by Oxford.

In 2008, neither the funding councils nor the Higher Education Statistics Agency (HESA) published figures for the potential entry for assessment, but estimates produced by league table compilers made Cambridge the top university for research intensity. Today’s table, using HESA data published this morning, shows the same outcome.

Imperial College, which had a higher GPA before allowing for intensity, is second, with University College London third among non-specialist institutions. Oxford and Bristol tie for fourth place, despite occupying quite different positions in the tables of research power – in those, Oxford came top and Bristol ninth.

The HESA data confirm very different strategies in all types of university over the proportion of academics entered for the REF. Cambridge, for example, entered 95 per cent of eligible academics but Oxford only 87 per cent, a lower proportion than Imperial, UCL, Bristol, Southampton or Queen’s University Belfast.

King’s College London, which was seventh in the Research Fortnight table of research power, drops to 17th in today’s table because it entered only 80 per cent of its eligible staff. Several Russell Group universities entered fewer still – Liverpool only 70 per cent, which leaves it sharing 46th place in the new table.

Naturally, the newer, teaching-intensive universities entered much lower proportions of eligible staff – less than 10 per cent at Buckinghamshire New University, Cumbria and Southampton Solent, which appear at the bottom of the table. Coventry, which produced the best scores in this year’s National Student Survey, is in the bottom 10 for research intensity.

There are two anomalies in today’s table. According to the HESA data, both the University of the Highlands and Islands (UHI) and Staffordshire University entered 100 per cent of their eligible staff and appear in 10th and 41st places respectively. In UHI’s case, the high proportion of part-time staff makes it difficult to produce accurate full-time equivalents, while HESA says that Staffordshire’s figures reflect the small number of academics on research or research-and-teaching contracts.

However the tables are compiled, they appear to show a strengthening of the leading London institutions at the expense of those in the North of England. Imperial and UCL are mounting a stronger challenge to Oxford and Cambridge than in previous research assessments.

The funding councils may find it difficult to align their allocations with the Government’s Research and Innovation Strategy, which was published almost simultaneously with the REF results. The strategy’s emphasis on “grand challenges” and large concentrations of research power will be difficult to square with the funding councils’ stated intention to reward excellence wherever it is found.

Find our sector results table of REF 2014 with intensity here.

5 responses to “Rankings, data, tables and spin”

  1. The REF submission strategies reflect different views of the purpose of the exercise. Universities such as Cardiff (5th on GPA and 50th on GPA adjusted for intensity) seem to take the view that the REF is merely a marketing opportunity to try to attract students. They will struggle, however, to convince the right people that they are truly research-intensive universities.

  2. Interesting that so many universities lower themselves to mislead the public. Imperial College, for example, are claiming to have “come top” in two units of assessment and yet they actually came top in none. A remarkable reflection of their ability to play the REF game but not to excel in any discipline. Also sad that this should follow so soon after the revelations that Professor Grimm was hounded to suicide in the quest for metrics over common decency.

  3. One issue with this analysis is that it (essentially) treats research from unsubmitted staff as “unclassified”, giving it a score of 0. This is clearly unrealistic, and overly penalises institutions that submitted slightly lower proportions of staff.

    There may be a more robust methodology for producing a league table that accounts for the proportion of staff submitted, and tries to estimate what the ranking would be if all institutions had submitted all of their staff: rank on the percentage of 4* (world-leading) research. It is then much more reasonable to assume that unsubmitted staff would not have had 4* outputs, and that institutions did not withhold 4* impact case studies they actually had (see the sketch after these comments).

    (Arguably you should also leave the environment score as fixed, as this is likely to be unaffected by the number of staff submitted).
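By way of illustration only, the sketch below contrasts the two treatments described in the third comment: counting unsubmitted staff as unclassified (a score of 0), which is effectively what an intensity-weighted GPA does, versus estimating the share of 4* work across all eligible staff. The quality profile and staff numbers are invented, and the REF’s weighting of outputs, impact and environment is ignored for simplicity.

```python
# An illustrative comparison of the two treatments of unsubmitted staff
# described in the comment above. The quality profile and staff numbers are
# invented, and the REF weighting of outputs, impact and environment is ignored.

def gpa(profile):
    """Grade point average of a quality profile {star rating: share of work}."""
    return sum(stars * share for stars, share in profile.items())

# Hypothetical institution: quality profile of the work it submitted, plus
# full-time-equivalent staff submitted and eligible.
profile = {4: 0.40, 3: 0.45, 2: 0.13, 1: 0.02}  # shares sum to 1
submitted, eligible = 800, 1000
intensity = submitted / eligible

# Treatment 1: unsubmitted staff count as unclassified (score 0), which is
# effectively what an intensity-weighted GPA does.
gpa_counting_zeros = gpa(profile) * intensity

# Treatment 2 (the commenter's suggestion): rank on the share of 4* work,
# assuming only that unsubmitted staff produced no 4* outputs.
four_star_share_all_staff = profile[4] * intensity

print(f"Intensity-weighted GPA:          {gpa_counting_zeros:.2f}")
print(f"Estimated 4* share of all staff: {four_star_share_all_staff:.0%}")
```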
