Is the REF worth the trouble for “non-leading” universities?

8th November 2021

Authors

Richard Croucher

Professor, Middlesex University

Paul Gooderham

Professor, Middlesex University

As professors at Middlesex University Business School, we have long researched the Research Excellence Framework (REF).

In a recent paper*, we asked: is there a relationship between the REF and the annual national rankings of UK universities?

Three national rankings of UK universities are published annually: The Complete University Guide, the Guardian (GRG) and the joint Times and Sunday Times tables. All three tables’ results are highly correlated. We selected the GRG to test our question because it has no research component, allowing a ‘cleaner’ analysis of the impact of changes in research performance. The GRG uses eight criteria, each weighted between 5 and 15 percent, and the overall measure is divided between inputs and outputs.

On the output side, it uses student perceptions of feedback, job prospects, overall quality, teaching quality and ‘value added’ (outputs overall = 55 percent). On the input side, it includes entry scores, spending per student and faculty-student ratios (inputs overall = 45 percent). The ‘value added’ measure compares students’ degree results with their entry qualifications.
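
To make the weighting scheme concrete, the sketch below computes a GRG-style composite score from a set of hypothetical criterion weights. The specific numbers are placeholders chosen only to respect the constraints described above (eight criteria, each weighted between 5 and 15 percent, outputs summing to 55 percent and inputs to 45 percent); they are not the guide’s published weights.

```python
# Illustrative sketch of a GRG-style composite score.
# The weights are hypothetical placeholders that respect the stated
# constraints: eight criteria, each weighted 5-15 percent, with outputs
# summing to 55 percent and inputs to 45 percent.

OUTPUT_WEIGHTS = {            # outputs = 55%
    "student_feedback": 0.10,
    "job_prospects": 0.10,
    "overall_quality": 0.10,
    "teaching_quality": 0.10,
    "value_added": 0.15,
}
INPUT_WEIGHTS = {             # inputs = 45%
    "entry_scores": 0.15,
    "spend_per_student": 0.15,
    "faculty_student_ratio": 0.15,
}

def composite_score(scores: dict[str, float]) -> float:
    """Weighted sum of criterion scores (each assumed pre-normalised to 0-100)."""
    weights = {**OUTPUT_WEIGHTS, **INPUT_WEIGHTS}
    return sum(weights[k] * scores[k] for k in weights)

# Example: a hypothetical institution scoring 70 on every criterion
example = {k: 70.0 for k in {**OUTPUT_WEIGHTS, **INPUT_WEIGHTS}}
print(composite_score(example))  # 70.0, since the weights sum to 1
```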

Our research was based on changes between the 2008 and 2014 REFs, whose core design was similar. We used Research Power (RP) as our measure of institutional performance. Our overall purpose was to examine whether universities that improved their RP between 2008 and 2014 also experienced improvements in their GRG rankings, or whether the rankings reflect legacy effects and are impervious to changes in RP. If the latter, it is questionable whether non-leading universities should adopt a resource-intensive strategy geared to improving REF research performance. The implication would be that “non-leading” universities might be more effective if they directly targeted selected components of the GRG, such as teaching quality.
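
Research Power is commonly calculated as quality multiplied by volume: an institution’s REF grade point average times the full-time-equivalent staff it submitted. The toy calculation below assumes that common definition; the figures are hypothetical.

```python
# Hedged sketch assuming the common definition of Research Power (RP):
# REF grade point average (quality) multiplied by FTE staff submitted (volume).
def research_power(gpa: float, fte_submitted: float) -> float:
    """RP = quality (GPA on the 0-4 REF scale) x volume (FTE staff submitted)."""
    return gpa * fte_submitted

# Hypothetical example: a GPA of 3.1 across 50 submitted FTE staff
print(research_power(3.1, 50.0))  # 155.0
```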

Controlling for university size, faculty cost per full-time academic, total capital (non-faculty) expenditure, and Russell Group membership, we found that changes in RP between 2008 and 2014 were associated with changes in GRG rankings over the same period. So universities that improve their RP can expect their GRG rank to change. However, that finding has to be tempered by the considerable stability we observed, not only in the upper GRG rankings but also in RP. Change in GRG rankings mostly occurred outside the ‘leading’ universities. Thus, none of the five universities whose RP rankings declined most between 2008 and 2014 was ranked in the upper quintile by the 2008 GRG; in that sense, none was a ‘leading institution’. All of these universities’ GRG rankings also declined. Among the universities with the greatest improvements in RP ranking, none had a leading GRG ranking in 2008, nor did they in 2014.
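
For readers who want to see the shape of such a model, the sketch below fits a regression of change in GRG rank on change in RP with the controls listed above. The data are simulated and the variable names are shorthand of our own; it illustrates the form of the analysis rather than reproducing the paper’s specification, dataset or results.

```python
# Minimal sketch of the kind of regression described above, fitted to
# simulated data with hypothetical variable names; not the paper's
# actual specification, dataset or results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120  # illustrative number of institutions

df = pd.DataFrame({
    "delta_rp": rng.normal(0, 1, n),         # change in Research Power, 2008-14
    "size": rng.normal(0, 1, n),             # university size (control)
    "faculty_cost": rng.normal(0, 1, n),     # faculty cost per full-time academic (control)
    "capital_spend": rng.normal(0, 1, n),    # total capital (non-faculty) expenditure (control)
    "russell_group": rng.integers(0, 2, n),  # Russell Group membership (control)
})
# Simulated outcome: change in GRG rank driven partly by change in RP
df["delta_grg_rank"] = 0.4 * df["delta_rp"] + rng.normal(0, 1, n)

model = smf.ols(
    "delta_grg_rank ~ delta_rp + size + faculty_cost + capital_spend + russell_group",
    data=df,
).fit()
print(model.summary())
```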

We also explored whether the uppermost quintile, and the top 39 institutions (that quintile plus the next 19 highest-ranked universities), were more or less volatile (heterogeneous) than the rest in their RP and GRG performance. We found much more stability (and less heterogeneity) within the top quintile and the top 39 than among the others. We obtained similar results whether we divided universities according to their 2008 RP positions or their 2008 GRG positions.
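
One simple way to operationalise that comparison, sketched below on hypothetical data, is to compare the spread of rank changes within the top 39 with the spread among the remaining institutions.

```python
# Sketch of the volatility comparison described above, on hypothetical data:
# compare the spread of rank changes inside and outside the top 39.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 120  # illustrative number of institutions

df = pd.DataFrame({
    "grg_rank_2008": np.arange(1, n + 1),
    # Hypothetical rank changes, drawn with larger spread further down the table
    "rank_change": rng.normal(0, np.linspace(1, 10, n)),
})

top_39 = df["grg_rank_2008"] <= 39
print("SD of rank change, top 39:", round(df.loc[top_39, "rank_change"].std(), 2))
print("SD of rank change, rest:  ", round(df.loc[~top_39, "rank_change"].std(), 2))
```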

In sum, while both correlation and regression analyses suggest that GRG rankings and RP are predominantly stable over time, changes in RP are associated with changes in GRG rankings. Our analysis of where variability and volatility are located indicates that radical change in RP is a feature of middle-ranking universities. Thus, our analysis supports the notion that the UK contains two distinct types of university. On the one hand, there are leading institutions whose RP and GRG rankings changed little between 2008 and 2014. On the other, there are non-leading institutions that are significantly more volatile in their RP and GRG rankings, including some cases of dramatic variability. Our analysis also underlines the difficulty non-leading institutions have, regardless of changes in their research performance, in breaking into the leading-institution category as measured by the GRG.

While one should exercise extreme caution in extrapolating from our data, it seems a tall order to expect that improvements or declines in research performance will, in themselves, result in substantial changes to the leading GRG rankings. However, with the new REF results set to be made available soon, we will be able to make a more valid judgement. It may indicate that, for ‘non-leading’ universities, an improved RP score affects their GRG ranking but is unlikely to be sufficient to enable them to break out of the ‘non-leading’ stratum. Hence, the GRG rankings of ‘leading institutions’ are not obviously threatened by the research performance improvements of ‘non-leading institutions’.

Should, then, managers of ‘non-leading’ universities abandon trying to improve their universities’ REF performance? Given the path dependency of research performance and the resources involved in preparing for and participating in the REF, this may be tempting. However, there are dangers in doing so. Universities experiencing marked declines in their research performance generally suffered in their GRG rankings, which is unlikely to inspire consumer confidence. Furthermore, dramatic improvements in research performance do generally enhance GRG rankings, and therefore the prestige of these universities relative to their more immediate competitors.