Introduction

The report's author's introduction to the 2016 Ranking.

This report presents results for the fifth annual ranking of national systems of higher education undertaken under the auspices of the Universitas 21 (U21) group of universities. Our national rankings complement the plethora of rankings of institutions. The rationale for national rankings is that it is the higher education system as a whole, not only its research-intensive universities, that matters for the economic and cultural development of a nation. Different institutions will contribute in different ways to achieving national objectives; they should not all be judged by the same criteria.

Some 50 countries are ranked overall and in each of four areas: Resources, Environment, Connectivity and Output. The Resources and Environment modules contain input measures; outcomes are measured in the Output and Connectivity modules. By examining the relationship between inputs and outcomes, our work provides measures of productivity and insights into ways of improving outcomes.

Resources, whether public or private, are a necessary condition of a well-functioning system of higher education, but they are not sufficient: a well-designed policy environment is needed to ensure that resources are used well. A consensus is emerging that the preferred environment is one where institutions are allowed considerable autonomy tempered by external monitoring and competition. The Environment module contains measures of these characteristics, which might be called 'state variables'.

Turning to outcomes, our output measures encompass attributes such as participation rates, research performance, the existence of some world-class universities, and the employability of graduates. There is a worldwide trend for governments to encourage institutions of higher education to strengthen relationships with business and the rest of the community. In the European literature this is frequently referred to as 'the third mission' -- in addition to teaching and research. Elsewhere (de Rassenfosse and Williams, 2015) we have argued that connectivity is a wider concept that covers not only engagement with industry but also activities such as the movement of students across international borders and international research links. The Connectivity module includes variables that span this wider concept.

An important aim of our work is to permit countries to benchmark performance against other countries at similar stages of development.  In order to facilitate these comparisons, we present estimates of a country’s performance relative to its level of GDP per capita.  These adjusted estimates complement our main measures of performance. 

Our methodology is set out in detail in Williams, de Rassenfosse, Jensen and Marginson (2013) and in the reports published on the U21 website (www.universitas21.com).  There are 25 variables in total.  A description of each variable is given in the relevant section below and sources are given in Appendix 1.  For each variable the best performing country is given a score of 100 and scores for all other countries are expressed as a percentage of this highest score.
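The scoring rule described above is a simple max-100 normalisation. The sketch below illustrates it with made-up country names and raw values (the figures are not taken from the actual rankings):

```python
def normalise(scores):
    """Rescale raw scores so the best-performing country gets 100
    and every other country gets a percentage of that maximum."""
    best = max(scores.values())
    return {country: 100 * value / best for country, value in scores.items()}

# Illustrative raw values for a single variable (hypothetical data).
raw = {"Country A": 80, "Country B": 60, "Country C": 40}
print(normalise(raw))  # Country A -> 100.0, Country B -> 75.0, Country C -> 50.0
```

The same rule is applied independently to each of the 25 variables before weights are applied.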

Changes in Data and Methodology from the 2015 Rankings

Most data are now published under the new ISCED 2011 education classification. This contains three levels for schooling, one level for post-secondary non-tertiary, and four levels for tertiary education: short-cycle tertiary (level 5), Bachelor’s (level 6), Master’s (level 7) and Ph.D. (level 8).  In our ranking, the only variable altered by this change is the measure of diversity of institutions, which is now based on the number of students in all tertiary institutions, not just universities. The new definition of diversity is given in section 3.2.

In the Resources module, the OECD data for public expenditure on institutions of higher education in the United Kingdom have been revised upwards to cover previous omissions of funding through the HEFCE. Following this correction, OECD estimates of total expenditure on tertiary education in the United Kingdom have risen from 1.2 per cent of GDP in last year's rankings to 1.8 per cent in this year's rankings. Data for Australia now include capital expenditure.

We have attempted to improve estimates of expenditure financed from private sources where no official figures exist. The difficulty in obtaining such estimates is that private expenditure in many countries is related only loosely to the mix of public and private universities. Nevertheless, approximate values have been obtained based on the percentage of enrolments that are private, the percentage of enrolments in the lower-cost short courses (ISCED 5), and country information on tuition fees paid in both public and private institutions. Estimates constructed in this manner give a more accurate ranking than our previous method of using first-quartile or median values obtained from countries where data exist.

In the Connectivity module, the Webometrics measure of full-text files on the web is now sourced from Google, whereas in the previous year it was based on Google Scholar. To smooth the change, we average over the two years (after scaling each series so that its maximum value equals 100).
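The smoothing step can be sketched as follows; the country names and values are purely illustrative, not the actual Webometrics figures:

```python
def scale_to_100(series):
    """Scale a series so its maximum value equals 100."""
    best = max(series.values())
    return {k: 100 * v / best for k, v in series.items()}

def smoothed(series_prev, series_curr):
    """Average two years' values after scaling each year to a maximum of 100."""
    prev = scale_to_100(series_prev)
    curr = scale_to_100(series_curr)
    return {k: (prev[k] + curr[k]) / 2 for k in curr}

# Hypothetical raw counts: previous year (Google Scholar based) and
# current year (Google based) are on different scales before rescaling.
web_prev = {"Country A": 50, "Country B": 40}
web_curr = {"Country A": 30, "Country B": 45}
print(smoothed(web_prev, web_curr))
```

Scaling each year to a common maximum of 100 before averaging prevents the change of source from shifting the overall level of the series.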

There has been no change in the weights we use. It follows that, apart from the effect of better data (which we flag), changes in rankings represent real changes.

New to this ranking are measures of productivity and of drivers of research performance.