The author's introduction to the 2017 Ranking.


This report presents the results for the sixth annual ranking of national systems of higher education undertaken under the auspices of the Universitas 21 (U21) group of universities.  The national ranking of systems complements the many international rankings of universities.  The rankings of institutions are essentially rankings of research-intensive universities and as such are encouraging a bias in systems of higher education towards that type of institution.  A good system of higher education will encompass a range of institutions.  The need for a diverse system is cogently argued by the former tertiary education co-ordinator at the World Bank, Jamil Salmi (2017, p.237):

At the end of the day, the best tertiary education systems are not those that boast the largest number of highly ranked universities.  Governments should worry less about increasing the number of world-class universities and dedicate more efforts to the construction of world-class systems that encompass a wide range of good quality and well-articulated tertiary education institutions with distinctive missions, able to meet collectively the great variety of individual, community and national needs that characterise dynamic economies and healthy societies.

We use 25 measures of performance grouped into four modules:  Resources, Environment, Connectivity and Output.  The first two are input measures and the second pair measure outcomes. For each variable the best performing country is given a score of 100 and scores for all other countries are expressed as a percentage of this highest score.  A description of each variable is given in the relevant section below and sources are given in Appendix 1.  Our methodology is set out in detail in Williams, de Rassenfosse, Jensen and Marginson (2013).
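The normalisation described above can be sketched in a few lines. This is an illustrative example only; the country labels and raw values are invented, not drawn from the report.

```python
def normalise(raw_scores):
    """Scale raw scores so the best-performing country receives 100
    and every other country a percentage of that highest score."""
    best = max(raw_scores.values())
    return {country: 100 * value / best for country, value in raw_scores.items()}

# Illustrative raw values for a single variable (e.g. expenditure as % of GDP):
expenditure = {"A": 4.0, "B": 3.0, "C": 1.0}
scores = normalise(expenditure)
# Country A scores 100.0, B scores 75.0, C scores 25.0.
```

The same transformation is applied variable by variable before the module and overall scores are formed.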

Resources, whether public or private, are a necessary condition for a well-functioning system of higher education, but they are not sufficient.  A well-designed policy environment is needed to ensure that resources are used well.   A consensus is emerging that the preferred environment is one where institutions are allowed considerable autonomy tempered by external monitoring and competition.  The Environment module contains measures of these characteristics.  

Turning to outcomes, our output measures encompass attributes such as participation rates, research performance, the existence of some world-class universities, and the employability of graduates.  There is a world-wide trend for governments to encourage institutions of higher education to strengthen relationships with business and the rest of the community. The Connectivity module includes variables which span this wider concept (see de Rassenfosse and Williams (2015)).

Our work extends well beyond ranking.  Countries can benchmark performance over a range of attributes, noting strengths in some areas, weaknesses in others.  To permit countries to benchmark performance against other countries at similar stages of development, we also present estimates of a country’s performance relative to its level of GDP per capita.   In a new initiative we present national rankings for the three main activities of tertiary institutions: research, teaching and connectivity/engagement.

Changes in Data and Methodology from the 2016 Rankings

The 2017 ranking incorporates a new measure of the diversity of institutions, comprising two equally-weighted components.  The new measure recognises more fully that a good system of higher education provides a range of institutions to meet differing student and national needs.  The first component measures the mix of public and private institutions.  We argue that a mixed system promotes competition and innovation.  In previous rankings the public–private mix was measured on a three-point scale based on the percentage of students enrolled in private institutions: less than 10 per cent, between 10 and 50 per cent, and over 50 per cent.  In the 2017 ranking the three-point scale is replaced by a continuous variable: the percentage of students in private universities, capped at 50 per cent.  The new measure avoids the sharply different scores that the old scale could assign to countries with very similar institutional mixes.  For example, under the old scale a country with nine per cent of tertiary students in private institutions would score very much lower than a country with 11 per cent.  While there is no optimal public–private mix, the larger private enrolments are, the less the need for autonomy in public institutions – another measure we use in the rankings.  The private enrolment measure and the measure of autonomy of public institutions therefore need to be considered jointly, which is done in our Environment module.
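The contrast between the old banded scale and the new continuous measure can be sketched as follows. The function names and the band scores assigned to the old scale are illustrative assumptions, not taken from the report.

```python
def old_three_point(private_share):
    """Old banded scale: <10%, 10-50%, >50% of students in private
    institutions (the band scores 1/2/3 are assumed for illustration)."""
    if private_share < 10:
        return 1
    elif private_share <= 50:
        return 2
    return 3

def new_continuous(private_share):
    """New measure: percentage of students in private institutions,
    capped at 50 per cent."""
    return min(private_share, 50)

# Under the old scale, 9% and 11% fall in different bands despite
# an almost identical institutional mix; under the new measure
# they score 9 and 11 respectively.
```

The cap at 50 per cent reflects the view that, beyond an even split, a still larger private share does not indicate greater system diversity.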

The second component of the new diversity index is the percentage of tertiary enrolments that are at Level 5 in the ISCED classification.  The ISCED 2011 classification contains four levels for tertiary education: short-cycle tertiary (level 5), Bachelor’s (level 6), Master’s (level 7) and doctoral (level 8).  Level 5 includes higher technical education, community colleges and advanced vocational training.  The inclusion of this component recognises that for many students Level 5 courses are more appropriate than university study.  These courses are particularly important for developing countries, where they are likely to be as important for economic growth as frontier research at universities.

In the Connectivity module, Webometrics has changed its OPENNESS measure, which is now labelled TRANSPARENCY and is based on Google Scholar citations.  To smooth the transition we average the measure over the two years.

There has been no change in the weights we use.   The quality of data continues to improve each year and in some cases, which we highlight, new data explains shifts in a country’s rank. 

In this year’s ranking we include measures of the three types of activity undertaken by tertiary institutions, sometimes referred to as the triple helix: research, teaching and connectivity.   This cuts across our four-module approach.  This new section is made possible by the collection and publication of national competency scores by the OECD, albeit for only 30 of our 50 countries.