University Rankings' Usefulness Assessed

Guest Writer

Updated January 16, 2020

Martin Ince, convener of the QS Academic Advisory Board, examines a new report on global university rankings from the European University Association.

Last month the European University Association, the representative body for higher education in 47 European nations, produced its report on global university rankings.

Media reports suggest that it is critical of rankings while accepting that they are not going to go away. But what is its real message?

Methodology concerns

Written by Andrejs Rauhvargers of Latvia, the report concedes that students and their advisors find university rankings valuable, and that media and information firms appreciate the interest they generate. For these reasons, rankings, local and global, are certain to continue.

But despite the useful service that rankings provide for students and other audiences, the EUA report has reservations about their value. It begins by pointing out that the criteria used in rankings are chosen and weighted by the rankings compilers, giving them influence over what counts as university quality.

However, rankings compilers might reply that their criteria, and the weightings applied to them, have to come from somewhere. In the case of the QS rankings, the criteria used have been developed over time to be robust and reliable and to reflect as many aspects as possible of university life.

At QS, we also have an active Advisory Board, made up of distinguished academic advisors from around the world who help us to think about these issues.

This misunderstanding of how ranking criteria are developed is in keeping with the report's extraordinary ignorance of QS's World University Rankings. Its author seems not to know that we published the World University Rankings in 2010, the seventh edition in an unbroken series using a comparable methodology.

They have been seen by millions of people around the world, online and in print. (He has, however, noticed our collaboration with US News and World Report, one of our media partners.)

This muddle suggests that, at the very least, the report should be withdrawn in its current form and a corrected version issued. And incidentally, we have never seen our work as the "European answer to ARWU", the Shanghai rankings.

Limited reach?

The other central point made in the report is that world rankings use elitist criteria that embrace only a few per cent of the world's 17,000 universities. As the EUA itself has only 850 members, it might want to be careful about using this argument.

In any case, there would be problems with a ranking that went 17,000 entries deep. What real difference would there be between university 16,000 and university 17,000?

In fact, we have always been clear that these ranking systems are measuring a defined group of competitive world-class universities, not institutions with a local or regional reach.

Anthony van Raan of Leiden University has shown that most of the world's internationally cited research comes from this group of a few hundred universities. It is right to measure these institutions against each other on a global scale, provided that the criteria used allow new entrants to gain admission.

If you want to go deeper, there is always the Webometrics ranking, which runs all the way down to a nine-way tie for 11,996th position.

But the report is right to point out, as others have, that big annual world rankings are now being supplemented by more specialist endeavours, such as the QS World University Rankings by Subject as well as our Asian University Rankings.

However, these initiatives raise questions of their own. One is the reliance of some new learning-centred rankings on student satisfaction: students in the Netherlands, say, are unlikely to want the same things from a university as students in China.

It is problematic to compare the satisfaction of students who come from very different societies around the world. The QS rankings instead use employer opinion, which can be more informative given a sufficiently large and varied sample.

And in the final analysis, the EUA report misses the big point. University leaders, including those who attend EUA meetings, are paid good money to decide what their universities should look like.

They might find rankings helpful in this task, but they cannot delegate the responsibility for shaping their institutions to rankings compilers or anyone else.

This article was first published in 2012 and updated in January 2020.
