A vice-chancellor at an Australian university tries to come to grips with the remarkable variability of that country’s universities on international rankings over the last few years.
September 19th, 2010 at 5:27PM
This used to be the sort of thing we’d entertain ourselves with over drinks on Friday night. Who’s better: John or Paul? Scorsese or Coppola? Wendy’s or Burger King? Harvard or Yale?
We all knew there was no answer. It depended not just on taste, but also on how you weighted the variables: lyrics or tunes; best film or career; fried or grilled; Kennedys or Bushes. So we never took these conversations seriously.
Same thing with all those “best places to live” surveys that always tell us we’d be happiest in Fargo or Minneapolis or something. Low crime is weighted higher than fear of frostbite. But, again, nobody takes these things seriously: it’s not as though North Dakota is growing quickly (or at all).
But somehow academics are just suckers for measuring themselves against one another. Maybe it’s just an overreaction to the difficulty of determining, even within limits, just how good we are at our jobs. We can’t count widgets produced (we can count publications, but the judgments there can be frustratingly qualitative), and we don’t have bosses in the formal sense (sure, our department chair evaluates us, but we all know that she’s just a peer with a summer salary).
So we grab onto these stupid, inconsistent, unsupportable rankings as though they were solid rock. I’m a social scientist, and if one of my grad students turned in a project with measurements this arbitrary and variable weighting this capricious, I would send him back either to the lab or all the way home.
The NRC departmental rankings are coming out at the end of the month. I hope that all deans, provosts, presidents, and–especially–college PR officials lose their internet connections that week…
September 19th, 2010 at 6:14PM
It’s not, of course, the universities that are jumping around – it’s the criteria used in the rankings. Not only do the various rankings (world and national) use different methodologies, but any one of them may change its system from year to year, as the London Times just has. This is true even of fairly straightforward attempts to gauge conventionally defined academic quality: USNWR, for example, at one point shuffled its rankings by giving greater weight to research dollars, raising the standing of top public universities.
The other factor is that below the top tier, a lot of institutions are closely bunched, so small movements in their raw scores can produce large swings in their rank order.
At the top, the rankings are quite stable and clear once the criteria are set. In the US, it’s hard to put any school other than Harvard, Princeton, Berkeley or Cal Tech at the top of the list of universities, or any but Amherst, Williams or Swarthmore leading the liberal arts colleges. You can in effect take your pick, but you can’t really devise plausible criteria that yield answers outside of these groups. Stanford, Yale, Penn, Chicago and Columbia are too much like Harvard, MIT is too much like Cal Tech, all the liberal arts schools are pretty similar, etc.
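To make the arithmetic concrete, here is a toy sketch in Python; the schools, criteria, scores, and weights are all invented for illustration, not drawn from any actual ranking. A modest shift in the weighting completely reverses the order of the three bunched schools, while the clear leader never moves:

```python
# Toy illustration with invented numbers: re-weighting the criteria
# reshuffles closely bunched schools, but the clear leader stays on top.

scores = {
    # school: (research, teaching) raw scores on a 0-100 scale
    "Alpha U": (95, 93),  # well ahead on both criteria
    "Beta U":  (77, 85),
    "Gamma U": (80, 81),
    "Delta U": (83, 78),  # the bottom three are tightly bunched
}

def rank(w_research, w_teaching):
    """Order the schools by their weighted composite score."""
    composite = {
        school: w_research * r + w_teaching * t
        for school, (r, t) in scores.items()
    }
    return sorted(composite, key=composite.get, reverse=True)

# Two plausible weighting schemes, only twenty points apart:
print(rank(0.6, 0.4))  # ['Alpha U', 'Delta U', 'Gamma U', 'Beta U']
print(rank(0.4, 0.6))  # ['Alpha U', 'Beta U', 'Gamma U', 'Delta U']
```

The raw scores never change between the two runs; only the weights do, which is exactly why the year-to-year movement in published rankings says more about the methodology than about the institutions.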