DON AITKIN, a former vice-chancellor, believes there are no bad Australian universities. And when choosing one over another, university rankings and ratings are more likely to mislead than to help.
I READ the news about ANU’s loss of rank in the “Good Universities Guide” with great irritation.
Its vice-chancellor was quite correct to point out that the new low rating made little sense, given that the ANU had scored highly in the same field (teaching quality) in past rankings. How could it have dropped so precipitately in so short a time?
My own view is that these ratings are simply rubbish, and should not be taken seriously by anyone. I feel the same about the world ratings of universities, too, where the ANU usually scores reasonably highly. They are also rubbish.
But first, the teaching scores. These have been based on what students say in what amount to exit surveys. The students are not obliged to complete the surveys, and even if they were, nothing can guarantee that the scores the students give mean anything in particular. They can like or dislike the lecturer, be doing well or poorly, be interested or uninterested, and so on. Oh yes, and the lecturer can be competent or incompetent, interested or uninterested in teaching, absorbed or not absorbed in research, and so on.
I was opposed to these surveys when they were introduced 20 or so years ago. But the Government wanted some “key performance indicators”, and the AVCC helped to set up this particular one, fearing that if we did not, the Government would impose something more stringent.
My first examination of the outcome confirmed my fears. The signal-to-noise ratio was terribly small. I wanted UC, of which I was the VC at the time, to use them internally, to look at differences from year to year in the same courses. And we did that, which was why I was unimpressed with the reliability and validity of the measures.
But to use them across universities, and then average them to show that University A is “better at teaching” than University B, is just silly.
But it is done every year, and produces the same kinds of news stories, right across the country.
What it tells us is that people who like numbers, like numbers. Once they can apply numbers to a quality, or attribute, or process, they can perform all their favourite statistical tests on the numbers, get averages, standard deviations, and “indexes” – and to two decimal places, as well. This happens everywhere, not just in universities, where you would think that common sense, and some knowledge of research methodology, would throw out such bogus findings. But no.
The general ratings of universities are no more sensible, and for a related set of reasons. It all depends on what you measure. What is easiest? Well: income, age of foundation, possession of a medical school, research graduate production, research income.
It happens that all these attributes are inter-related, at least in our country. It was the oldest universities that gained the medical schools, and the big gifts from wealthy graduates, and the big graduate programs, and the early reputations. That has given them a big advantage in these vacuous ratings games.
In my opinion, there are no bad Australian universities, and I once knew them all quite well. The reasons for choosing one over another for your son or daughter will be quite complex. Some intending students would benefit from leaving home and going into residence; some would not; some cannot afford to.
Some know exactly what they want to do; some are unsure; some are going to university because their mates are (a bad reason). All will know a great deal more about themselves and what they should be doing after their first year.
But these ratings ought not to be consulted. They are empty of meaning, and more likely to mislead than to help.
Don Aitkin, political scientist and historian, served as vice-chancellor of the University of Canberra from 1991 to 2002.