Friday, April 18, 2014

The False Science of the Times Higher Education World University Rankings



Anthropologists complain about rationalization and bad quantification, but we generally moan about it to each other and are rarely specific. Here I want to explain what is wrong with the Times Higher Education (THE) World University Rankings, and to argue that the survey is based on bad data and thus produces spurious results. All the results do is confirm loosely held biases, but by presenting those biases in quantitative form, the league tables become more real and powerful than they deserve.

This year, I was again asked to participate in the survey. I received the following email:

Dear Colleague,
Thomson Reuters is pleased to invite you to participate in the annual Academic Reputation Survey, which will support the Global Institutional Profiles Project and Times Higher Education World University Rankings.  You have been statistically selected to complete this survey and will represent thousands of your peers. The scholarly community, university administrators, and students worldwide depend on the survey results, as they provide the most reliable access to the voice of scholars like you.  

A sample survey is available online. I have to commend Thomson Reuters for their transparency on this; most pollsters try to protect themselves from criticism by keeping their questions secret.

At the heart of the questionnaire are four questions, in which we are asked to list the 15 best research universities 1) in our region and 2) in the world, and the 15 best teaching institutions 3) in our region and 4) in the world. This seems straightforward enough, and I am a great fan of "freelisting" (see Bernard’s Research Methods in Anthropology or ANTHROPAC). But people can only list things they know something about.
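
For readers who have not used the technique: a freelist is simply an ordered list of whatever comes to a respondent's mind, and the analyst tabulates how often, and how early, each item appears across respondents. A minimal sketch of the standard tabulation (frequency and Smith's salience, the measure ANTHROPAC reports), in Python with invented data:

```python
# A minimal sketch of freelist tabulation, using invented data.
# Frequency counts how many respondents named an item; Smith's S
# also rewards items that are named earlier in a list.
from collections import defaultdict

freelists = [
    ["Univ A", "Univ B", "Univ C"],           # respondent 1
    ["Univ B", "Univ A"],                     # respondent 2
    ["Univ C", "Univ A", "Univ D", "Univ B"], # respondent 3
]

freq = defaultdict(int)
salience = defaultdict(float)

for lst in freelists:
    n = len(lst)
    for rank, item in enumerate(lst):      # rank 0 = listed first
        freq[item] += 1
        salience[item] += (n - rank) / n   # inverse-rank weight

for item in sorted(salience, key=salience.get, reverse=True):
    print(item, freq[item], round(salience[item] / len(freelists), 2))
```

The method only yields meaningful results when respondents actually know the domain; the measures quantify shared knowledge, not guesses.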

I do not normally think in terms of “best research universities”; I know good scholars at different universities whose work I admire, but in each case I do not know most of their colleagues unless they also publish in areas I am interested in. I may notice that “State University” appears more often than “Podunk U” in authors’ affiliations, but if State is a large school, I may not actually attribute its frequent appearance to “excellence” in research. It may just be because they are big. It may also be because many of their recent PhDs have not yet found permanent jobs, and list their prestigious alma mater as their affiliation rather than the schools where they teach as adjuncts.
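
The size effect is simple arithmetic. A toy illustration in Python, with entirely invented numbers:

```python
# Invented numbers only: raw visibility tracks department size,
# while per-capita output can tell the opposite story.
schools = {
    "State University": (2000, 1200),  # (faculty, papers I happen to see)
    "Podunk U": (150, 140),
}

for name, (faculty, papers) in schools.items():
    print(f"{name}: {papers} papers, {papers / faculty:.2f} per faculty member")
```

State University looks eight or nine times more visible, yet Podunk is more productive per head; a respondent listing schools by sheer visibility cannot tell the difference.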

Ranking the “best research universities” may be more of a problem in anthropology, where we do not work in “research teams”, and where the emphasis is on departments covering the diversity of cultures and approaches to culture rather than concentrating on one area. I am actually more likely to know which university has a good Asian Studies program than which one has a good anthropology program. If I were asked to list the best universities for the anthropology of China, that would be easier. Recently the Society for Economic Anthropology weblist collectively compiled the top graduate programs for economic anthropology. That is a reasonable list: where to go to study a particular subfield. Anthropology PhD students work with a small number of scholars, so it is more important that there be scholars in their primary area of interest than that the department be a "good program" overall.

The problems with ranking "teaching universities" are even worse. First of all, teaching at the undergraduate and graduate levels is totally different. Many large state research universities have huge classes, and undergrads are often taught by TAs and instructors rather than the famous professors who appear in the catalog; those professors are freed up to do research and work with graduate students. So my recommendation for a student is totally different depending on whether they are an undergrad or looking for a PhD program.

In addition, the truth is that most of us have NO IDEA what the teaching environment is like at other universities. (And here I’m not even going into the issue of whether students prefer more experiential participatory learning, or seminar style, or more traditional lecture formats, among many other variables. The idea that there is one teaching scale, good to bad, that we can all agree on is laughable.) Even for two elite universities where I have three data points--students who attended their graduate programs--I do not know how to generalize, as some thrived, and others struggled or were not that happy. I cannot understand how anyone can confidently list the best 15 schools for teaching.

Evidently, the problem of ranking universities for teaching is a point of contention, because it was addressed in this year's justification of the methodology:

Thomson Reuters says that respondents who at the start of the survey indicate that their work is primarily teaching “are later asked to identify the one institution they would recommend that a student attend ‘to experience the best undergraduate and/or graduate teaching environment’ in their subject area.”

They thus assume that those who primarily teach are in the best position to know where the best teaching programs are. This is ridiculous. Most academics who are primarily teachers are overburdened with teaching and struggling to get a research position. Just because they primarily teach in their current position does not mean they know which universities offer excellent teaching.

Thus, since most people cannot really answer which universities are best for research or teaching, they fall back on vague notions of prestige and image. And when one adds in all the universities of the world, the results get even more confused. Do people filling out the survey really know whether Tokyo University or Beijing University or the Sorbonne is good? Aren’t people just filling out the survey based on vague impressions, many of them decades old?

I have noticed that many, indeed most, China scholars in the USA cannot remember the difference between Hong Kong University and The Chinese University of Hong Kong. I know because I am often introduced as coming from Hong Kong University (not correct). I am convinced that a small but significant portion of HKU’s lead over CUHK in the THE tables and in other such reputational rankings is due to similar mistakes by the people who fill in the surveys. In medicine and some other areas, HKU may well be better than CUHK. But since HKU does not have an anthropology department, and its history department has gone through serious difficulties, I can confidently say that in a number of the social sciences and humanities, CUHK is stronger than HKU.
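
A back-of-the-envelope sketch, with entirely invented numbers, shows how even a modest confusion rate manufactures a reputational "lead":

```python
# Illustrative only: hypothetical vote counts for two institutions
# with equal underlying reputations, where some respondents who
# mean CUHK write down "Hong Kong University" instead.
true_hku, true_cuhk = 500, 500   # invented "true" intended votes
confusion_rate = 0.15            # assumed share of CUHK votes misattributed

misattributed = true_cuhk * confusion_rate
observed_hku = true_hku + misattributed
observed_cuhk = true_cuhk - misattributed

print(f"Observed HKU votes:  {observed_hku:.0f}")   # 575
print(f"Observed CUHK votes: {observed_cuhk:.0f}")  # 425
```

Equal underlying reputations come out looking like a 35 percent gap, and nothing in the published tables reveals the error.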

Another problem with the survey is more prosaic: the software did not work. On one page, I had not finished filling in the names of universities, but I hit the Enter key instead of the Tab key and the survey went on to the next question; there is no way to go back. Then, when I was asked to select non-university research centers, the page would not let me add any, because an error message popped up claiming the center had already been selected. I had to claim “I don’t know of any NON-UNIVERSITY research only institutions in this subject area” in order to go on. I also had to run the survey three times before I could get to the end, because it hung on me twice.

I hope we can pull back the curtain and show that there is really just vague bias behind these league tables, and that they should not have the outsized importance they have achieved. I for one refuse to submit my survey; I think it is unconscionable to participate in this fraud. And I urge other scholars to publicize the tricks their deans promote to game the system. The more people realize the fraudulent and unscientific nature of these surveys, and the perverse incentives they engender, the less they will treat them as serious measures of quality.