Monday, January 16, 2012

Scholarship and Research

I attended a meeting yesterday where a colleague from the English Department noted that in Australia, some administrators (mostly from engineering and the sciences) have started making a distinction between "scholarship" and "research." On their view, "scholarship" is the uncovering of old knowledge, while "research" is the creation of new knowledge. This way of thinking is strange to most in the social sciences and humanities, where even "uncovering old knowledge" is effectively also the creation of new knowledge (if you did not know it before, or did not recognize it, then it is new!).

The attempt to distinguish "old" from "new" knowledge is especially interesting when one considers a recent Chronicle of Higher Education column in which an English professor complains about "Fast Food Scholarship"--articles that are hastily written, with poorly developed arguments, little or only anecdotal evidence, few citations, and no examination of past scholarship on the topic. She notes that journals get a wave of submissions shortly after their discipline's annual meetings; academics desperate to get into print submit their conference papers without doing the difficult work of converting a talk into a proper article.

The crisis in publishing is even worse in China, and it affects science journals too; the Gene Dog Blog copies an article in Nature News noting that China's "scientific journals are filled with incremental work, read by virtually no one and riddled with plagiarism." It includes an estimate that "one-third of the roughly 5,000 predominantly Chinese-language journals are ‘campus journals’, existing only so that graduate students and professors can accumulate the publications necessary for career advancement, according to one senior publisher," and one editor reports that 31% of papers submitted to his journal contain plagiarized material. This is probably true. Someone could do an interesting study of how relationships ("guanxi" and notions of mutual obligation) and ideas of hierarchy make it difficult for a journal editor to reject papers without destroying his social network.

Academics in China are evaluated based on the number and length of their articles, so it is not surprising that you get many long articles. Unfortunately, and again due to the dominance of science and engineering models, China wants to use impact factors to decide which journals are better. (My view is that while impact factor can show which journals are dead, a high impact factor does not equal good or important research. This is especially true in anthropology, where much of the field's best work appears in books and edited volumes, which do not enter into impact-factor calculations. Plus, as soon as you rely on impact factor as an indicator of quality, you create incentives for people to cite their own and their friends' publications.)

The bigger problem is the attempt to find quantitative and administrative means of evaluating research. Deans and provosts who know nothing about a field feel they need some "objective" way of measuring researchers' and departments' output, so they use silly measures like the number of characters written, the number of articles, or impact factor. The problem is that these do not track creativity, innovation, and research quality. In a law firm, associates are evaluated based on how much money they bring in. This is not a perfect measure of the quality of their work, but it does a fairly good job of tracking quality and quantity of work (work of the highest quality that is too slow does not help the firm), and it shows how satisfied clients are with the work. Plus, firms make allowances for other factors such as promotion work.

With academic research, however, there is no measure, like revenue or even billable hours, that can stand in for productivity. But administrators are determined to find a quantitative measure that they can use across departments, to reward one department over another. This is killing many social sciences and humanities disciplines, but perhaps because it benefits engineering, business, and medicine (so it seems, at least), many people see nothing wrong with "audit culture" and assume all gripes come from people who resist change because they are not research-active and so feel threatened. They do not see how their "reforms" are going to kill certain disciplines. Maybe they do not care.
