Two interesting recent publications address the role and effectiveness of peer review for the identification and reward of “academic excellence”. I would like to engage with their ideas in this post.
Through peer review, scholars evaluate the quality of each other’s work (most of the time confidentially). It has become the key mechanism for assessing scientific excellence, not only when it comes to journal and book publications, but also for the allocation of research grants, fellowships, awards, and university rankings, as well as – indirectly through publication records, and directly through recommendation letters and selection committees – for hiring and tenure procedures. Most of us spend a considerable amount of our time reviewing articles, applications, and the like, and indeed, there does not seem to be any clear alternative for the measurement of academic quality. Yet, despite its huge significance, peer review is – quite naturally – a very secretive process, for which no clear standards exist, and about which we know very little. What is a good review? How do peers define and recognize quality, or excellence? How do we know whether peer review produces effective and fair results?
In a recent article in the Neue Zürcher Zeitung, Margit Osterloh, a professor of business management at the University of Zurich, raised a number of very critical questions regarding the assessment of scientific quality and performance on the basis of peer-reviewed publications. On the one hand, she argues, the output focus of this measure may create distorted incentives for researchers: they may try to publish results too quickly and divide them into bits and pieces in order to increase the number of publications, since “everybody writes, nobody reads”. Researchers may also be tempted to please reviewers by correcting results and arguments in the way suggested by the anonymous peers, even if they don’t actually agree with the changes. The most fundamental doubt she raises, however, is that peer review may be a very unreliable measure of quality. There is hardly any research on this, but she cites a study in the neurosciences which demonstrates how strongly different peer reviews of the same work tend to diverge from each other. In addition (and this is not a point Osterloh addresses), in times of Google Scholar and specialized research communities, double-blind reviewing is often not quite as blind as it is supposed to be, which opens the gates for power dynamics to affect the review process. So the basic question is whether peer review “works”, i.e. whether and how reviewers tell good from bad from excellent in a fair and prudent way.
On this topic, the new book by Michèle Lamont, “How Professors Think: Inside the Curious World of Academic Judgment” (2009), is a fascinating read. Lamont, a Harvard professor of sociology, presents the results of a study in which she observed and interviewed the panelists of some of the most competitive and prestigious American fellowship and grant competitions. These are the gatekeepers who have to define excellence (within and across disciplines), rank proposals, and identify the cream that should eventually rise to the top. In a nutshell, she investigates whether Bourdieu was right in arguing that peer review is basically about power, since “judgments of scholars reflect and serve their position in their academic field, even if they naturalize these judgments and legitimize them in universalistic terms” (2009: 20). Indeed, she does find that the danger of “homophily” (appreciating work that most resembles one’s own) is very real. However, Lamont’s results also demonstrate that the reviewers were not just motivated by the opportunity to maximize their position, but also by intellectual curiosity, a sense of responsibility, and the sheer pleasure of discovering exciting research. She also shows in a very vivid way how important the process of discussing and defending definitions of excellence is for the self-concept of academics.
What does Lamont’s study of grant panels imply for Osterloh’s generally critical assessment of peer review for journal publications? Are there transferable insights that tell us about the value and quality of peer review? Yes and no. Yes, because Lamont shows that intrinsic motivations, both substantive and identity-related, are important when academics perform their work as evaluators. No, because a) her study focuses on a very selective group of highly established scholars who have nothing to prove and who adjudicate the most coveted grants, rather than on the average reviewer of an average article; and b) because – most importantly – Lamont studied deliberative face-to-face processes among reviewers. The deliberations and group dynamics made sure that reviews had to be carefully argued, fair, and well prepared. In a good review, arguments are substantiated and balanced, and the reviewer truly engages with the author’s work and ambitions. In journal article reviews, such social control of quality and fairness is lacking. The journal editor is the final gatekeeper, who has the difficult task of determining the value of a review and of weighing different reviews against each other.
Given the importance of peer review in today’s academia, we may want to start thinking about how the quality of someone’s work as a reviewer could be factored into the evaluation of his or her performance, alongside standard measures of research and teaching quality. This would be more of an input measure, since it reflects a reviewer’s care for the discipline and for quality as such. So far, academics list the journals they act as reviewers for in their CVs, but one could imagine more substantive measures, such as the extent to which someone is repeatedly asked for advice by editors, or – why not – taking the last five peer reviews someone has written into account in appraisal interviews or hiring procedures. I am not sure this is the perfect solution, but it would shed some light on an important aspect of our work.
Osterloh, Margit (2010). “Unternehmen Universität? Wie die Suche nach Effizienz, Output-Messung und das Ranking-Fieber in der Wissenschaft zu Opportunismus und Ideenarmut führen können” [“The University as a Business? How the Quest for Efficiency, Output Measurement, and Ranking Fever in Academia Can Lead to Opportunism and a Poverty of Ideas”], Neue Zürcher Zeitung, Nr. 113 (19.5.2010).