Reviewing the reviewers?

Two interesting recent publications address the role and effectiveness of peer review for the identification and reward of “academic excellence”. I would like to engage with their ideas in this post.

Through peer review, scholars evaluate the quality of each other’s work, most of the time confidentially. It has become the key mechanism for assessing scientific excellence, not only for journal and book publications, but also for the allocation of research grants, fellowships, and awards, for university rankings, and – indirectly through publication records, and directly through recommendation letters and selection committees – for hiring and tenure procedures. Most of us spend a considerable amount of our time reviewing articles, applications, and the like, and indeed there does not seem to be any clear alternative for the measurement of academic quality. Yet, despite its huge significance, peer review is – quite naturally – a very secretive process, for which no clear standards exist and about which we know very little. What is a good review? How do peers define and recognize quality, or excellence? How do we know whether peer review produces effective and fair results?

In a recent article[1] in the Neue Zürcher Zeitung, Margit Osterloh, a professor of business management at the University of Zurich, raises a number of very critical questions regarding the assessment of scientific quality and performance on the basis of peer-reviewed publications. On the one hand, she argues, the output focus of this measure may create distorted incentives for researchers: they may try to publish results too quickly and divide them into bits and pieces in order to increase the number of publications, since “everybody writes, nobody reads”. Researchers may also be tempted to please reviewers by correcting results and arguments in the way suggested by the anonymous peers, even if they do not actually agree with the changes. The most fundamental doubt she raises, however, is that peer review may be a very unreliable measure of quality. There is hardly any research on this, but she cites a study in the neurosciences which demonstrates how strongly different peer reviews of the same work tend to diverge from each other. In addition (and this is not a point Osterloh addresses), in times of Google Scholar and specialized research communities, double-blind reviewing is often not quite as blind as it is supposed to be, which opens the gates for power dynamics to affect the review process. So the basic question is whether peer review “works”, i.e. whether and how reviewers tell good from bad from excellent in a fair and prudent way.

On this topic, the new book by Michèle Lamont, “How Professors Think: Inside the Curious World of Academic Judgment” (2009), is a fascinating read. Lamont, a Harvard professor of sociology, presents the results of a study in which she observed and interviewed the panelists of some of the most competitive and prestigious American fellowships and grants. These are the gatekeepers who have to define excellence (within and across disciplines), rank proposals, and identify the cream that should eventually rise to the top. In a nutshell, she investigates whether Bourdieu was right in arguing that peer review is basically about power, since “judgments of scholars reflect and serve their position in their academic field, even if they naturalize these judgments and legitimize them in universalistic terms” (2009: 20). Indeed, she does find that the danger of “homophily” (appreciating the work that most resembles one’s own) is very real. However, Lamont’s results also demonstrate that the reviewers were motivated not just by the opportunity to maximize their own position, but also by intellectual curiosity, a sense of responsibility, and the sheer pleasure of discovering exciting research. She also shows in a very vivid way how important the process of discussing and defending definitions of excellence is for the self-concept of academics.

What does Lamont’s study of grant panels imply for Osterloh’s generally critical assessment of peer review for journal publications? Are there transferable insights that tell us about the value and quality of peer review? Yes and no. Yes, because Lamont shows that intrinsic motivations, both substantive and identity-related, matter when academics perform their work as evaluators. No, because a) her study focuses on a very select group of highly established scholars who have nothing to prove and who adjudicate the most coveted grants, rather than on the average reviewer of an average article; and b) – most importantly – Lamont studied deliberative face-to-face processes among reviewers. The deliberations and group dynamics ensured that reviews had to be carefully argued, fair, and well prepared. In a good review, arguments are substantiated and balanced, and the reviewer truly engages with the author’s work and ambitions. In journal article reviews, such social control of quality and fairness is lacking. The journal editor is the final gatekeeper, who has the difficult task of determining the value of a review and of weighing different reviews against each other.

Given the importance of peer review in today’s academia, we may want to start thinking about how the quality of someone’s work as a reviewer could be factored into the evaluation of his or her performance, alongside standard measures of research and teaching quality. This would be more of an input measure, since it reflects a reviewer’s care for the discipline and for quality as such. So far, academics simply list in their CVs the journals for which they act as reviewers, but one could imagine more substantive measures, such as the extent to which someone is repeatedly asked for advice by editors, or – why not – taking into account, in appraisal interviews or hiring procedures, the last five peer reviews someone has written. I am not sure this is the perfect solution, but it would shed some light on an important aspect of our work.

[1] Osterloh, Margit (2010). “Unternehmen Universität? Wie die Suche nach Effizienz, Output-Messung und das Ranking-Fieber in der Wissenschaft zu Opportunismus und Ideenarmut führen können” [“The university as a business? How the quest for efficiency, output measurement, and ranking fever in academia can lead to opportunism and a poverty of ideas”], Neue Zürcher Zeitung, Nr. 113 (19.5.2010).


4 thoughts on “Reviewing the reviewers?”

  1. Very interesting, although, as you said, the pool of peer reviewers is different from the pool of panelists. On peer review, we should not be too surprised to discover that peer reviewers are human beings, and as such they use heuristics, they care about academic tribes, etc. But the counterfactual is what would happen without peer review – the example of the many socio-legal journals that do not use double-blind peer review is instructive. Further, in all good political science journals peer reviewers ADVISE the editors, but editorial choices are made by the editors. This means that editors can correct for the systematic shortcomings of the review process. It also raises the importance of editors’ integrity and their responsibility, though.

    • Thank you, Claudio. I agree with everything you write. The purpose was not to criticize peer review (as I wrote in the post, I cannot think of any valuable alternative), but to discuss its importance and intricacies. And certainly, the next book should be … “how editors think” … no doubt that would be a bestseller!

      • It is true, however, that peer review is under pressure in the social sciences – there is also a profound unfairness in the political economy of the journals business, with all profits going to publishers rather than editors and reviewers. It would be useful to think of alternatives to peer review. Do you know if in economics or other social sciences there are examples of top journals that have gone ‘beyond’ peer review – whatever that means? I guess that in principle quality can be delivered using different tools. For example, if you post something silly on the net, you find that the next day there are 1000 people who dislike what you said and only 1 or 2 with thumbs up. Shall we post anonymous articles on the journal’s website to be reviewed by the readership, rather than by only two people chosen by the editors…? This is HOW EDITORS THINK at 18h00 on a Friday before a long holiday weekend. Apologies for wasting your time….

  2. I would not go for the “like/dislike”-button option, since I do believe that good reviews have to be carefully argued (to be helpful to editors and fair to authors). Btw: living in Switzerland, I don’t totally trust the wisdom of the majority in anonymous polls in general. Alternatives? I don’t know about other disciplines, but unless they have changed it, Politics & Society takes decisions on the basis of deliberations in the editorial committee, rather than on the basis of double-blind peer review. That is more like the grant-panel version. It is still peer review, obviously, but it is a deliberative process. I guess it is difficult to say which one is better, since in any case everything depends on how peers and editors do their work, which is why I thought that this part of everyday scholarly work deserves attention and some form of credit. Was this what you meant when you said it is unfair that no profits go to editors and reviewers?
