Rumours had been circulating for a while, but at this year’s APSA (American Political Science Association) conference it became clearer what was going on: the European Consortium for Political Research (ECPR) will have to face a challenger, the European Political Science Association (EPSA). EPSA was founded in June 2010 as a competitor to the ECPR, which has not always been perceived as the most efficient organization. The ECPR, as the central political science association in Europe, organizes quite popular joint sessions (week-long workshops on fixed topics) and, since 2001, general conferences as well. With regard to the general conferences in particular, however, the ECPR has seemed to be an organization with a declining learning curve. At last year’s ECPR conference in Potsdam it was not possible to set up a fully working paper room in which conference papers could be distributed easily before the conference; the coordination between section organizers, panel organizers, and paper presenters has also often been criticized. A group of political scientists has now founded EPSA, which sets itself apart by being organized around individual rather than organizational membership (as the ECPR is) and by being active on Facebook and Twitter.
At this year’s APSA, EPSA already challenged the ECPR by scheduling its reception at the same time as the popular ECPR reception funded by Cambridge University Press. This confrontational approach did not seem particularly successful: only a few people turned up, whereas the ECPR reception drew the usual crowd. This may be an early hint that, in the long run, organizing conferences that complement rather than compete with the ECPR’s would be the more promising strategy. Still, one can hope that political science in Europe benefits from this challenger, both because it may motivate the ECPR to improve and because it may offer attractive alternative or additional conferences. In June 2011, EPSA will hold its first conference in Dublin – the call for papers will be out soon.
The slogan “publish or perish” is certainly appropriate in the sense that, in most countries and universities, becoming a professor is impossible unless a candidate has published. This slogan does not tell us, however, how many publications of what type are required to make it into and survive in academia. There are, of course, no absolute standards. As in any competitive situation, how good is “good enough” depends on how good your competitors are. Fair enough. But can we really compare in any meaningful way political scientist A with political scientist B, or with any given peer group of A? Can we compare political science department A with department B? Can we compare academics and institutions in country A with those in country B, strongly differing national academic traditions, requirements, and capacities notwithstanding?
Over at the University Blog, Ferdinand von Prondzynski, President of Dublin City University between 2000 and 2010, writes:
One crucial issue facing Irish higher education over the next while will be institutional diversity. Broadly the question goes like this: we are a small country, so why do we need seven universities that cover more or less the same territory, and a dozen or so institutes with the same mission, and some other colleges? Why not identify a specialism for each and then ensure they are the best they could be in that area? Or maybe, why not identify one or two all-rounder institutions, with everyone else occupying a niche?
These questions are highly relevant for the Swiss political science landscape.
As soon as I learned that a bunch of World Cup statistics are readily available online, I could not help checking an obvious hypothesis, namely that countries with more members of the American Political Science Association (APSA) should score goals more efficiently, if not be more successful overall. After all, American political science is decried for spreading publish-or-perish norms that lead to the proliferation of minimum publishable units, so it is only natural to expect that countries where more political scientists are affiliated with APSA tend to be less wasteful when it comes to scoring precious goals at the world’s most prestigious competition.
Well, it turns out that there can be too much of a good thing. APSA membership helps improve the goals-to-shots ratio, but only up to a point. The French could have avoided an entertainingly embarrassing situation if their political scientists had been more internationalized, and the Dutch would almost certainly not have reached the final without their good share of APSA members. The main problem of the English and Swiss teams, however, was that their universities have too many scholars who mingle with American political science. The advice for Fabio Capello and Ottmar Hitzfeld is therefore straightforward: reduce the share of APSA members in England and Switzerland by about 50% and you will do much better next time.
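The inverted-U pattern described above – goal efficiency rising with APSA membership up to a point, then falling – is what a quadratic regression would capture. Here is a minimal sketch with invented numbers (the actual World Cup and APSA figures are not reproduced here), showing how one would estimate the turning point:

```python
import numpy as np

# Hypothetical data, invented purely for illustration: a country's
# (standardized) share of APSA members and its goals-to-shots ratio.
apsa_share = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3, 1.5])
goal_ratio = np.array([0.08, 0.11, 0.13, 0.14, 0.14, 0.13, 0.11, 0.08])

# Fit goal efficiency as a quadratic function of APSA membership:
# an inverted U implies a negative coefficient on the squared term.
b2, b1, b0 = np.polyfit(apsa_share, goal_ratio, deg=2)

# Efficiency peaks where the derivative is zero: x* = -b1 / (2 * b2)
optimum = -b1 / (2 * b2)
print(f"quadratic term: {b2:.3f}, peak APSA share: {optimum:.2f}")
```

With real data one would of course also want standard errors and a few control variables, but the turning-point logic – the point past which more APSA membership hurts rather than helps – is just the vertex of the fitted parabola.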
I came across this futuristic seating arrangement in one of the ETH’s buildings last week. It turns out this little monster is, or rather was, a supercomputer named Cray X-MP/28. It was the world’s fastest computer in the mid-1980s (64 MB RAM, two 118 MHz processors). The computer you are using to read these lines almost surely has more computing power than that – but, unlike the Cray X-MP with its catalog price of around ten million francs, your machine certainly cost far less. While back in the 1980s the Cray X-MP was mainly used by physicists and chemists, nowadays political science too depends heavily on the availability of computing power, in both state-of-the-art theoretical and empirical research.
Political scientists working on formal models of politics, for example, use computer programs to solve complex systems of equations within a few minutes. If I could reactivate the former ten-million-franc supercomputer Cray X-MP/28, it would probably need hours to deliver the solutions to the same equations. Or consider empirical political research that applies advanced statistical models. Some of these models still take hours, if not days, to estimate. And sometimes even computers with processors outperforming those of our huge yellow friend by a factor of 25 or more fail to deliver any results, because the estimations do not converge.
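To give a flavour of the kind of computation involved, here is a minimal sketch (not any of the actual models alluded to above) that solves a moderately large linear system with NumPy – the sort of building block that sits behind both equilibrium computation in formal models and the estimation of statistical models, and that a modern machine handles in well under a second:

```python
import numpy as np

# Illustrative only: a random but well-conditioned linear system A x = b.
# The size n is arbitrary; real models can be much larger or sparser.
rng = np.random.default_rng(42)
n = 2000
A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonal dominance keeps it well-conditioned
b = rng.standard_normal(n)

# Direct solve via LU factorization under the hood.
x = np.linalg.solve(A, b)

# Check that the solution satisfies the system up to floating-point error
# (should print True for a well-conditioned system like this one).
print(np.allclose(A @ x, b))
```

The convergence problems mentioned above arise in iterative procedures (maximum likelihood, MCMC, and the like) rather than in direct solves like this one, which either succeed or fail outright.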
So even though I sometimes wish I could travel back to the time when political science was more about sitting down in a comfortable armchair and philosophizing about how the world works and should work, our profession nowadays is not at all like that. The next time I’m waiting for my computer to deliver results, however, I can sit down on what is probably the world’s most expensive furniture arrangement and have a cup of coffee.
Two interesting recent publications address the role and effectiveness of peer review for the identification and reward of “academic excellence”. I would like to engage with their ideas in this post.
Through peer review, scholars evaluate the quality of each other’s work (most of the time confidentially). It has become the key mechanism for assessing scientific excellence, not only for journal and book publications but also for the allocation of research grants, fellowships, awards, and university rankings, as well as – indirectly through publication records, and directly through recommendation letters and selection committees – for hiring and tenure procedures. Most of us spend a considerable amount of our time reviewing articles, applications, and the like, and indeed there does not seem to be any clear alternative for measuring academic quality. Yet, despite its huge significance, peer review is – quite naturally – a very secretive process, for which no clear standards exist and about which we know very little. What is a good review? How do peers define and recognize quality, or excellence? How do we know if peer review produces effective and fair results?
A colleague forwarded this job opening to me:
Just to make it clear for those who do not read German: the Department of Political Science of the University of Mainz has an open assistant professor position reserved exclusively for women. This is the first time I have seen this kind of affirmative action. How common is it? What do you think about it?
Over at The Monkey Cage, Erik Voeten writes:
the American Economic Association (AEA) has started an experiment in which authors of rejected papers can voluntarily ask the editor of its flagship journal (the American Economic Review) to forward referee reports to one of the four field journals that the AEA publishes. The basic idea is that many submissions are good but don’t quite make the cut for the flagship journal. Allowing the field journals to piggyback on referee reports that were already written has major efficiency advantages for authors, journal editors and referees.
It seems to me that this amounts to a centralized peer-review system. There are strong incentives for everyone to submit everything to the AER: in the best case the paper is accepted; in the worst case another good journal can quickly decide whether or not to accept it. If there were additional steps “down the food chain of journals” (as Erik Voeten puts it), every paper would eventually find its place at the appropriate level, and the incentives to submit at the top would be even stronger.
Of course, if there is one thing economists like to do, it’s thinking about incentives, so I wonder whether they’ve figured this out?