Open preprint reviews by Adam Eyre-Walker

A simple proposal for the publication of journal citation distributions

Vincent Larivière, Véronique Kiermer, Catriona J MacCallum, Marcia McNutt, Mark Patterson, Bernd Pulverer, Sowmya Swaminathan, Stuart Taylor, Stephen Curry

Review posted on 15th July 2016

We have known since Seglen’s seminal paper in the 1990s that the distribution of citations across the papers published in a journal is highly skewed, that the citation distributions of different journals overlap considerably, and that the number of citations a paper receives correlates poorly with the journal’s impact factor (IF). These observations have been used to argue that the journal IF should not be used to assess the merit or quality of a particular paper. The usual suggestion is that either the paper is read, or that article-level metrics are used to assess merit or quality instead.
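To make these claims concrete, here is a minimal simulation sketch written for this review, not taken from the paper under discussion: it assumes lognormal citation distributions (a common choice for modelling skewed citation data) with invented parameters, and shows how two journals with different “IFs” can still overlap heavily while the paper-level correlation between citations and journal IF stays weak.

```python
# Illustrative sketch (not the authors' analysis): simulate two journals
# whose per-article citation counts follow a skewed lognormal distribution,
# then check the overlap and the citations-vs-IF correlation.
# The lognormal model and all parameters are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Journal A has the higher mean (hence the higher "IF"), but both are skewed.
journal_a = rng.lognormal(mean=2.0, sigma=1.0, size=1000)
journal_b = rng.lognormal(mean=1.5, sigma=1.0, size=1000)

if_a, if_b = journal_a.mean(), journal_b.mean()
print(f"'IF' of A: {if_a:.1f}, 'IF' of B: {if_b:.1f}")

# Overlap: the fraction of B's papers that out-cite the median paper in A.
overlap = (journal_b > np.median(journal_a)).mean()
print(f"Share of B's papers above A's median: {overlap:.0%}")

# Correlation between a paper's own citations and its journal's IF.
citations = np.concatenate([journal_a, journal_b])
journal_if = np.concatenate([np.full(1000, if_a), np.full(1000, if_b)])
r = np.corrcoef(citations, journal_if)[0, 1]
print(f"Paper citations vs journal IF: r = {r:.2f}")
```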

Reading the paper may be considered the gold standard, but it is impractical in many circumstances in which one is interested in assessing merit; if, for example, you have 100 CVs to look through, you can’t possibly read all their papers, or even the best three. Even the papers of those on the shortlist may be too many, and you may not be an expert in the field under consideration.

As for citations, as all researchers know, articles are cited for all sorts of reasons, often incorrectly. The only quantitative analysis I know of concluded that the vast majority of the variation in the number of citations a paper receives is just noise and has nothing to do with the underlying merit of the paper (http://journals.plos.org/plosb...). I suspect the same is true for other article-level metrics.
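A minimal sketch of this “mostly noise” idea, under assumed parameters rather than the cited paper’s actual model: let each paper’s log citation count be a latent merit score plus a noise term with three times the spread, and see how little of the citation variance merit then explains.

```python
# Illustrative sketch (assumed model, not the cited paper's analysis):
# citations = latent merit + dominant noise, on a log scale.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

merit = rng.normal(0.0, 1.0, size=n)   # latent merit (arbitrary units)
noise = rng.normal(0.0, 3.0, size=n)   # noise with 3x the standard deviation
log_citations = merit + noise

r = np.corrcoef(merit, log_citations)[0, 1]
print(f"merit vs citations: r = {r:.2f}, variance explained = {r**2:.0%}")
# With the noise SD three times the merit SD, merit explains only ~10%
# of the variance in citations; the rest is noise by construction.
```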

I find there is a strange disconnect in arguments about the IF. The journal IF must contain some information about the merit of the papers published in a journal, because we, the scientific community, are the ones who determine where things get published and what gets cited. We don’t publish any old paper in Nature and Science; we publish what we believe is the best and most interesting science. Now sometimes, maybe even often, we will get this wrong, but an informed decision is made to publish a paper in a particular journal. In a sense, all the IF represents is someone else’s opinion about the merit of a paper. I think this might be one of the reasons people are uncomfortable with the IF, along with the fact that the IF is clearly subject to error as a measure of merit. However, all measures of merit are subject to error, and there is no evidence that the IF is any worse (http://journals.plos.org/plosb...). I’m not suggesting that the IF should be used blindly to assess papers and researchers, but suggesting that it contains little or no information about the merit of a paper seems illogical to me.
