For the 100th volume of the journal Biogeochemistry, Bob Howarth, its founding editor-in-chief, was invited to write an editorial in which he would celebrate the continued success of the journal. Bob wrote a characteristically upbeat article highlighting the continued growth of the field of biogeochemistry, and the key role the journal has played in this evolution. While it would be hard to argue with anything Bob has written, it appears useful at the start of the 101st volume to point out a worrying trend that has unfolded over the last few years and seems to be gathering momentum. The point is not to counterbalance Bob’s positive outlook with a slightly gloomier perspective. The purpose here is to highlight what may relatively rapidly turn out to be a serious problem for the field of biogeochemistry, and to stimulate a debate on corrective measures so that Biogeochemistry can continue to prosper on the positive trajectory described in Bob’s editorial.

When Bob started Biogeochemistry 26 years ago, reviews were obtained by blindly mailing paper copies of manuscripts to potential reviewers and waiting for paper copies of reviews to be returned by mail. When one of us (K.L.) took over as Editor-in-Chief in 2004, our journal, like most other journals, moved to a web-based reviewing system, and the editorial board was assured that reviewing would become far easier. That assurance seemed realistic, since manuscripts would be available online to all reviewers and reviews would be submitted electronically. Even a conservative observer could expect that, simply by avoiding snail mail, the timeline of the entire review process would be drastically shortened.

Reality has turned out to be quite different. The reviewing process has become increasingly difficult, associate editors are frustrated by the extreme hassle of finding willing reviewers, and it is not uncommon to have to invite at least 10 scientists before one or two agree to review. This is not just the case with Biogeochemistry. Fellow editors in most disciplines are reporting, with increasing frequency and a rising sense of alarm, that a growing proportion of review invitations are declined across the board (e.g., Hauser and Fehr 2007; Anonymous 2009; Baveye et al. 2009; Martin et al. 2009; Fox and Petchey 2010; Trimble et al. 2010; Siegel and Baveye 2010).

Most often, researchers declining invitations to review invoke the fact that they are too busy to add yet another item to their already overcommitted schedules. Have things changed dramatically in the last 26 years? In many respects, they have. First, the number of manuscripts submitted to most journals, and the number of journals in most disciplines, have both increased tremendously over the past two decades. In 1990, Biogeochemistry published 46 articles; in 2009, the number was 91. That increase is still moderate compared to many other journals, where the number of published articles has tripled or quadrupled over the same period. There are several reasons for these increases, not the least of which is a frenzy to publish, stimulated by large-scale bean-counting in universities and research centers (Baveye 2010). Also, several decades ago, the funding rate in most ecological panels at the National Science Foundation was close to 20–25%, whereas it now hovers around 5–10%. As a result, we are all writing more and more proposals and chasing grants frantically, as funding rates decline but the cost of doing modern science keeps rising. An additional deterrent for reviewers is that as our fields succeed and expand, they also become less and less personal: we are less likely to be asked to review the paper of someone we know personally, with whose work we are familiar, and more likely to be asked to review the paper of a total stranger, a competitor, or someone else’s graduate student.

Yet, however one looks at it, peer review in any of its different flavors (e.g., open, single-blind, or double-blind) is a crucial component of the publishing process, and nobody has yet come up with a viable alternative. Therefore, we need to find a way to convince our colleagues to review manuscripts more often, including the “prima donnas” who delude themselves into thinking that the task of reviewing is somehow too menial for them. This can be done with a stick or with various types of carrots. Some discussion of what is likely to work best in each discipline would be most welcome.

The “sticks”, occasionally envisaged by editors (e.g., Anonymous 2009), are straightforward, at least to explain. For the peer-reviewing enterprise to function well, it is a civic duty for each researcher to review every year as many manuscripts as the number of reviews he or she receives for his or her own papers. So, someone submitting 10 manuscripts in a given year should be willing to review 20–30 manuscripts during the same timeframe (assuming that each manuscript is reviewed by 2 or 3 individuals, as is commonly the case). If this person does not meet the required quota of reviews, restrictions would be imposed on the submission of any new manuscript for publication. Such a “stick” has been advocated in the case of the submission of grant proposals (Anonymous 2009). Hauser and Fehr (2007), in an elaborate and provocative system of “penalties” or “costs”, suggest that for every manuscript a reviewer refuses to review for a given journal, the journal add a one-week delay to the reviewing of that person’s own next submission. For reviewers who agree to review but subsequently turn in their review late, Hauser and Fehr (2007) advocate holding the reviewer’s next personal submission to the journal in editorial limbo, before it is sent out for review, for twice the sum of the number of days the manuscript was held since receipt and the number of days the review was past the deadline.
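As a rough, purely illustrative sketch (not part of Hauser and Fehr’s own text), the arithmetic of such penalties could easily be tracked automatically; the function names, dates, and bookkeeping below are hypothetical assumptions, chosen only to mirror the rules just described:

```python
from datetime import date

def declined_review_penalty(n_declined, delay_per_decline_days=7):
    # Hypothetical rule: one week of added editorial delay to the
    # reviewer's own next submission, per declined review invitation.
    return n_declined * delay_per_decline_days

def late_review_penalty(received, deadline, submitted):
    # Hypothetical rule: the reviewer's next submission is held for twice
    # the sum of the days the manuscript was held and the days past deadline.
    days_held = (submitted - received).days
    days_late = max(0, (submitted - deadline).days)
    return 2 * (days_held + days_late)

# Example: a review received 1 March 2010, due 1 April, returned 10 April
print(late_review_penalty(date(2010, 3, 1), date(2010, 4, 1), date(2010, 4, 10)))
# -> 2 * (40 + 9) = 98 days of editorial limbo for the next submission
```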

Another possibility, a less harsh stick that would also be easier to track, is to ask any author submitting a paper to a journal to agree to review a minimum of 2 papers for that journal within the next 2 years. Because all editing is web-based now, journals could keep a database of submitting authors with their areas of expertise, and authors would sign an electronic form agreeing to this policy as they submit a paper. This is likely one direction that we will explore at Biogeochemistry in the near future.
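A minimal sketch of what such a ledger of submitting authors might look like is given below, assuming one simple record per author; all names and fields are hypothetical and do not describe any actual Biogeochemistry system:

```python
from dataclasses import dataclass

@dataclass
class SubmittingAuthor:
    # Hypothetical record a journal might keep for each submitting author
    name: str
    expertise: list          # keywords describing areas of expertise
    reviews_owed: int = 0    # reviews still owed under the agreement
    reviews_done: int = 0

    def register_submission(self, reviews_per_submission=2):
        # On each submission, the author agrees to review (e.g.) 2 papers
        # for the journal within the next 2 years.
        self.reviews_owed += reviews_per_submission

    def record_review(self):
        self.reviews_owed = max(0, self.reviews_owed - 1)
        self.reviews_done += 1

author = SubmittingAuthor("A. Researcher", ["nitrogen cycling", "wetlands"])
author.register_submission()
author.record_review()
print(author.reviews_owed)  # 1 review still owed within the agreed window
```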

A hurdle faced by any implementation of an automatic accounting of reviewing activities, or by any direct appeal to the civic-mindedness of researchers, is that some civically-challenged individuals will unavoidably try to defeat the system by writing short, useless reviews just to make up the numbers. Pointing out typos and syntax errors in a manuscript, as some reviewers limit themselves to doing, is useful, but not hugely so. Identifying problems and offering ways to overcome them, proposing advice on how to analyze data better, or editing the text to increase its readability are all ways to make more substantial contributions. Generally, one might consider that there is a gradation of usefulness, from reviews focused on finding flaws in a manuscript to those focused on helping authors improve their text. To avoid poor reviews being tallied in any accounting scheme, someone would have to assess whether reviews meet minimal standards of quality before they can be counted. With this feature, and additional allowances (e.g., to let young researchers get established in their careers), it will be challenging to keep the review accounting system from rapidly becoming unwieldy.

An alternative approach, instead of sanctioning bad reviewing practices or simply making reviewing a requirement for publication, would be to actively reward good reviewing ethics. Several journals already do that in a number of ways, for example by giving awards to outstanding reviewers (Baveye et al. 2009). The lucky few singled out by such awards see their reviewing efforts validated, but fundamentally these awards do not change the unsupportive atmosphere in which researchers review manuscripts. The problem has to be attacked at its root, in the current culture of universities and research centers, where administrators tend to equate research productivity with the number of articles published and the amount of extramural funding brought in (Siegel and Baveye 2010). Annual activity reports occasionally require individuals to mention the number of manuscripts or grant proposals they have reviewed, but these data are currently unverifiable and are therefore generally assumed not to matter at all for promotions or salary adjustments.

Fortunately, there may be a way out of this difficulty. All major publishers have information on who reviews what, how long reviewers take to respond to invitations, and how long it takes them to send in their reviews. All it would take, in addition, would be for the editors or associate editors who receive reviews to assess and record their usefulness. The result would be a very rich data set which, if made available to universities and research centers in a way that preserves the anonymity of the peer-review process, could be used fruitfully to evaluate individuals’ reviewing performance and impact. A debate among scientists would likely result in a reliable set of guidelines on how to evaluate peer reviews.
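As an illustration only, the kind of per-reviewer summary that could be derived from such records might look like the sketch below; the field names and the 1–5 “usefulness” score are assumptions rather than any publisher’s actual data model:

```python
def reviewer_summary(records):
    # Each record is assumed to be a dict with 'reviewer', 'days_to_respond',
    # 'days_to_review', and an editor-assigned 'usefulness' score (1-5).
    totals = {}
    for r in records:
        s = totals.setdefault(r["reviewer"],
                              {"n": 0, "respond": 0, "review": 0, "useful": 0})
        s["n"] += 1
        s["respond"] += r["days_to_respond"]
        s["review"] += r["days_to_review"]
        s["useful"] += r["usefulness"]
    return {name: {"reviews": s["n"],
                   "mean_days_to_respond": s["respond"] / s["n"],
                   "mean_days_to_review": s["review"] / s["n"],
                   "mean_usefulness": s["useful"] / s["n"]}
            for name, s in totals.items()}

stats = reviewer_summary([
    {"reviewer": "A", "days_to_respond": 2, "days_to_review": 20, "usefulness": 4},
    {"reviewer": "A", "days_to_respond": 5, "days_to_review": 35, "usefulness": 3},
])
print(stats["A"]["mean_usefulness"])  # 3.5
```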

Beyond making statistics available to decision-makers, other options are also available to raise the visibility and recognition of peer reviews (Baveye 2010). Rightly or wrongly, universities and research centers worldwide now rely more and more on some type of scientometric index, like the h-index, to evaluate the “impact” of their researchers. Many of these indexes, and certainly the h-index, implicitly encourage researchers to publish more articles, which distracts them from engaging in peer reviewing. In addition, none of these indexes, at the moment, encompasses in any way the often significant impact individuals can have on a discipline through their peer reviewing. One could conceive of scientometric indexes that include some measure of peer-reviewing impact, calculated on the basis of the statistics mentioned earlier.
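Purely as a thought experiment, and not as a concrete proposal, such a composite index might combine an existing index with a reviewing term; the functional form, the weight, and the 1–5 usefulness scale below are all hypothetical:

```python
def reviewing_adjusted_index(h_index, reviews_per_year, mean_usefulness,
                             weight=0.5):
    # Illustrative only: add to the usual h-index a weighted term that
    # credits both the volume and the (editor-assessed) quality of reviews.
    reviewing_impact = reviews_per_year * (mean_usefulness / 5.0)
    return h_index + weight * reviewing_impact

print(reviewing_adjusted_index(h_index=20, reviews_per_year=12, mean_usefulness=4))
# -> 20 + 0.5 * 12 * (4/5) = 24.8
```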

Clearly, these “carrot-type” developments will not happen overnight. Before any of them can materialize, a necessary first step is for researchers to discuss with their campus administrations, or the managers of their research institutions, the crucial importance of peer reviewing and the need to have this activity valued in the same way that research, teaching, and outreach are. Such a debate is long overdue. Once administrators perceive that there is a need in this respect, are convinced that it will not cost a fortune to give peer reviewing more attention, and formulate a clear demand to librarians and publishers to help move things forward, there is every hope that sooner or later the scholarly publishing enterprise will once again operate under optimal conditions.