The Peaks, Valleys, Perils, and Rewards of Peer Review: Part I. How to Write Peer Evaluations

January 29, 2019
by Natalie B. Dohrmann

Some Modest Advice for Authors and Evaluators

(for Part II: How to Read Peer Evaluations, click here)

This advice comes from reading hundreds of these: the good, the lazy, and the vicious. When we read a well-executed evaluation (and we often do), we feel better about the field and about humanity in general. Unlike the articles themselves, which will bring public recognition to their authors, peer evaluators toil in darkness without any hope of reward beyond the karmic. Good readers are knowledgeable, that’s a given, but a really effective peer evaluator is also genuinely curious, generous, and hard working. Kindness is nice, too, but not mandatory if the other attributes are in place. Thankfully there are many of this ilk out there, and JQR works hard to develop and reach out to such readers wherever possible. We work these saints mercilessly.

Yet we all know that not everyone who works under the cloak of anonymity is on their best behavior. Indeed, there are some evaluators to whom we won’t turn again, for a range of reasons. The first is that their evaluations are too brief and lack substantive content. An evaluation of only a few lines—even a few lines of glowing praise—is not helpful to us or to the author. Since such reports do not specify what is original or groundbreaking about a piece, they cannot counterbalance a longer evaluation with substantive critique, and so we cannot use them effectively to build a case for accepting a piece when the decision is close. Other readers we avoid are consistently or self-servingly savage, or single-mindedly forward their own theses about the essay without truly engaging the paper before them.

Problems aside, we at JQR have little reason to be cynical about peer review. While there may be some meanness out there, it is the exception, not the rule.

So a few rules for the world we’d like to live in.

  1. The golden rule. It goes without saying, but, well, we think it needs to be said: write the evaluation you would like to receive, even if—especially if—it is a critical or negative assessment.
  2. Read the article. Do the work of fully understanding what the author wanted to do and how, and then make it clear that you have done so by opening the evaluation with a summary of the thesis, methodology, and the evidence adduced. A negative report is easier to swallow if the author feels that the evaluator at least understood the paper. (Yes, this too needs to be said.)
  3. Be part of the solution. To the extent possible, help the author reach their stated goal. This may mean refining the goal, sharpening, redirecting, limiting. The evaluator’s job is not to tell the author that the evaluator’s own replacement thesis is the one they should be arguing. If the stated goal is deemed worthy, then the work is to help the author hone their arguments so they build most effectively toward that goal: point out weak and strong points, suggest additional reading, primary and secondary, or explain clearly why the goal is unattainable using the tools currently applied.
  4. Don’t be dismissive. JQR sends most submissions out for review. Sometimes we realize a paper is not ready for publication (sometimes really not ready), but if that author is credibly trained and in the profession, then we feel it is good for the guild for them to hear unbiased reactions from specialists and senior people in the field. In these cases evaluators at times complain that their time is wasted reading weak essays. It is not.

(Note well, though, eager authors: this should not be read as an invitation to submit work that is not polished. The field is small, and readers may put two and two together when the topic of a very weak paper they read reappears with a name attached in the program of the AJS. First impressions still count for something.)

Peer evaluators do God’s work. It is thankless, unpaid, and yet nonetheless mission critical (sort of like the TSA). If the journal could pay you for your labor we would. However, our funding streams are drying up as fast as you can say Academia.edu.

Peer evaluation is a vital cog in the machinery of true learning and rigorous scholarship. This post comes to reflect on the best of its practitioners, to guide the novice, and to chasten the sinner.

Carry on.


Part II: How to Read Peer Evaluations
