
Understanding peer review of software engineering papers


Abstract

Context

Peer review is a key activity intended to preserve the quality and integrity of scientific publications. However, in practice it is far from perfect.

Objective

We aim to understand how reviewers, including those who have won awards for reviewing, perform their reviews of software engineering papers, in order to identify both what makes a good reviewing approach and what makes a good paper.

Method

We first conducted a series of interviews with recognized reviewers in the software engineering field. We then used the results of those interviews to develop a questionnaire, which we administered as an online survey to reviewers from well-respected venues covering a range of software engineering disciplines, some of whom had won awards for their reviewing efforts.

Results

We analyzed the responses from the interviews and from 175 reviewers who completed the online survey (including both reviewers who had won awards and those who had not). We report several descriptive results, including the following: nearly half of award-winners (45%) review 20+ conference papers a year, compared with 28% of non-award-winners, and the majority of reviewers (88%) spend more than two hours on a journal review. We also report qualitative results. Our findings suggest that the most important criterion for a good review is that it be factual and helpful, ranking above others such as being detailed or kind. The features of papers that most often result in positive reviews are a clear and well-supported validation, an interesting problem, and novelty. Conversely, negative reviews tend to result from papers with a mismatch between the method and the claims and from papers with overly grandiose claims. Further insights include, but are not limited to, that reviewers view data availability and its consistency as important, and that authors need to make the contribution of their work very clear in their paper.

Conclusions

Based on the insights gained through our study, we conclude our work by compiling a proto-guideline for reviewing. We hope that our work contributes to the ongoing debate and to contemporary efforts to further improve peer review models.



Notes

  1. http://tiny.cc/rosefest

  2. A good tutorial on data disclosure when using a double-blind review process is provided by Daniel Graziotin: https://tinyurl.com/DBDisclose.

  3. See, e.g., the artifact evaluation track of ICSE 2021, https://doi.org/10.6084/m9.figshare.14123639, or the open science initiative of the EMSE journal, https://github.com/emsejournal/openscience.

  4. https://github.com/acmsigsoft/EmpiricalStandards

  5. https://github.com/researchart/patterns/blob/master/standards/artifact.md

  6. http://www.icse-conferences.org/reports.html

  7. https://neuripsconf.medium.com/what-we-learned-from-neurips-2020-reviewing-process-e24549eea38f

  8. https://cs.gmu.edu/~offutt/stvr/17-3-sept2007.html

  9. https://peerj.com/articles/cs-111/reviews/

  10. https://openreview.net/forum?id=rklXaoAcFX&noteId=HyeF-4_9hm

  11. https://doi.org/10.6084/m9.figshare.5086357.v1

  12. https://www.slideshare.net/aserebrenik/peer-reviews-119010210

  13. https://reviewqualitycollector.org

  14. http://www.inf.fu-berlin.de/w/SE/ReviewQualityCollectorHome

  15. http://cscw.acm.org/2019/CSCW-2020-changes.html

  16. http://www.icse-conferences.org/reports.html

  17. https://conf.researchr.org/track/icse-2021/icse-2021-papers#Call-for-Papers


Author information


Corresponding author

Correspondence to Neil A. Ernst.

Additional information

Communicated by: Romain Robbes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Ernst, N.A., Carver, J.C., Mendez, D. et al. Understanding peer review of software engineering papers. Empir Software Eng 26, 103 (2021). https://doi.org/10.1007/s10664-021-10005-5


