VIDEO DOI: https://doi.org/10.48448/8m22-z377

poster

Peer Review Congress 2022

September 11, 2022

Chicago, United States

Utility of Machine Learning in Predicting Success of a Peer Review Paper From Peer Reviewer Scores

keywords:

peer review process and models

editorial and peer review process

artificial intelligence

Objective To investigate the utility of machine learning algorithms in predicting the likelihood of publication of a manuscript from peer reviewer scores.

Design In this cross-sectional study, 263 manuscripts that had undergone peer review between 2017 and 2021 and had received a final accept or reject decision were selected; manuscripts with incomplete peer reviewer score data were excluded. Data were collected on each manuscript’s peer reviewer scores and the journal’s final decision. Peer reviewer scores comprised ratings by 2 reviewers per manuscript on originality, quality, interest, overall rating, and priority for publishing. Two-thirds of the data (174 manuscripts) were used for training the algorithms and one-third (89 manuscripts) for testing them. Microsoft Excel 2019 was used to preprocess the data, and Weka version 3.9.5 was used for model assessment. Training and testing were conducted with various machine learning algorithms; the model with the highest accuracy in predicting the likelihood of publication would be further improved and deployed.
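The abstract includes no code; modeling was done in the Weka workbench. Purely for illustration, a minimal sketch of the described pipeline might look as follows in Python with scikit-learn. The file name, column names, and the four candidate classifiers are assumptions, not details from the study (the algorithms actually evaluated are listed in Table 27).

```python
# Illustrative sketch only: the study used Weka 3.9.5, not scikit-learn.
# File name and column names below are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Two reviewers each rate originality, quality, interest, overall rating,
# and priority for publishing, giving 10 features per manuscript.
FEATURES = [f"{r}_{s}" for r in ("rev1", "rev2")
            for s in ("originality", "quality", "interest", "overall", "priority")]

df = pd.read_csv("manuscript_scores.csv")          # hypothetical file
df = df.dropna(subset=FEATURES + ["decision"])     # exclude incomplete score data

X, y = df[FEATURES], df["decision"]                # decision: "accept" / "reject"
# Two-thirds of the data for training, one-third for testing, as in the study.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1/3, random_state=0, stratify=y)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
}
for name, clf in candidates.items():
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: {acc:.1%} test accuracy")
```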

Results Of the 263 manuscripts in the final analysis, 134 were accepted for publication and 129 were rejected. The accuracy of the various machine learning algorithms in predicting the likelihood of publication ranged from 58.4% to 65.2% (Table 27).



Conclusions A machine learning model to reduce peer review workload would help ensure that scarce peer review resources are used efficiently by optimizing desk rejections. Such a model would promote efficiency in the publishing process and improve overall journal output and author satisfaction. To implement such a model, in-house reviewers and the editorial team could score manuscripts and assess their performance before advancing them for external peer review. To improve the model’s performance and reduce bias, the selection of data variables used to score manuscripts would need to be refined, with a greater focus on objective variables. Other limitations included the small sample size and possible interrater variability in scoring individual manuscripts.
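As a purely hypothetical sketch of the triage step proposed above, an editorial team could advance a manuscript to external peer review only when the trained model’s predicted probability of acceptance clears an agreed threshold. The helper and the 0.3 threshold below are illustrative assumptions, continuing the earlier sketch:

```python
import pandas as pd

# Hypothetical triage helper continuing the sketch above; the 0.3 threshold
# is an illustrative assumption, not a value from the study.
model = candidates["random forest"]                # any classifier trained above
accept_idx = list(model.classes_).index("accept")  # column of the accept class

def triage(in_house_scores, threshold=0.3):
    """in_house_scores: the 10 in-house ratings, in FEATURES order."""
    row = pd.DataFrame([in_house_scores], columns=FEATURES)
    p_accept = model.predict_proba(row)[0, accept_idx]
    return "advance to peer review" if p_accept >= threshold else "desk reject"

print(triage([4, 4, 3, 4, 3, 5, 4, 4, 4, 3]))
```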

References 1. Checco A, Bracciale L, Loreti P, Pinfield S, Bianchi G. AI-assisted peer review. Humanit Soc Sci Commun. 2021;8(1):25. doi:10.1057/s41599-020-00703-8

2. Heaven D. AI peer reviewers unleashed to ease publishing grind. Nature. 2018;563:609-610. doi:10.1038/d41586-018-07245-9

Conflict of Interest Disclosures Dr Kigera is a member of the Peer Review Congress Advisory Board but was not involved in the review or decision for this abstract.
