technical paper
Evaluation of Editors' Abilities to Estimate Citation Potential of Research Manuscripts Submitted to The BMJ
keywords: bibliometrics; editorial and peer review process; informatics; publication metrics and performance indicators; scientometrics
Objective To evaluate editors’ ability to estimate how frequently a cohort of research submissions would be cited after publication.
Design Research manuscripts submitted to The BMJ, sent
for peer review, and subsequently scheduled for discussion at
an editorial meeting between August 27, 2015, and December
29, 2016, were rated independently by attending editors for
citation potential prior to discussion at meetings. For each
manuscript, editors indicated how many citations they thought it would generate in the calendar year of first publication plus the first calendar year after publication, relative to the median number of citations for a paper published in The BMJ at the time. Editors could
choose from the following 4 categories: no citations; below
The BMJ average number of citations (<10); around The BMJ
average number of citations (10-17); and more than The BMJ
average number of citations (>17). Google, PubMed,
ResearchGate, institutional websites, ORCID, and trial
registries were searched for subsequent journal publications
using key information submitted by authors. Actual citations
generated were extracted from the Web of Science (WOS)
Core Collection on May 10, 2022. To ensure citation counts were complete, the analysis was restricted to articles that had been published by December 31, 2019, or that remained unpublished at the time of analysis.
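In code terms, the 4-category scale reduces to a threshold mapping on a manuscript’s citation count over the publication year plus the following calendar year. The Python sketch below is our illustration, assuming only the thresholds stated above; the function name and labels are hypothetical, not taken from the study.

def citation_category(citations: int) -> str:
    """Map a manuscript's citation count (publication year plus the
    following calendar year) to the study's 4-level rating scale."""
    if citations == 0:
        return "no citations"
    if citations < 10:
        return "below The BMJ average (<10)"
    if citations <= 17:
        return "around The BMJ average (10-17)"
    return "above The BMJ average (>17)"

# Example: a manuscript cited 12 times falls in the middle category.
assert citation_category(12) == "around The BMJ average (10-17)"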
Results Of 530 manuscripts, 508 were published as full-length articles and indexed in the WOS; 22 were not published as full-length articles (1 published only as an abstract, 1 only as a preprint, 1 substantially changed, and 19 not found). Among the 507 manuscripts published by the end of 2019, the median number of citations in the year of publication plus the following year was 8 (IQR, 4-16; range, 0-150). A total of 291 manuscripts (57%) generated below
The BMJ average number of citations (<10), 102 (20%)
generated around The BMJ average number of citations
(10-17), and 114 (23%) generated above The BMJ average
number of citations (>17). The number of citations was higher for accepted manuscripts (median, 12; IQR, 7-24) than for rejected manuscripts (median, 5; IQR, 3-10.75). For each of the 10 editors, actual citation counts tended to increase with the estimated citation category, but with considerable variation within categories; 9 of 10 editors identified the correct citation category for no more than 50% of manuscripts (across editors, the proportion correct ranged from 31% to 54%). A κ analysis
revealed that agreement between the estimated and actual
categories for all editors was slight or fair (κ value range,
0.02-0.27). Editors frequently rated papers that were highly cited as having low citation potential and vice versa. A secondary analysis using citations in the first 2 years after publication showed similar results.
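The κ statistic measures agreement between two sets of category assignments beyond what chance alone would produce. As a rough illustration (not the authors’ analysis code; the ratings below are invented), the estimated-versus-actual comparison could be computed with scikit-learn’s cohen_kappa_score:

from sklearn.metrics import cohen_kappa_score

# The study's 4 ordered categories (abbreviated here):
# none, below (<10), around (10-17), above (>17) The BMJ average.
categories = ["none", "below", "around", "above"]

# Invented ratings for one editor: estimated vs actual category.
estimated = ["below", "around", "above", "below", "around", "below", "above", "below"]
actual = ["below", "below", "above", "around", "around", "below", "below", "above"]

kappa = cohen_kappa_score(estimated, actual, labels=categories)
print(f"kappa = {kappa:.2f}")  # 0.20 here; 0.00-0.20 is "slight" agreement

Under the Landis and Koch benchmarks, values of 0.00 to 0.20 indicate slight agreement and 0.21 to 0.40 fair agreement, which covers the 0.02 to 0.27 range reported above.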
Conclusions Many editors are motivated to publish highly citable manuscripts because citations determine a journal’s impact factor; this motivation can bias which articles are published and where. This study found that The BMJ editors were poor at estimating the citation potential of the manuscripts they accepted or rejected.
Conflict of Interest Disclosures Sara Schroter, Wim Weber,
and Elizabeth Loder are employed by or seconded to The BMJ.
Jack Wilkinson holds statistical or methodologic editor roles for
Cochrane Gynaecology and Fertility, BJOG: An International
Journal of Obstetrics and Gynaecology, Reproduction and
Fertility, and Fertility and Sterility. Jamie J. Kirkham is a
statistical editor for The BMJ.
Acknowledgments We thank Nillanee Nehrujee for assistance with data collection for articles accepted by The BMJ, and The BMJ research editors for their participation.
Additional Information Jack Wilkinson is a co–corresponding
author.