Technical Paper
Comparison of Review Scores of Computer Science Conference Submissions With Cited and Uncited Reviewers
Keywords: citations and impact factor; peer review; bias
Objective Many anecdotes suggest that citing the works of potential reviewers is an effective (albeit unethical) way to increase the acceptance chances of a paper. However, previous attempts 1,2 to quantify this effect (citation bias) had small sample sizes and unaccounted confounding factors, such as paper quality (stronger papers had longer bibliographies) or reviewer expertise (cited reviewers had higher expertise). This work investigated whether more positive evaluations from reviewers are associated with the reviewers' own work being cited in the papers they review.
Design The study used data from 2 top-tier computer science
conferences: the 2021 Association for Computing Machinery
Conference on Economics and Computation (EC) and 2020
International Conference on Machine Learning (ICML). Both
conferences received full-length papers that underwent
rigorous review (similar to top journals in other areas). The
study analyzed anonymized observational data, and consent
collection was not required. The dependent variable of the
analysis was the overall score given by a reviewer to a paper
(between 1 and 5 in EC and 1 and 6 in ICML; higher meant
better). To investigate the association between the citation of a reviewer and their score, parametric (linear regression for EC and ICML) and nonparametric (permutation test with covariate matching for ICML) tests at significance level α = .05 were combined to control for various confounding factors, such as paper quality, genuinely missing citations, reviewer expertise, reviewer seniority, and reviewers' preferences for which papers to review. The approach comprised matching cited and uncited reviewers within each paper and then analyzing the differences in their scores. In this way, the paper quality confounder was mitigated because matched cited and uncited reviewers reviewed the same paper. Additionally, various attributes of reviewers (eg, their expertise in the paper's research area) were used to account for confounders associated with reviewer identity. Finally, the genuinely missing citation confounder was accounted for by excluding papers in which an uncited reviewer genuinely lowered their evaluation because the paper failed to cite the reviewer's relevant past work.
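As a concrete illustration of the nonparametric test's core idea, the following is a minimal Python sketch, not the study's actual code: it compares cited and uncited reviewers of the same paper and builds a null distribution by shuffling citation labels within each paper. The data layout and all names (papers, within_paper_gap, permutation_test) are hypothetical, and the sketch omits the matching on reviewer attributes (eg, expertise and seniority) that the actual analysis performed.

```python
import numpy as np

rng = np.random.default_rng(0)

def within_paper_gap(papers):
    # Mean within-paper score gap: average cited-reviewer score minus
    # average uncited-reviewer score, averaged over papers that have
    # at least one reviewer of each kind.
    gaps = []
    for scores, cited in papers:
        if 0 < cited.sum() < len(cited):
            gaps.append(scores[cited].mean() - scores[~cited].mean())
    return float(np.mean(gaps))

def permutation_test(papers, n_perm=10_000):
    # Null distribution: shuffle citation labels within each paper,
    # keeping the number of cited reviewers per paper fixed.
    observed = within_paper_gap(papers)
    null = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = [(s, rng.permutation(c)) for s, c in papers]
        null[i] = within_paper_gap(shuffled)
    p_value = (1 + np.sum(null >= observed)) / (1 + n_perm)  # one-sided
    return observed, p_value

# Toy data: each paper is a (scores, cited) pair, where `cited` marks
# reviewers whose own work the paper cites.
papers = [
    (np.array([5.0, 4.0, 3.0]), np.array([True, False, False])),
    (np.array([4.0, 4.0, 2.0]), np.array([True, True, False])),
    (np.array([3.0, 2.0]), np.array([True, False])),
]
gap, p = permutation_test(papers)
print(f"observed gap = {gap:.2f}, p = {p:.3f}")
```

Because labels are permuted only among reviewers of the same paper, across-paper differences in quality or bibliography length cannot produce a spurious gap under this null.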
Results Overall, 3 analyses were conducted, with sample
sizes ranging from 60 to 1031 papers and from 120 to 2757
reviewers’ evaluations. These analyses detected citation bias
in both venues and indicated that citation of a reviewer was
associated with an increase in their score (approximately 0.23
point on a 5-point scale). For reference, a 1-point increase in the score given by a single reviewer would improve the position of a paper by 11% on average.
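For intuition about this ranking statistic (not a reproduction of the study's computation, whose 11% figure comes from the conference data), the following hedged Python simulation on hypothetical data shows one way to measure the change in a paper's position when a single reviewer raises their score by 1 point.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 1000 papers, 3 reviewers each, scores on a
# 1-to-5 scale; papers are ranked by their mean reviewer score.
scores = rng.integers(1, 6, size=(1000, 3)).astype(float)

def position(means, idx):
    # Fraction of papers ranked strictly below paper `idx`.
    return (means < means[idx]).mean()

idx = 0  # track an arbitrary paper
before = position(scores.mean(axis=1), idx)

scores[idx, 0] = min(scores[idx, 0] + 1.0, 5.0)  # one reviewer adds 1 point
after = position(scores.mean(axis=1), idx)

print(f"paper position improved by {100 * (after - before):.1f} percentage points")
```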
Conclusions To improve peer review, it is important to understand the biases present and their magnitudes. This work 3 studied citation bias and raised the important open problem of mitigating this bias. The reader should be aware of the observational nature of this study when interpreting the results.
References
1. Beverly R, Allman M. Findings and implications
from data mining the IMC review process. ACM SIGCOMM
Computer Communication Review. 2012;43(1):22-29.
doi:10.1145/2427036.2427040
2. Sugimoto CR, Cronin B. Citation gamesmanship: testing for evidence of ego bias in peer review. Scientometrics. 2013;95(3):851-862. doi:10.1007/s11192-012-0845-z
3. Stelmakh I, Rastogi C, Liu R, Echenique F, Chawla S,
Shah NB. Cite-seeing and reviewing: a study on citation bias
in peer review. arXiv. Preprint posted online March 31, 2022.
doi:10.48550/arXiv.2203.17239
Conflict of Interest Disclosures Ivan Stelmakh reported
interning for Google. Charvi Rastogi reported interning for IBM. No
other disclosures were reported.
Funding/Support Ivan Stelmakh, Charvi Rastogi, Ryan Liu, and
Nihar B. Shah were supported by a National Science Foundation
(NSF) CAREER award (1942124), which supports research on the
fundamentals of learning from people with applications to peer
review. Federico Echenique was supported by NSF grants SES
1558757 and CNS 1518941. Charvi Rastogi was partially supported by
a J. P. Morgan artificial intelligence research fellowship.
Role of the Funder/Sponsor The funders did not play a role in any of the following: design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the abstract; and decision to submit the abstract for presentation.
Acknowledgement We appreciate the efforts of all reviewers involved in the review process of the 2021 Association for
Computing Machinery Conference on Economics and Computation and 2020 International Conference on Machine Learning. We thank Valerie Ventura, PhD, Carnegie Mellon University, for useful comments on the design of the analysis procedure.
Additional Information Nihar B. Shah is a co–corresponding author.