poster

Peer Review Congress 2022

September 09, 2022

Chicago, United States

Comparison of Evaluations of Grant Proposals With and Without Numerical Scoring Submitted to Marie Skłodowska-Curie Actions Innovative Training Networks

keywords:

funding/grant peer review

editorial and peer review process

quality assurance

Objective The evaluation of European Union research grant proposals consists of 2 consecutive steps: (1) individual expert assessment and (2) consensus evaluation by multiple reviewers. The result is an evaluation summary report, and previous studies have established this approach as a stable procedure in the assessment of research grants.1,2 In 2020, numerical scores were replaced by textual comments in the individual expert assessment. The objective was to compare the linguistic characteristics of the comments for the Excellence, Impact, and Implementation criteria in evaluation reports of Marie Skłodowska-Curie Actions’ Innovative Training Networks (ITN) proposals submitted in 2019 and 2020, to assess whether the removal of numerical scoring affected the structure of the textual comments in individual evaluation reports and the evaluation outcome.

Design In this observational study, for which data were collected in fall 2022, all ITN proposals submitted in 2019 and 2020 were considered. Information was collected on proposal scores and outcome, evaluation panel, and the textual comments of the individual expert evaluations for all proposals submitted to each call. Linguistic characteristics of the experts’ comments were assessed with the Linguistic Inquiry and Word Count (LIWC) software, a program that counts words related to different psychological states and phenomena and returns a score that is the proportion of a specific category in the entire text. We used logistic regression to compare differences between the 2 call years, in which proposal variables (proposal status, word count for research excellence weaknesses, word count for implementation strengths, and negative affect levels for implementation strengths) were the factors and the year of the call was the criterion, with the significance level set at P < .001.
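A minimal sketch of this kind of model, written in Python with pandas and statsmodels, is shown below. All column names and the input file are hypothetical: the abstract does not disclose the dataset layout or the exact LIWC variable names, so this illustrates the modeling approach rather than the authors’ implementation.

```python
# Hedged sketch of the logistic regression described above.
# Column names and the input file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# One row per proposal; LIWC scores are proportions (% of words in a
# category) computed from the experts' free-text comments.
df = pd.read_csv("itn_proposals.csv")  # hypothetical file
df["year_2020"] = (df["call_year"] == 2020).astype(int)

# Call year (2019 vs 2020) is the criterion; proposal status and the
# linguistic measures are the factors, mirroring the abstract's model.
model = smf.logit(
    "year_2020 ~ accepted"
    " + wc_excellence_weaknesses"
    " + wc_implementation_strengths"
    " + negaffect_implementation_strengths",
    data=df,
).fit()
print(model.summary())  # per-factor Wald tests; threshold here: P < .001
```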

Results The number of proposals was similar in 2019 (n = 1554) and 2020 (n = 1503). The proportion of accepted proposals was slightly higher in 2020 (148 [9.85%]) than in 2019 (128 [8.24%]) (Table 37). In the logistic regression, experts’ comments on 2020 proposals differed from those on 2019 proposals in 2 linguistic domains. Comments on weaknesses in the Excellence section contained more words in the description of the proposal (Table 37). Comments on strengths in Implementation for 2020 proposals had slightly more words and a lower negative tone, ie, fewer words related to negative emotions, such as wrong, suffer, and sad (Table 37). All factors jointly explained only around 4% of the variance in the criterion.
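The roughly 4% figure reads like a pseudo-R² for the logistic model. As a hedged illustration only (the abstract does not state which variance-explained measure was used), McFadden’s pseudo-R² can be read off the fitted statsmodels results from the sketch above:

```python
# Hypothetical continuation of the sketch above. McFadden's pseudo-R^2 is
# one common "variance explained" analogue for logistic regression; the
# abstract does not specify which measure the authors used.
pseudo_r2 = 1 - model.llf / model.llnull  # equals model.prsquared
print(f"McFadden pseudo-R2: {pseudo_r2:.3f}")  # ~0.04 would match "around 4%"
```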



Conclusions Removing numerical scoring from the individual assessment stage of the evaluation of ITN proposals appears to have had little effect on the linguistic characteristics of the experts’ comments: all observed differences were marginal, and the analysis covered the entire proposal cohort.

References

1. Pina DG, Buljan I, Hren D, Marušić A. A retrospective analysis of the peer review of more than 75,000 Marie Curie proposals between 2007 and 2018. eLife. 2021;10:e59338. doi:10.7554/eLife.59338

2. Buljan I, Pina DG, Marušić A. Ethics issues identified by applicants and ethics experts in Horizon 2020 grant proposals. F1000Res. 2021;10:471. doi:10.12688/f1000research.52965.2

Conflict of Interest Disclosures David G. Pina is employed by the European Research Executive Agency. Ana Marušić occasionally serves as an ethics expert for the European Research Executive Agency and is an advisory board member of the Peer Review Congress but was not involved in the review of or the decision about this abstract. Ivan Buljan and Antonia Mijatović declare no competing interests.

Funding/Support This study was funded by the Croatian Science Foundation “Professionalism in Health – Decision making in practice and research” (ProDeM) under grant agreement IP-2019-04-4882.

Role of the Funder/Sponsor The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the abstract; or the decision to submit the abstract for presentation.

Disclaimer All views expressed in this abstract are strictly those of the authors and may in no circumstances be regarded as an official position of the European Research Executive Agency or the European Commission.
