poster

Peer Review Congress 2022

September 11, 2022

Chicago, United States

Sorting Out Journals: Quality Criteria, Ranking Principles, and Tensions of Chinese Scientific Journal Lists

keywords:

publication metrics and performance indicators

citations and impact factor

predatory journals

Objective Journal lists are instruments for categorizing, comparing, and assessing scholarly publications. In China, they have been developed to counter the dominance of the journal impact factor in journal and research evaluation. Avoiding the misleading precision of quantitative indicators, these simpler ordinal or nominal lists have been established, evaluated, used, and debated by different users of scholarly publishing channels worldwide. This study investigated the remarkable proliferation of journal lists in China and analyzed their underlying values, quality criteria, and ranking principles, examining, in contrast with well-established international lists, the concerns specific to the Chinese research policy and publishing system.

Design This qualitative study was based on an analysis of policy documents on Chinese research and publishing policy and on specific list-making initiatives. It covered 20 disqualifying journal lists (2 authoritative lists and 18 lists from universities and hospitals) and 2 qualifying lists (Table 84). The document analysis was complemented by interviews with the makers of these lists to investigate the list-making process. The study focused on Chinese journals in science, technology, engineering, and mathematics (STEM).

Results An overview of current Chinese journal lists highlighted several key distinctions and contrasts in listing criteria. Disqualifying lists of “bad journals” reflect concerns over inferior research publications but also over the resulting drain on public resources. For example, the National Science Library of the Chinese Academy of Sciences uses 7 criteria to compile the Early Warning List of International Journals to inform researchers’ publishing choices and publishers’ journal quality management. Qualifying lists of “good journals” are based on criteria valued in research policy and typically sort journals into ordinal quality levels. The considerations behind the development of these lists reflect specific policy concerns. For example, the Chinese STM Journal Excellence Action Plan generated a journal funding list as a reference for public investment in journals, while the High-Quality STEM Journal Catalogue Graded by Fields comprises evaluative lists of domestic and international journals for use in academic evaluation. Contrasting concerns and inaccuracies led to contradictions in the binary qualify/disqualify logic, as demonstrated by a journal that appeared on both a qualifying and a disqualifying list. Similarly, different qualifying lists provided different assessments of what constitutes a good or excellent journal.

Conclusions The administrative logic of state-led Chinese research and publishing policy ascribes worth to scientific journals according to specific national and institutional needs. These needs involve the challenges of allocating public resources, shifting away from output-dominated research evaluation, curbing research misconduct, and balancing national research needs against international standards. Chinese journal lists therefore apply quality criteria in a way that differs from other journal lists. However, journal lists may not always be able to represent both general journal quality and quality for specific purposes.

References

  1. Pölönen J, Guns R, Kulczycki E, Sivertsen G, Engels TCE. National lists of scholarly publication channels: an overview and recommendations for their construction and maintenance. J Data Inf Sci. 2021;6(1):50. doi:10.2478/jdis-2021-0004
  2. Zhang L, Sivertsen G. The new research assessment reform in China and its implementation. Scholarly Assessment Reports. 2020;2(1):7. doi:10.29024/sar.15
  3. Fochler M, Felt U, Müller R. Unsustainable growth, hyper-competition, and worth in life science research: narrowing evaluative repertoires in doctoral and postdoctoral scientists’ work and lives. Minerva. 2016;54(2):175-200. doi:10.1007/s11024-016-9292-y

Conflict of Interest Disclosures None reported.

Funding/Support The work reported in this abstract is funded by the China Scholarship Council.
