{"title":"Textual features of peer review predict top-cited papers: An interpretable machine learning perspective","authors":"Zhuanlan Sun","doi":"10.1016/j.joi.2024.101501","DOIUrl":null,"url":null,"abstract":"<div><p>Peer review is crucial in improving the quality and reliability of scientific research. However, the mechanisms through which peer review practices ensure papers become top-cited papers (TCPs) after publication are not well understood. In this study, by collecting a data set containing 13, 066 papers published between 2016 and 2020 from <em>Nature communications</em> with open peer review reports, we aim to examine how textual features embedded within the peer review reports of papers that reflect the reviewers’ emotions may predict the papers to be TCPs. We compiled a list of 15 textual features and classified them into three categories: peer review features, linguistic features, and sentiment features. We then chose the XGBoost machine learning model with the best performance in predicting TCPs, and utilized the explainable artificial intelligence techniques SHAP to interpret the role of feature importance on the prediction results. The distribution of feature importance ranking results demonstrates that sentiment features play a crucial role in determining papers’ potential to be highly cited. This conclusion still holds, even when the ranking of the feature importance changes in the subgroup analysis of dividing the samples into four disciplines (biological sciences, health sciences, physical sciences, and earth and environmental sciences), as well as two groups based on whether reviewers’ identities were revealed. This research emphasizes the textual features retrieved from peer review reports that play role in improving manuscript quality can predict the post-publication research impact.</p></div>","PeriodicalId":48662,"journal":{"name":"Journal of Informetrics","volume":null,"pages":null},"PeriodicalIF":3.4000,"publicationDate":"2024-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Informetrics","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1751157724000142","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0
Abstract
Peer review is crucial for improving the quality and reliability of scientific research. However, the mechanisms through which peer review practices help papers become top-cited papers (TCPs) after publication are not well understood. In this study, using a data set of 13,066 papers with open peer review reports published between 2016 and 2020 in Nature Communications, we examine how textual features embedded in the peer review reports, which reflect the reviewers' emotions, may predict whether papers become TCPs. We compiled a list of 15 textual features and classified them into three categories: peer review features, linguistic features, and sentiment features. We then selected the XGBoost machine learning model, which showed the best performance in predicting TCPs, and used the explainable artificial intelligence technique SHAP to interpret the contribution of each feature to the prediction results. The distribution of feature importance rankings demonstrates that sentiment features play a crucial role in determining papers' potential to be highly cited. This conclusion still holds even when the feature importance ranking changes in subgroup analyses that divide the sample into four disciplines (biological sciences, health sciences, physical sciences, and earth and environmental sciences) and into two groups based on whether reviewers' identities were revealed. This research emphasizes that textual features retrieved from peer review reports, which play a role in improving manuscript quality, can predict post-publication research impact.
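The abstract names the modeling pipeline (an XGBoost classifier explained with SHAP feature attributions) but does not include code. Below is a minimal sketch of that kind of pipeline, not the authors' implementation: the input file `review_features.csv`, the label column `is_tcp`, and the hyperparameters are hypothetical placeholders standing in for the 15 textual features and the TCP label described in the paper.

```python
# Illustrative sketch: predict TCP status from peer-review textual features
# with XGBoost, then rank features by SHAP attributions.
import pandas as pd
import xgboost as xgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical table: one row per paper, 15 textual features plus a binary TCP label.
df = pd.read_csv("review_features.csv")
feature_cols = [c for c in df.columns if c != "is_tcp"]
X, y = df[feature_cols], df["is_tcp"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Gradient-boosted tree classifier; hyperparameters here are arbitrary, not tuned.
model = xgb.XGBClassifier(
    n_estimators=300, max_depth=4, learning_rate=0.05, eval_metric="auc"
)
model.fit(X_train, y_train)
print("Test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# SHAP TreeExplainer attributes each prediction to the input features;
# the summary plot orders features by mean absolute SHAP value (global importance).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```

In this setup, comparing the mean absolute SHAP values of the sentiment, linguistic, and peer review feature groups is one way to reproduce the kind of importance ranking the abstract reports.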
Journal introduction:
Journal of Informetrics (JOI) publishes rigorous high-quality research on quantitative aspects of information science. The main focus of the journal is on topics in bibliometrics, scientometrics, webometrics, patentometrics, altmetrics and research evaluation. Contributions studying informetric problems using methods from other quantitative fields, such as mathematics, statistics, computer science, economics and econometrics, and network science, are especially encouraged. JOI publishes both theoretical and empirical work. In general, case studies, for instance a bibliometric analysis focusing on a specific research field or a specific country, are not considered suitable for publication in JOI, unless they contain innovative methodological elements.