A guide for social science journal editors on easing into open science.
Pub Date: 2024-02-16 | DOI: 10.1186/s41073-023-00141-5
Priya Silverstein, Colin Elman, Amanda Montoya, Barbara McGillivray, Charlotte R Pennington, Chase H Harrison, Crystal N Steltenpohl, Jan Philipp Röer, Katherine S Corker, Lisa M Charron, Mahmoud Elsherif, Mario Malicki, Rachel Hayes-Harb, Sandra Grinschgl, Tess Neal, Thomas Rhys Evans, Veli-Matti Karhulahti, William L D Krenzer, Anabel Belaus, David Moreau, Debora I Burin, Elizabeth Chin, Esther Plomp, Evan Mayo-Wilson, Jared Lyle, Jonathan M Adler, Julia G Bottesini, Katherine M Lawson, Kathleen Schmidt, Kyrani Reneau, Lars Vilhuber, Ludo Waltman, Morton Ann Gernsbacher, Paul E Plonski, Sakshi Ghai, Sean Grant, Thu-Mai Christian, William Ngiam, Moin Syed
Journal editors have a large amount of power to advance open science in their respective fields by incentivising and mandating open policies and practices at their journals. The Data PASS Journal Editors Discussion Interface (JEDI, an online community for social science journal editors: www.dpjedi.org) has collated several resources on embedding open science in journal editing (www.dpjedi.org/resources). However, an editor new to open science practices can find it overwhelming to know where to start. For this reason, we created a guide for journal editors on how to get started with open science. The guide outlines steps that editors can take to implement open policies and practices within their journal, and goes through the what, why, how, and worries of each policy and practice. This manuscript introduces and summarizes the guide (full guide: https://doi.org/10.31219/osf.io/hstcx).
{"title":"A guide for social science journal editors on easing into open science.","authors":"Priya Silverstein, Colin Elman, Amanda Montoya, Barbara McGillivray, Charlotte R Pennington, Chase H Harrison, Crystal N Steltenpohl, Jan Philipp Röer, Katherine S Corker, Lisa M Charron, Mahmoud Elsherif, Mario Malicki, Rachel Hayes-Harb, Sandra Grinschgl, Tess Neal, Thomas Rhys Evans, Veli-Matti Karhulahti, William L D Krenzer, Anabel Belaus, David Moreau, Debora I Burin, Elizabeth Chin, Esther Plomp, Evan Mayo-Wilson, Jared Lyle, Jonathan M Adler, Julia G Bottesini, Katherine M Lawson, Kathleen Schmidt, Kyrani Reneau, Lars Vilhuber, Ludo Waltman, Morton Ann Gernsbacher, Paul E Plonski, Sakshi Ghai, Sean Grant, Thu-Mai Christian, William Ngiam, Moin Syed","doi":"10.1186/s41073-023-00141-5","DOIUrl":"10.1186/s41073-023-00141-5","url":null,"abstract":"<p><p>Journal editors have a large amount of power to advance open science in their respective fields by incentivising and mandating open policies and practices at their journals. The Data PASS Journal Editors Discussion Interface (JEDI, an online community for social science journal editors: www.dpjedi.org ) has collated several resources on embedding open science in journal editing ( www.dpjedi.org/resources ). However, it can be overwhelming as an editor new to open science practices to know where to start. For this reason, we created a guide for journal editors on how to get started with open science. The guide outlines steps that editors can take to implement open policies and practices within their journal, and goes through the what, why, how, and worries of each policy and practice. This manuscript introduces and summarizes the guide (full guide: https://doi.org/10.31219/osf.io/hstcx ).</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10870631/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139742810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Librarians and information specialists as methodological peer-reviewers: a case-study of the International Journal of Health Governance.
Pub Date: 2024-01-19 | DOI: 10.1186/s41073-023-00142-4
Irina Ibragimova, Helen Fulbright
Background: The objectives of this study were to analyze the impact of including librarians and information specialists as methodological peer-reviewers. We sought to determine if and how librarians' comments differed from subject peer-reviewers'; whether there were differences in the implementation of their recommendations; how this impacted editorial decision-making; and the perceived utility of librarian peer-review by librarians and authors.
Methods: We used a mixed-methods approach, conducting a qualitative analysis of reviewer reports, author replies and editors' decisions of submissions to the International Journal of Health Governance. Our content analysis categorized 16 thematic areas, so that methodological and subject peer-reviewers' comments, decisions and rejection rates could be compared. Categories were based on the standard areas covered in peer-review (e.g., title, originality, etc.) as well as additional in-depth categories relating to the methodology (e.g., search strategy, reporting guidelines, etc.). We developed and used criteria to judge reviewers' perspectives and code their comments. We conducted two online multiple-choice surveys which were qualitatively analyzed: one of methodological peer-reviewers' perceptions of peer-reviewing, the other of published authors' views on the suggested revisions.
Results: Methodological peer-reviewers assessed 13 literature reviews submitted between September 2020 and March 2023. A total of 55 reviewer reports were collected: 25 from methodological peer-reviewers, 30 from subject peer-reviewers (mean: 4.2 reviews per manuscript). Methodological peer-reviewers made more comments on methodologies, and authors were more likely to implement their suggested changes (52 of 65 changes, vs. 51 of 82 for subject peer-reviewers); they were also more likely to recommend rejection (seven vs. four times, respectively). Where recommendations to editors differed, journal editors were more likely to follow methodological peer-reviewers (nine vs. three times, respectively). The survey of published authors (87.5% response rate) revealed four of seven found comments on methodologies helpful. The librarians' survey (66.5% response rate) revealed that those who conducted peer-reviews felt they improved the quality of publications.
Conclusions: Librarians can enhance evidence synthesis publications by ensuring methodologies have been conducted and reported appropriately. Their recommendations helped authors revise submissions and facilitated editorial decision-making. Further research could determine whether sharing librarians' reviews with subject peer-reviewers and journal editors could help them better understand evidence synthesis methodologies.
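The implementation rates reported in the Results invite a simple proportion comparison. As a minimal sketch only (the abstract describes a qualitative content analysis and does not state that any significance test was run), the following Python snippet computes the two rates from the reported counts and applies Fisher's exact test:

```python
from scipy.stats import fisher_exact

# Suggested changes implemented vs. not implemented, by reviewer type
# (counts taken from the Results above)
table = [
    [52, 65 - 52],  # methodological peer-reviewers: 52 of 65 implemented
    [51, 82 - 51],  # subject peer-reviewers: 51 of 82 implemented
]

odds_ratio, p_value = fisher_exact(table)
print(f"methodological reviewers: {52/65:.0%} implemented")   # 80%
print(f"subject reviewers:        {51/82:.0%} implemented")   # 62%
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```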
{"title":"Librarians and information specialists as methodological peer-reviewers: a case-study of the International Journal of Health Governance.","authors":"Irina Ibragimova, Helen Fulbright","doi":"10.1186/s41073-023-00142-4","DOIUrl":"10.1186/s41073-023-00142-4","url":null,"abstract":"<p><strong>Background: </strong>Objectives of this study were to analyze the impact of including librarians and information specialist as methodological peer-reviewers. We sought to determine if and how librarians' comments differed from subject peer-reviewers'; whether there were differences in the implementation of their recommendations; how this impacted editorial decision-making; and the perceived utility of librarian peer-review by librarians and authors.</p><p><strong>Methods: </strong>We used a mixed method approach, conducting a qualitative analysis of reviewer reports, author replies and editors' decisions of submissions to the International Journal of Health Governance. Our content analysis categorized 16 thematic areas, so that methodological and subject peer-reviewers' comments, decisions and rejection rates could be compared. Categories were based on the standard areas covered in peer-review (e.g., title, originality, etc.) as well as additional in-depth categories relating to the methodology (e.g., search strategy, reporting guidelines, etc.). We developed and used criteria to judge reviewers' perspectives and code their comments. We conducted two online multiple-choice surveys which were qualitatively analyzed: one of methodological peer-reviewers' perceptions of peer-reviewing, the other of published authors' views on the suggested revisions.</p><p><strong>Results: </strong>Methodological peer-reviewers assessed 13 literature reviews submitted between September 2020 and March 2023. 55 reviewer reports were collected: 25 from methodological peer-reviewers, 30 from subject peer-reviewers (mean: 4.2 reviews per manuscript). Methodological peer-reviewers made more comments on methodologies, with authors more likely to implement their changes (52 of 65 changes, vs. 51 of 82 by subject peer-reviewers); they were also more likely to reject submissions (seven vs. four times, respectively). Where there were differences in recommendations to editors, journal editors were more likely to follow methodological peer-reviewers (nine vs. three times, respectively). The survey of published authors (87.5% response rate) revealed four of seven found comments on methodologies helpful. Librarians' survey responses (66.5% response rate) revealed those who conducted peer-reviews felt they improved quality of publications.</p><p><strong>Conclusions: </strong>Librarians can enhance evidence synthesis publications by ensuring methodologies have been conducted and reported appropriately. Their recommendations helped authors revise submissions and facilitated editorial decision-making. 
Further research could determine if sharing reviews with subject peer-reviewers and journal editors could benefit them in better understanding of evidence synthesis methodologies.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10797710/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139491777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The quizzical failure of a nudge on academic integrity education: a randomized controlled trial.
Pub Date: 2023-11-30 | DOI: 10.1186/s41073-023-00139-z
Aurélien Allard, Anna Catharina Vieira Armond, Mads Paludan Goddiksen, Mikkel Willum Johansen, Hillar Loor, Céline Schöpfer, Orsolya Varga, Christine Clavien
Background: Studies on academic integrity reveal high rates of plagiarism and cheating among students. We have developed an online teaching tool, Integrity Games (https://integgame.eu/), that uses serious games to teach academic integrity. In this paper, we test the impact of a soft intervention - a short quiz - that was added to the Integrity Games website to increase users' interest in learning about integrity. Based on general principles of behavioral science, our quiz highlighted the intricacy of integrity issues, generated social comparisons, and produced personalized advice. We expected that these interventions would create a need for knowledge and encourage participants to spend more time on the website.
Methods: In a randomized controlled trial involving N = 405 students from Switzerland and France, half of the users had to take a short quiz before playing the serious games, while the other half could directly play the games. We measured how much time they spent playing the games, and, in a post-experimental survey, we measured their desire to learn about integrity issues and their understanding of integrity issues.
Results: Contrary to our expectations, the quiz had a negative impact on time spent playing the serious games. Moreover, the quiz did not increase participants' desire to learn about integrity issues or their overall understanding of the topic.
Conclusions: Our quiz did not have any measurable impact on curiosity about or understanding of integrity issues, and may have had a negative impact on time spent on the Integrity Games website. Our results highlight the difficulty of implementing behavioral insights in a real-world setting.
Trial registration: The study was preregistered at https://osf.io/73xty.
{"title":"The quizzical failure of a nudge on academic integrity education: a randomized controlled trial.","authors":"Aurélien Allard, Anna Catharina Vieira Armond, Mads Paludan Goddiksen, Mikkel Willum Johansen, Hillar Loor, Céline Schöpfer, Orsolya Varga, Christine Clavien","doi":"10.1186/s41073-023-00139-z","DOIUrl":"https://doi.org/10.1186/s41073-023-00139-z","url":null,"abstract":"<p><strong>Background: </strong>Studies on academic integrity reveal high rates of plagiarism and cheating among students. We have developed an online teaching tool, Integrity Games ( https://integgame.eu/ ), that uses serious games to teach academic integrity. In this paper, we test the impact of a soft intervention - a short quiz - that was added to the Integrity Games website to increase users' interest in learning about integrity. Based on general principles of behavioral science, our quiz highlighted the intricacy of integrity issues, generated social comparisons, and produced personalized advice. We expected that these interventions would create a need for knowledge and encourage participants to spend more time on the website.</p><p><strong>Methods: </strong>In a randomized controlled trial involving N = 405 students from Switzerland and France, half of the users had to take a short quiz before playing the serious games, while the other half could directly play the games. We measured how much time they spent playing the games, and, in a post-experimental survey, we measured their desire to learn about integrity issues and their understanding of integrity issues.</p><p><strong>Results: </strong>Contrary to our expectations, the quiz had a negative impact on time spent playing the serious games. Moreover, the quiz did not increase participants' desire to learn about integrity issues or their overall understanding of the topic.</p><p><strong>Conclusions: </strong>Our quiz did not have any measurable impact on curiosity or understanding of integrity issues, and may have had a negative impact on time spent on the Integrity games website. Our results highlight the difficulty of implementing behavioral insights in a real-world setting.</p><p><strong>Trial registration: </strong>The study was preregistered at https://osf.io/73xty .</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10688455/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138464957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Peer reviewers' willingness to review, their recommendations and quality of reviews after the Finnish Medical Journal switched from single-blind to double-blind peer review.
Pub Date: 2023-10-24 | DOI: 10.1186/s41073-023-00140-6
Piitu Parmanne, Joonas Laajava, Noora Järvinen, Terttu Harju, Mauri Marttunen, Pertti Saloheimo
Background: There is a power imbalance between authors and reviewers in single-blind peer review. We explored how switching from single-blind to double-blind peer review affected 1) the willingness of experts to review, 2) their publication recommendations, and 3) the quality of review reports.
Methods: The Finnish Medical Journal switched from single-blind to double-blind peer review in September 2017. The proportion of review invitations that resulted in a received review report was counted. The reviewers' recommendations of "accept as is", "minor revision", "major revision" or "reject" were explored. The content of the reviews was assessed by two experienced reviewers using the Review Quality Instrument, modified to apply to both original research and review manuscripts. The study material comprised reviews submitted from September 2017 to February 2018. The controls were the reviews submitted between September 2015 and February 2016 and between September 2016 and February 2017. The reviewers' recommendations and the scorings of quality assessments were tested with the chi-square test, and the means of quality assessments with the independent-samples t test.
Results: A total of 118 double-blind first-round reviews of 59 manuscripts were compared with 232 single-blind first-round reviews of 116 manuscripts. The proportion of successful review invitations was 67% when reviewing single-blinded and 66% when reviewing double-blinded. When reviewing double-blinded, the reviewers recommended "accept as is" or "minor revision" less often than during the control period (59% vs. 73%), and "major revision" or "reject" more often (41% vs. 27%, P = 0.010). For the quality assessment, 116 reviews from the double-blind period were compared with 104 reviews conducted between September 2016 and February 2017. On a 1-5 scale (1 poor, 5 excellent), double-blind reviews received a higher overall proportion of ratings of 4 and 5 than single-blind reviews (56% vs. 49%, P < 0.001). Means for the overall quality of double-blind reviews were 3.38 (IQR, 3.33-3.44) vs. 3.22 (3.17-3.28; P < 0.001) for single-blind reviews.
Conclusions: The quality of the reviews conducted double-blind was better than that of those conducted single-blind. Switching to double-blind review did not alter the reviewers' willingness to review. The reviewers became slightly more critical.
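To make the recommendation comparison concrete, here is a minimal sketch of the kind of chi-square test the Methods describe, run on counts reconstructed from the reported percentages (the reconstruction is approximate and is an illustration, not the authors' analysis code):

```python
from scipy.stats import chi2_contingency

# "Accept as is"/"minor revision" vs. "major revision"/"reject",
# reconstructed from the reported 59% of 118 double-blind reviews
# and 73% of 232 single-blind reviews (approximate)
lenient_db = round(0.59 * 118)   # ~70 double-blind reviews
lenient_sb = round(0.73 * 232)   # ~169 single-blind reviews
table = [[lenient_db, 118 - lenient_db],
         [lenient_sb, 232 - lenient_sb]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # close to the reported P = 0.010
```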
{"title":"Peer reviewers' willingness to review, their recommendations and quality of reviews after the Finnish Medical Journal switched from single-blind to double-blind peer review.","authors":"Piitu Parmanne, Joonas Laajava, Noora Järvinen, Terttu Harju, Mauri Marttunen, Pertti Saloheimo","doi":"10.1186/s41073-023-00140-6","DOIUrl":"10.1186/s41073-023-00140-6","url":null,"abstract":"<p><strong>Background: </strong>There is a power imbalance between authors and reviewers in single-blind peer review. We explored how switching from single-blind to double-blind peer review affected 1) the willingness of experts to review, 2) their publication recommendations, and 3) the quality of review reports.</p><p><strong>Methods: </strong>The Finnish Medical Journal switched from single-blind to double-blind peer review in September 2017. The proportion of review invitations that resulted in a received review report was counted. The reviewers' recommendations of \"accept as is\", \"minor revision\", \"major revision\" or \"reject\" were explored. The content of the reviews was assessed by two experienced reviewers using the Review Quality Instrument modified to apply to both original research and review manuscripts. The study material comprised reviews submitted from September 2017 to February 2018. The controls were the reviews submitted between September 2015 and February 2016 and between September 2016 and February 2017. The reviewers' recommendations and the scorings of quality assessments were tested with the Chi square test, and the means of quality assessments with the independent-samples t test.</p><p><strong>Results: </strong>A total of 118 double-blind first-round reviews of 59 manuscripts were compared with 232 single-blind first-round reviews of 116 manuscripts. The proportion of successful review invitations when reviewing single-blinded was 67%, and when reviewing double-blinded, 66%. When reviewing double-blinded, the reviewers recommended accept as is or minor revision less often than during the control period (59% vs. 73%), and major revision or rejection more often (41% vs 27%, P = 0.010). For the quality assessment, 116 reviews from the double-blind period were compared with 104 reviews conducted between September 2016 and February 2017. On a 1-5 scale (1 poor, 5 excellent), double-blind reviews received higher overall proportion of ratings of 4 and 5 than single-blind reviews (56% vs. 49%, P < 0.001). Means for the overall quality of double-blind reviews were 3.38 (IQR, 3.33-3.44) vs. 3.22 (3.17-3.28; P < 0.001) for single-blind reviews.</p><p><strong>Conclusions: </strong>The quality of the reviews conducted double-blind was better than of those conducted single-blind. Switching to double-blind review did not alter the reviewers' willingness to review. The reviewers became slightly more critical.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10598992/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50159492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bridges of perspectives: representation of people with lived experience of spinal cord injury in editorial boards and peer review.
Pub Date: 2023-09-21 | DOI: 10.1186/s41073-023-00138-0
Anna Nuechterlein, Tanya Barretto, Alaa Yehia, Judy Illes
Background: Diversity among editorial boards and in the peer review process maximizes the likelihood that the dissemination of reported results is both relevant and respectful to readers and end users. Past studies have examined diversity among editorial board members and reviewers for factors such as gender, geographic location, and race, but limited research has explored the representation of people with disabilities. Here, we sought to understand the landscape of inclusivity of people with lived experience of spinal cord injury specifically in journals publishing papers (2012-2022) on their quality of life.
Methods: A 12-question adaptive survey with open- and closed-ended questions was disseminated to 31 journal editors over a one-month period beginning in December 2022.
Results: We received 10 fully completed and 5 partially completed survey responses (response rate 48%). Notwithstanding the small sample, over 50% (8/15) of respondents indicated that their journal review practices involve people with lived experience of spinal cord injury, signaling positive even if incomplete inclusivity practices. The most notable reported barriers to achieving this goal related to identifying and recruiting people with lived experience to serve in the review and editorial process.
Conclusions: In this study we found positive but incomplete trends toward inclusivity in journal practices involving people with lived experience of spinal cord injury. We recommend, therefore, that explicit and genuine efforts are directed toward recruitment through community-based channels. To improve representation even further, we suggest that editors and reviewers be offered the opportunity to self-identify as living with a disability without discrimination or bias.
{"title":"Bridges of perspectives: representation of people with lived experience of spinal cord injury in editorial boards and peer review.","authors":"Anna Nuechterlein, Tanya Barretto, Alaa Yehia, Judy Illes","doi":"10.1186/s41073-023-00138-0","DOIUrl":"10.1186/s41073-023-00138-0","url":null,"abstract":"<p><strong>Background: </strong>Diversity among editorial boards and in the peer review process maximizes the likelihood that the dissemination of reported results is both relevant and respectful to readers and end users. Past studies have examined diversity among editorial board members and reviewers for factors such as gender, geographic location, and race, but limited research has explored the representation of people with disabilities. Here, we sought to understand the landscape of inclusivity of people with lived experience of spinal cord injury specifically in journals publishing papers (2012-2022) on their quality of life.</p><p><strong>Methods: </strong>An open and closed 12-question adaptive survey was disseminated to 31 journal editors over a one-month period beginning December 2022.</p><p><strong>Results: </strong>We received 10 fully completed and 5 partially completed survey responses (response rate 48%). Notwithstanding the small sample, over 50% (8/15) of respondents indicated that their journal review practices involve people with lived experience of spinal cord injury, signaling positive even if incomplete inclusivity practices. The most notable reported barriers to achieving this goal related to identifying and recruiting people with lived experience to serve in the review and editorial process.</p><p><strong>Conclusions: </strong>In this study we found positive but incomplete trends toward inclusivity in journal practices involving people with lived experience of spinal cord injury. We recommend, therefore, that explicit and genuine efforts are directed toward recruitment through community-based channels. To improve representation even further, we suggest that editors and reviewers be offered the opportunity to self-identify as living with a disability without discrimination or bias.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10512589/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41159668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Authorship and citation patterns of highly cited biomedical researchers: a cross-sectional study.
Pub Date: 2023-09-05 | DOI: 10.1186/s41073-023-00137-1
Thomas Perneger
Background: Scientific productivity is often evaluated by means of cumulative citation metrics. Different metrics produce different incentives. The H-index assigns full credit from a citation to each coauthor, and thus may encourage multiple collaborations in mid-list author roles. In contrast, the Hm-index assigns only a fraction 1/k of citation credit to each of k coauthors of an article, and thus may encourage research done by smaller teams, and in first or last author roles. Whether H and Hm indices are influenced by different authorship patterns has not been examined.
Methods: Using a publicly available Scopus database, I examined associations between the numbers of research articles published as single, first, mid-list, or last author between 1990 and 2019, and the H-index and the Hm-index, among 18,231 leading researchers in the health sciences.
Results: Adjusting for career duration and other article types, the H-index was negatively associated with the number of single author articles (partial Pearson r = -0.06) and first author articles (r = -0.08), but positively associated with the number of mid-list (r = 0.64) and last author articles (r = 0.21). In contrast, all associations were positive for the Hm-index (r = 0.04 for single author articles, 0.18 for first author articles, 0.24 for mid-list articles, and 0.46 for last author articles).
Conclusion: The H-index and the Hm-index do not reflect the same authorship patterns: the full-credit H-index is predominantly associated with mid-list authorship, whereas the partial-credit Hm-index is driven by more balanced publication patterns, and is most strongly associated with last-author articles. Since performance metrics may act as incentives, the selection of a citation metric should receive careful consideration.
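A small worked example makes the full-credit versus fractional-credit distinction concrete. The sketch below follows the standard definitions summarized in the Background (Hirsch's H-index; the Hm-index with 1/k credit per k-author paper); it is an illustration of the two metrics, not the study's analysis code:

```python
def h_index(citations):
    """H: the largest h such that h papers each have at least h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
    return h

def hm_index(papers):
    """Hm: fractional-credit variant in which each k-author paper
    advances the effective rank by only 1/k."""
    hm = r_eff = 0.0
    for citations, n_authors in sorted(papers, reverse=True):
        r_eff += 1.0 / n_authors
        if citations >= r_eff:
            hm = r_eff
    return hm

# Ten mid-list papers (20 citations, 30 coauthors each) yield H = 10
# but Hm of only ~0.33: full credit rewards mid-list authorship far
# more than fractional credit does.
papers = [(20, 30)] * 10
print(h_index([c for c, _ in papers]))  # 10
print(hm_index(papers))                 # 0.333...
```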
{"title":"Authorship and citation patterns of highly cited biomedical researchers: a cross-sectional study.","authors":"Thomas Perneger","doi":"10.1186/s41073-023-00137-1","DOIUrl":"10.1186/s41073-023-00137-1","url":null,"abstract":"<p><strong>Background: </strong>Scientific productivity is often evaluated by means of cumulative citation metrics. Different metrics produce different incentives. The H-index assigns full credit from a citation to each coauthor, and thus may encourage multiple collaborations in mid-list author roles. In contrast, the Hm-index assigns only a fraction 1/k of citation credit to each of k coauthors of an article, and thus may encourage research done by smaller teams, and in first or last author roles. Whether H and Hm indices are influenced by different authorship patterns has not been examined.</p><p><strong>Methods: </strong>Using a publicly available Scopus database, I examined associations between the numbers of research articles published as single, first, mid-list, or last author between 1990 and 2019, and the H-index and the Hm-index, among 18,231 leading researchers in the health sciences.</p><p><strong>Results: </strong>Adjusting for career duration and other article types, the H-index was negatively associated with the number of single author articles (partial Pearson r -0.06) and first author articles (-0.08), but positively associated with the number of mid-list (0.64) and last author articles (0.21). In contrast, all associations were positive for the Hm-index (0.04 for single author articles, 0.18 for first author articles, 0.24 for mid-list articles, and 0.46 for last author articles).</p><p><strong>Conclusion: </strong>The H-index and the Hm-index do not reflect the same authorship patterns: the full-credit H-index is predominantly associated with mid-list authorship, whereas the partial-credit Hm-index is driven by more balanced publication patterns, and is most strongly associated with last-author articles. Since performance metrics may act as incentives, the selection of a citation metric should receive careful consideration.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10478343/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10159698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reporting quality of abstracts and inconsistencies with full text articles in pediatric orthopedic publications.
Pub Date: 2023-08-23 | DOI: 10.1186/s41073-023-00135-3
Sherif Ahmed Kamel, Tamer A El-Sobky
Background: Abstracts should provide a brief yet comprehensive reporting of all components of a manuscript. Inaccurate reporting may mislead readers and impact citation practices. It was our goal to investigate the reporting quality of abstracts of interventional observational studies in three major pediatric orthopedic journals and to analyze any reporting inconsistencies between those abstracts and their corresponding full-text articles.
Methods: We selected a sample of 55 abstracts and their full-text articles published between 2018 and 2022. Included articles were primary therapeutic research investigating the results of treatments or interventions. Abstracts were scrutinized for reporting quality and for inconsistencies with their full-text versions using a 22-item checklist. The reporting quality of titles was assessed with a 3-item categorical scale.
Results: In 48 (87%) of the articles, there were abstract reporting inaccuracies related to patient demographics. Study follow-up and complications were each unreported in 21 (38%) of abstracts. The most common inconsistencies between the abstracts and full-text articles related to the reporting of inclusion or exclusion criteria in 39 (71%) and study correlations in 27 (49%) of articles. Reporting quality of the titles was insufficient in 33 (60%) of articles.
Conclusions: In our study we found low reporting quality of abstracts and noticeable inconsistencies with full-text articles, especially regarding inclusion or exclusion criteria and study correlations. While the current sample is likely not representative of the overall pediatric orthopedic literature, we recommend that authors, reviewers, and editors ensure abstracts are reported accurately, ideally following the appropriate reporting guidelines, and that they double-check that there are no inconsistencies between abstracts and full-text articles. To capture essential study information, journals should also consider increasing abstract word limits.
{"title":"Reporting quality of abstracts and inconsistencies with full text articles in pediatric orthopedic publications.","authors":"Sherif Ahmed Kamel, Tamer A El-Sobky","doi":"10.1186/s41073-023-00135-3","DOIUrl":"10.1186/s41073-023-00135-3","url":null,"abstract":"<p><strong>Background: </strong>Abstracts should provide a brief yet comprehensive reporting of all components of a manuscript. Inaccurate reporting may mislead readers and impact citation practices. It was our goal to investigate the reporting quality of abstracts of interventional observational studies in three major pediatric orthopedic journals and to analyze any reporting inconsistencies between those abstracts and their corresponding full-text articles.</p><p><strong>Methods: </strong>We selected a sample of 55 abstracts and their full-text articles published between 2018 and 2022. Included articles were primary therapeutic research investigating the results of treatments or interventions. Abstracts were scrutinized for reporting quality and inconsistencies with their full-text versions with a 22-itemized checklist. The reporting quality of titles was assessed by a 3-items categorical scale.</p><p><strong>Results: </strong>In 48 (87%) of articles there were abstract reporting inaccuracies related to patient demographics. The study's follow-up and complications were not reported in 21 (38%) of abstracts each. Most common inconsistencies between the abstracts and full-text articles were related to reporting of inclusion or exclusion criteria in 39 (71%) and study correlations in 27 (49%) of articles. Reporting quality of the titles was insufficient in 33 (60%) of articles.</p><p><strong>Conclusions: </strong>In our study we found low reporting quality of abstracts and noticeable inconsistencies with full-text articles, especially regarding inclusion or exclusion criteria and study correlations. While the current sample is likely not representative of overall pediatric orthopedic literature, we recommend that authors, reviewers, and editors ensure abstracts are reported accurately, ideally following the appropriate reporting guidelines, and that they double check that there are no inconsistencies between abstracts and full text articles. To capture essential study information, journals should also consider increasing abstract word limits.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10463470/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10121003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Raising concerns on questionable ethics approvals - a case study of 456 trials from the Institut Hospitalo-Universitaire Méditerranée Infection.
Pub Date: 2023-08-03 | DOI: 10.1186/s41073-023-00134-4
Fabrice Frank, Nans Florens, Gideon Meyerowitz-Katz, Jérôme Barriere, Éric Billy, Véronique Saada, Alexander Samuel, Jacques Robert, Lonni Besançon
Background: The practice of clinical research is strictly regulated by law. During submission and review, the compliance of such research with the laws of the country where it was conducted is not always correctly declared by the authors or verified by the editors. Here, we report the case of a single institution for which one may find hundreds of publications with seemingly relevant ethical concerns, along with 10 months of follow-up through contacts with the editors of these articles. We thus argue for stricter control of ethical authorization by scientific editors, and we call on publishers to cooperate to this end.
Methods: We present an investigation of the ethical and legal aspects of 456 studies published by the IHU-MI (Institut Hospitalo-Universitaire Méditerranée Infection) in Marseille, France.
Results: We identified a wide range of issues with the stated research authorization and ethics of the published studies with respect to the Institutional Review Board and the approval presented. Among the studies investigated, 248 were conducted under the same ethics approval number, even though the subjects, samples, and countries of investigation differed. Thirty-nine (39) contained no reference to an ethics approval number at all, despite presenting research on human beings. We contacted the journals that published these articles and report their responses to our concerns. It should be noted that, since our investigation and reporting to journals, PLOS has issued expressions of concern for several publications we analyze here.
Conclusion: This case presents an investigation of the veracity of ethics approvals, and more than 10 months of follow-up by independent researchers. We call for stricter control and cooperation in the handling of such cases, including an editorial requirement to upload ethics approval documents, guidelines from COPE to address such ethical concerns, and transparent editorial policies and timelines for answering such concerns. All supplementary materials are available.
{"title":"Raising concerns on questionable ethics approvals - a case study of 456 trials from the Institut Hospitalo-Universitaire Méditerranée Infection.","authors":"Fabrice Frank, Nans Florens, Gideon Meyerowitz-Katz, Jérôme Barriere, Éric Billy, Véronique Saada, Alexander Samuel, Jacques Robert, Lonni Besançon","doi":"10.1186/s41073-023-00134-4","DOIUrl":"https://doi.org/10.1186/s41073-023-00134-4","url":null,"abstract":"<p><strong>Background: </strong>The practice of clinical research is strictly regulated by law. During submission and review processes, compliance of such research with the laws enforced in the country where it was conducted is not always correctly filled in by the authors or verified by the editors. Here, we report a case of a single institution for which one may find hundreds of publications with seemingly relevant ethical concerns, along with 10 months of follow-up through contacts with the editors of these articles. We thus argue for a stricter control of ethical authorization by scientific editors and we call on publishers to cooperate to this end.</p><p><strong>Methods: </strong>We present an investigation of the ethics and legal aspects of 456 studies published by the IHU-MI (Institut Hospitalo-Universitaire Méditerranée Infection) in Marseille, France.</p><p><strong>Results: </strong>We identified a wide range of issues with the stated research authorization and ethics of the published studies with respect to the Institutional Review Board and the approval presented. Among the studies investigated, 248 were conducted with the same ethics approval number, even though the subjects, samples, and countries of investigation were different. Thirty-nine (39) did not even contain a reference to the ethics approval number while they present research on human beings. We thus contacted the journals that published these articles and provide their responses to our concerns. It should be noted that, since our investigation and reporting to journals, PLOS has issued expressions of concerns for several publications we analyze here.</p><p><strong>Conclusion: </strong>This case presents an investigation of the veracity of ethical approval, and more than 10 months of follow-up by independent researchers. We call for stricter control and cooperation in handling of these cases, including editorial requirement to upload ethical approval documents, guidelines from COPE to address such ethical concerns, and transparent editorial policies and timelines to answer such concerns. All supplementary materials are available.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10398994/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9938883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new approach to grant review assessments: score, then rank.
Pub Date: 2023-07-24 | DOI: 10.1186/s41073-023-00131-7
Stephen A Gallo, Michael Pearce, Carole J Lee, Elena A Erosheva
Background: In many grant review settings, proposals are selected for funding on the basis of summary statistics of review ratings. Challenges of this approach (including the presence of ties and unclear ordering of funding preference for proposals) could be mitigated if rankings such as top-k preferences or paired comparisons, which are local evaluations that enforce ordering across proposals, were also collected and incorporated in the analysis of review ratings. However, analyzing ratings and rankings simultaneously has not been done until recently. This paper describes a practical method for integrating rankings and scores and demonstrates its usefulness for making funding decisions in real-world applications.
Methods: We first present the application of our existing joint model for rankings and ratings, the Mallows-Binomial, in obtaining an integrated score for each proposal and generating the induced preference ordering. We then apply this methodology to several theoretical "toy" examples of rating and ranking data, designed to demonstrate specific properties of the model. We then describe an innovative protocol for collecting rankings of the top-six proposals as an add-on to the typical peer review scoring procedures and provide a case study using actual peer review data to exemplify the output and how the model can appropriately resolve judges' evaluations.
Results: For the theoretical examples, we show how the model can provide a preference order for equally rated proposals by incorporating rankings; for proposals with ratings and only partial rankings (and how the results differ from a ratings-only approach); and for proposals where judges provide internally inconsistent ratings/rankings or outlier scores. Finally, we discuss how, using real-world panel data, this method can provide accurate information about funding priority in a format well suited to research funding decisions.
Conclusions: A methodology is provided to collect and employ both rating and ranking data in peer review assessments of proposal submission quality, highlighting several advantages over methods relying on ratings alone. This method leverages information to most accurately distill reviewer opinion into a useful output to make an informed funding decision and is general enough to be applied to settings such as in the NIH panel review process.
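As a toy illustration of the central idea (rankings resolving ties that ratings alone cannot), the sketch below orders proposals by mean rating and breaks ties with the judges' mean rank. This lexicographic rule is a deliberately simplified stand-in with hypothetical data; the paper's Mallows-Binomial model instead estimates an integrated score jointly from both data types:

```python
from statistics import mean

# Hypothetical panel data: lower ratings are better (as in NIH scoring),
# and each judge also ranks the proposals from most to least preferred.
ratings = {
    "judge1": {"A": 2, "B": 2, "C": 3},
    "judge2": {"A": 2, "B": 2, "C": 2},
}
rankings = {
    "judge1": ["B", "A", "C"],
    "judge2": ["B", "C", "A"],
}
proposals = ["A", "B", "C"]

mean_rating = {p: mean(r[p] for r in ratings.values()) for p in proposals}

def mean_rank(p):
    # Average position across judges' rankings (1 = most preferred)
    return mean(rk.index(p) + 1 for rk in rankings.values())

# Mean rating is the primary key; rankings break the A-vs-B tie.
order = sorted(proposals, key=lambda p: (mean_rating[p], mean_rank(p)))
print(order)  # ['B', 'A', 'C'] -- B wins despite a mean rating equal to A's
```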
{"title":"A new approach to grant review assessments: score, then rank.","authors":"Stephen A Gallo, Michael Pearce, Carole J Lee, Elena A Erosheva","doi":"10.1186/s41073-023-00131-7","DOIUrl":"https://doi.org/10.1186/s41073-023-00131-7","url":null,"abstract":"<p><strong>Background: </strong>In many grant review settings, proposals are selected for funding on the basis of summary statistics of review ratings. Challenges of this approach (including the presence of ties and unclear ordering of funding preference for proposals) could be mitigated if rankings such as top-k preferences or paired comparisons, which are local evaluations that enforce ordering across proposals, were also collected and incorporated in the analysis of review ratings. However, analyzing ratings and rankings simultaneously has not been done until recently. This paper describes a practical method for integrating rankings and scores and demonstrates its usefulness for making funding decisions in real-world applications.</p><p><strong>Methods: </strong>We first present the application of our existing joint model for rankings and ratings, the Mallows-Binomial, in obtaining an integrated score for each proposal and generating the induced preference ordering. We then apply this methodology to several theoretical \"toy\" examples of rating and ranking data, designed to demonstrate specific properties of the model. We then describe an innovative protocol for collecting rankings of the top-six proposals as an add-on to the typical peer review scoring procedures and provide a case study using actual peer review data to exemplify the output and how the model can appropriately resolve judges' evaluations.</p><p><strong>Results: </strong>For the theoretical examples, we show how the model can provide a preference order to equally rated proposals by incorporating rankings, to proposals using ratings and only partial rankings (and how they differ from a ratings-only approach) and to proposals where judges provide internally inconsistent ratings/rankings and outlier scoring. Finally, we discuss how, using real world panel data, this method can provide information about funding priority with a level of accuracy in a well-suited format for research funding decisions.</p><p><strong>Conclusions: </strong>A methodology is provided to collect and employ both rating and ranking data in peer review assessments of proposal submission quality, highlighting several advantages over methods relying on ratings alone. This method leverages information to most accurately distill reviewer opinion into a useful output to make an informed funding decision and is general enough to be applied to settings such as in the NIH panel review process.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10367367/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9865500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Institutional capacity to prevent and manage research misconduct: perspectives from Kenyan research regulators.
Pub Date: 2023-07-12 | DOI: 10.1186/s41073-023-00132-6
Edwin Were, Jepchirchir Kiplagat, Eunice Kaguiri, Rose Ayikukwei, Violet Naanyu
Background: Research misconduct, i.e., fabrication, falsification, and plagiarism, is associated with individual, institutional, national, and global factors. Researchers' perceptions of weak or non-existent institutional guidelines on the prevention and management of research misconduct can encourage these practices. Few countries in Africa have clear guidance on research misconduct. In Kenya, the capacity to prevent or manage research misconduct in academic and research institutions has not been documented. The objective of this study was to explore the perceptions of Kenyan research regulators on the occurrence of research misconduct and the institutional capacity to prevent or manage it.
Methods: Interviews with open-ended questions were conducted with 27 research regulators (chairs and secretaries of ethics committees, research directors of academic and research institutions, and national regulatory bodies). Among other questions, participants were asked: (1) How common is research misconduct in your view? (2) Does your institution have the capacity to prevent research misconduct? (3) Does your institution have the capacity to manage research misconduct? Their responses were audiotaped, transcribed, and coded using NVivo software. Deductive coding covered predefined themes including perceptions of the occurrence, prevention, detection, investigation, and management of research misconduct. Results are presented with illustrative quotes.
Results: Respondents perceived research misconduct to be very common among students developing thesis reports. Their responses suggested there was no dedicated capacity to prevent or manage research misconduct at the institutional and national levels. There were no specific national guidelines on research misconduct. At the institutional level, the only capacity or efforts mentioned were directed at reducing, detecting, and managing student plagiarism. There was no direct mention of the capacity to manage fabrication, falsification, or misconduct by faculty researchers. We recommend the development of a Kenyan code of conduct or research integrity guidelines that would cover misconduct.
{"title":"Institutional capacity to prevent and manage research misconduct: perspectives from Kenyan research regulators.","authors":"Edwin Were, Jepchirchir Kiplagat, Eunice Kaguiri, Rose Ayikukwei, Violet Naanyu","doi":"10.1186/s41073-023-00132-6","DOIUrl":"https://doi.org/10.1186/s41073-023-00132-6","url":null,"abstract":"<p><strong>Background: </strong>Research misconduct i.e. fabrication, falsification, and plagiarism is associated with individual, institutional, national, and global factors. Researchers' perceptions of weak or non-existent institutional guidelines on the prevention and management of research misconduct can encourage these practices. Few countries in Africa have clear guidance on research misconduct. In Kenya, the capacity to prevent or manage research misconduct in academic and research institutions has not been documented. The objective of this study was to explore the perceptions of Kenyan research regulators on the occurrence of and institutional capacity to prevent or manage research misconduct.</p><p><strong>Methods: </strong>Interviews with open-ended questions were conducted with 27 research regulators (chairs and secretaries of ethics committees, research directors of academic and research institutions, and national regulatory bodies). Among other questions, participants were asked: (1) How common is research misconduct in your view? (2) Does your institution have the capacity to prevent research misconduct? (3) Does your institution have the capacity to manage research misconduct? Their responses were audiotaped, transcribed, and coded using NVivo software. Deductive coding covered predefined themes including perceptions on occurrence, prevention detection, investigation, and management of research misconduct. Results are presented with illustrative quotes.</p><p><strong>Results: </strong>Respondents perceived research misconduct to be very common among students developing thesis reports. Their responses suggested there was no dedicated capacity to prevent or manage research misconduct at the institutional and national levels. There were no specific national guidelines on research misconduct. At the institutional level, the only capacity/efforts mentioned were directed at reducing, detecting, and managing student plagiarism. There was no direct mention of the capacity to manage fabrication and falsification or misconduct by faculty researchers. We recommend the development of Kenya code of conduct or research integrity guidelines that would cover misconduct.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10337100/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10190722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}