Pub Date: 2021-09-30 | DOI: 10.1186/s41073-021-00115-5
Individual versus general structured feedback to improve agreement in grant peer review: a randomized controlled trial (Research integrity and peer review 6:12)
Jan-Ole Hesselberg, Knut Inge Fostervold, Pål Ulleberg, Ida Svege
Background: Vast sums are distributed based on grant peer review, but studies show that interrater reliability is often low. In this study, we tested the effect on reviewer agreement of receiving two short individual feedback reports compared to one short general feedback report.
Methods: A total of 42 reviewers at the Norwegian Foundation Dam were randomly assigned to receive either a general feedback report or an individual feedback report. The general feedback group received one report before the start of the reviews, containing general information about the previous call in which the reviewers participated. The individual feedback group received two reports, one before the review period (based on the previous call) and one during the period (based on the current call); both presented detailed information on each reviewer's scoring compared with the review committee as a whole. The main outcomes were the proportion of agreement in the eligibility assessment and the average difference in scores between pairs of reviewers assessing the same proposal. The outcomes were measured in 2017 (at baseline) and in 2018 (after the feedback was provided).
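As a rough illustration of these two outcome measures (not the authors' code), the Python sketch below computes the proportion of absolute agreement on eligibility and the mean absolute score difference across reviewer pairs; the paired data are invented placeholders:

    # Each tuple: (eligible_A, eligible_B, score_A, score_B) for one proposal
    # assessed by a pair of reviewers. Values are invented for illustration.
    paired_reviews = [
        (True, True, 4.0, 5.5),
        (True, False, 3.0, 3.5),
        (False, False, 2.0, 2.5),
    ]

    # Outcome 1: proportion of pairs agreeing on eligibility.
    agreement = sum(a == b for a, b, _, _ in paired_reviews) / len(paired_reviews)

    # Outcome 2: average absolute difference in proposal scores within pairs.
    avg_diff = sum(abs(sa - sb) for _, _, sa, sb in paired_reviews) / len(paired_reviews)

    print(f"eligibility agreement: {agreement:.2f}, mean score difference: {avg_diff:.2f}")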
Results: A total of 2398 paired reviews were included in the analysis. There was a significant difference between the two groups in the proportion of absolute agreement on whether the proposal was eligible for the funding programme, with the general feedback group demonstrating a higher rate of agreement. There was no difference between the two groups in terms of the average score difference. However, the agreement regarding the proposal score remained critically low for both groups.
Conclusions: We did not observe changes in proposal score agreement between 2017 and 2018 in reviewers receiving different feedback. The low levels of agreement remain a major concern in grant peer review, and research to identify contributing factors as well as the development and testing of interventions to increase agreement rates are still needed.
Trial registration: The study was preregistered at OSF.io/n4fq3.
{"title":"Individual versus general structured feedback to improve agreement in grant peer review: a randomized controlled trial.","authors":"Jan-Ole Hesselberg, Knut Inge Fostervold, Pål Ulleberg, Ida Svege","doi":"10.1186/s41073-021-00115-5","DOIUrl":"10.1186/s41073-021-00115-5","url":null,"abstract":"<p><strong>Background: </strong>Vast sums are distributed based on grant peer review, but studies show that interrater reliability is often low. In this study, we tested the effect of receiving two short individual feedback reports compared to one short general feedback report on the agreement between reviewers.</p><p><strong>Methods: </strong>A total of 42 reviewers at the Norwegian Foundation Dam were randomly assigned to receive either a general feedback report or an individual feedback report. The general feedback group received one report before the start of the reviews that contained general information about the previous call in which the reviewers participated. In the individual feedback group, the reviewers received two reports, one before the review period (based on the previous call) and one during the period (based on the current call). In the individual feedback group, the reviewers were presented with detailed information on their scoring compared with the review committee as a whole, both before and during the review period. The main outcomes were the proportion of agreement in the eligibility assessment and the average difference in scores between pairs of reviewers assessing the same proposal. The outcomes were measured in 2017 and after the feedback was provided in 2018.</p><p><strong>Results: </strong>A total of 2398 paired reviews were included in the analysis. There was a significant difference between the two groups in the proportion of absolute agreement on whether the proposal was eligible for the funding programme, with the general feedback group demonstrating a higher rate of agreement. There was no difference between the two groups in terms of the average score difference. However, the agreement regarding the proposal score remained critically low for both groups.</p><p><strong>Conclusions: </strong>We did not observe changes in proposal score agreement between 2017 and 2018 in reviewers receiving different feedback. The low levels of agreement remain a major concern in grant peer review, and research to identify contributing factors as well as the development and testing of interventions to increase agreement rates are still needed.</p><p><strong>Trial registration: </strong>The study was preregistered at OSF.io/n4fq3 .</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"12"},"PeriodicalIF":0.0,"publicationDate":"2021-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8485516/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39474032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-02 | DOI: 10.1186/s41073-021-00113-7
Strengthening the incentives for responsible research practices in Australian health and medical research funding (Research integrity and peer review 6:11)
Joanna Diong, Cynthia M Kroeger, Katherine J Reynolds, Adrian Barnett, Lisa A Bero
Background: Australian health and medical research funders support substantial research efforts, and incentives within grant funding schemes influence researcher behaviour. We aimed to determine to what extent Australian health and medical funders incentivise responsible research practices.
Methods: We conducted an audit of instructions from research grant and fellowship schemes. Eight national research grants and fellowships were purposively sampled to select schemes that awarded the largest amount of funds. The funding scheme instructions were assessed against 9 criteria to determine to what extent they incentivised these responsible research and reporting practices: (1) publicly register study protocols before starting data collection, (2) register analysis protocols before starting data analysis, (3) make study data openly available, (4) make analysis code openly available, (5) make research materials openly available, (6) discourage use of publication metrics, (7) conduct quality research (e.g. adhere to reporting guidelines), (8) collaborate with a statistician, and (9) adhere to other responsible research practices. Each criterion was answered using one of the following responses: "Instructed", "Encouraged", or "No mention".
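To make the rating scheme concrete, here is a minimal sketch (invented scheme names and ratings, not the audited data) that tallies how many of the 9 criteria each scheme addresses and takes the median across schemes, mirroring the summary reported below:

    from statistics import median

    # One rating per criterion, in the paper's three-level scale.
    ratings = {
        "Scheme A": ["Instructed", "No mention", "Encouraged", "No mention",
                     "No mention", "Instructed", "Instructed", "No mention", "Encouraged"],
        "Scheme B": ["Encouraged", "No mention", "Encouraged", "No mention",
                     "No mention", "No mention", "Instructed", "No mention", "No mention"],
    }

    # A criterion counts as addressed if it is instructed or encouraged.
    addressed = {s: sum(r != "No mention" for r in rs) for s, rs in ratings.items()}
    print(addressed, "median:", median(addressed.values()))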
Results: Across the 8 schemes from 5 funders, applicants were instructed or encouraged to address a median of 4 (range 0 to 5) of the 9 criteria. Three criteria received no mention in any scheme (register analysis protocols, make analysis code open, collaborate with a statistician). Importantly, most incentives did not seem strong, as applicants were instructed only to register study protocols, discourage use of publication metrics, and conduct quality research. Other criteria were encouraged but not required.
Conclusions: Funders could strengthen the incentives for responsible research practices by requiring grant and fellowship applicants to implement these practices in their proposals. Administering institutions could be required to implement these practices to be eligible for funding. Strongly rewarding researchers for implementing robust research practices could lead to sustained improvements in the quality of health and medical research.
{"title":"Strengthening the incentives for responsible research practices in Australian health and medical research funding.","authors":"Joanna Diong, Cynthia M Kroeger, Katherine J Reynolds, Adrian Barnett, Lisa A Bero","doi":"10.1186/s41073-021-00113-7","DOIUrl":"10.1186/s41073-021-00113-7","url":null,"abstract":"<p><strong>Background: </strong>Australian health and medical research funders support substantial research efforts, and incentives within grant funding schemes influence researcher behaviour. We aimed to determine to what extent Australian health and medical funders incentivise responsible research practices.</p><p><strong>Methods: </strong>We conducted an audit of instructions from research grant and fellowship schemes. Eight national research grants and fellowships were purposively sampled to select schemes that awarded the largest amount of funds. The funding scheme instructions were assessed against 9 criteria to determine to what extent they incentivised these responsible research and reporting practices: (1) publicly register study protocols before starting data collection, (2) register analysis protocols before starting data analysis, (3) make study data openly available, (4) make analysis code openly available, (5) make research materials openly available, (6) discourage use of publication metrics, (7) conduct quality research (e.g. adhere to reporting guidelines), (8) collaborate with a statistician, and (9) adhere to other responsible research practices. Each criterion was answered using one of the following responses: \"Instructed\", \"Encouraged\", or \"No mention\".</p><p><strong>Results: </strong>Across the 8 schemes from 5 funders, applicants were instructed or encouraged to address a median of 4 (range 0 to 5) of the 9 criteria. Three criteria received no mention in any scheme (register analysis protocols, make analysis code open, collaborate with a statistician). Importantly, most incentives did not seem strong as applicants were only instructed to register study protocols, discourage use of publication metrics and conduct quality research. Other criteria were encouraged but were not required.</p><p><strong>Conclusions: </strong>Funders could strengthen the incentives for responsible research practices by requiring grant and fellowship applicants to implement these practices in their proposals. Administering institutions could be required to implement these practices to be eligible for funding. Strongly rewarding researchers for implementing robust research practices could lead to sustained improvements in the quality of health and medical research.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"11"},"PeriodicalIF":0.0,"publicationDate":"2021-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8328133/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39277405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-06-11 | DOI: 10.1186/s41073-021-00114-6
Correction to: Cross-sectional study of medical advertisements in a national general medical journal: evidence, cost, and safe use of advertised versus comparative drugs (Research integrity and peer review 6:10)
Kim Boesen, Anders Lykkemark Simonsen, Karsten Juhl Jørgensen, Peter C Gøtzsche
{"title":"Correction to: Cross-sectional study of medical advertisements in a national general medical journal: evidence, cost, and safe use of advertised versus comparative drugs.","authors":"Kim Boesen, Anders Lykkemark Simonsen, Karsten Juhl Jørgensen, Peter C Gøtzsche","doi":"10.1186/s41073-021-00114-6","DOIUrl":"https://doi.org/10.1186/s41073-021-00114-6","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"10"},"PeriodicalIF":0.0,"publicationDate":"2021-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-021-00114-6","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39086140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-06-02 | DOI: 10.1186/s41073-021-00112-8
Evaluating implementation of the Transparency and Openness Promotion (TOP) guidelines: the TRUST process for rating journal policies, procedures, and practices (Research integrity and peer review 6:9)
Evan Mayo-Wilson, Sean Grant, Lauren Supplee, Sina Kianersi, Afsah Amin, Alex DeHaven, David Mellor
Background: The Transparency and Openness Promotion (TOP) Guidelines describe modular standards that journals can adopt to promote open science. The TOP Factor is a metric to describe the extent to which journals have adopted the TOP Guidelines in their policies. Systematic methods and rating instruments are needed to calculate the TOP Factor. Moreover, implementation of these open science policies depends on journal procedures and practices, for which TOP provides no standards or rating instruments.
Methods: We describe a process for assessing journal policies, procedures, and practices according to the TOP Guidelines. We developed this process as part of the Transparency of Research Underpinning Social Intervention Tiers (TRUST) Initiative to advance open science in the social intervention research ecosystem. We also provide new instruments for rating journal instructions to authors (policies), manuscript submission systems (procedures), and published articles (practices) according to standards in the TOP Guidelines. In addition, we describe how to determine the TOP Factor score for a journal, calculate reliability of journal ratings, and assess coherence among a journal's policies, procedures, and practices. As a demonstration of this process, we describe a protocol for studying approximately 345 influential journals that have published research used to inform evidence-based policy.
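As a hedged sketch of the TOP Factor arithmetic in its commonly described form (each TOP standard is rated on a 0-3 implementation level and the levels are summed), the block below uses an abbreviated standard list and invented ratings; consult the TOP documentation for the exact standards:

    # Implementation level per TOP standard: 0 = not implemented ... 3 = strongest.
    top_levels = {
        "data citation": 1,
        "data transparency": 2,
        "analysis code transparency": 2,
        "materials transparency": 1,
        "design & analysis reporting": 1,
        "study preregistration": 0,
        "analysis plan preregistration": 0,
        "replication": 1,
    }

    # TOP Factor: sum of levels across standards (higher = stronger policies).
    top_factor = sum(top_levels.values())
    print("TOP Factor:", top_factor)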
Discussion: The TRUST Process includes systematic methods and rating instruments for assessing and facilitating implementation of the TOP Guidelines by journals across disciplines. Our study of journals publishing influential social intervention research will provide a comprehensive account of whether these journals have policies, procedures, and practices that are consistent with standards for open science and thereby facilitate the publication of trustworthy findings to inform evidence-based policy. Through this demonstration, we expect to identify ways to refine the TOP Guidelines and the TOP Factor. Refinements could include: improving templates for adoption in journal instructions to authors, manuscript submission systems, and published articles; revising explanatory guidance intended to enhance the use, understanding, and dissemination of the TOP Guidelines; and clarifying the distinctions among different levels of implementation. Research materials are available on the Open Science Framework: https://osf.io/txyr3/ .
{"title":"Evaluating implementation of the Transparency and Openness Promotion (TOP) guidelines: the TRUST process for rating journal policies, procedures, and practices.","authors":"Evan Mayo-Wilson, Sean Grant, Lauren Supplee, Sina Kianersi, Afsah Amin, Alex DeHaven, David Mellor","doi":"10.1186/s41073-021-00112-8","DOIUrl":"10.1186/s41073-021-00112-8","url":null,"abstract":"<p><strong>Background: </strong>The Transparency and Openness Promotion (TOP) Guidelines describe modular standards that journals can adopt to promote open science. The TOP Factor is a metric to describe the extent to which journals have adopted the TOP Guidelines in their policies. Systematic methods and rating instruments are needed to calculate the TOP Factor. Moreover, implementation of these open science policies depends on journal procedures and practices, for which TOP provides no standards or rating instruments.</p><p><strong>Methods: </strong>We describe a process for assessing journal policies, procedures, and practices according to the TOP Guidelines. We developed this process as part of the Transparency of Research Underpinning Social Intervention Tiers (TRUST) Initiative to advance open science in the social intervention research ecosystem. We also provide new instruments for rating journal instructions to authors (policies), manuscript submission systems (procedures), and published articles (practices) according to standards in the TOP Guidelines. In addition, we describe how to determine the TOP Factor score for a journal, calculate reliability of journal ratings, and assess coherence among a journal's policies, procedures, and practices. As a demonstration of this process, we describe a protocol for studying approximately 345 influential journals that have published research used to inform evidence-based policy.</p><p><strong>Discussion: </strong>The TRUST Process includes systematic methods and rating instruments for assessing and facilitating implementation of the TOP Guidelines by journals across disciplines. Our study of journals publishing influential social intervention research will provide a comprehensive account of whether these journals have policies, procedures, and practices that are consistent with standards for open science and thereby facilitate the publication of trustworthy findings to inform evidence-based policy. Through this demonstration, we expect to identify ways to refine the TOP Guidelines and the TOP Factor. Refinements could include: improving templates for adoption in journal instructions to authors, manuscript submission systems, and published articles; revising explanatory guidance intended to enhance the use, understanding, and dissemination of the TOP Guidelines; and clarifying the distinctions among different levels of implementation. Research materials are available on the Open Science Framework: https://osf.io/txyr3/ .</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"9"},"PeriodicalIF":0.0,"publicationDate":"2021-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8173977/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39055385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-05-10 | DOI: 10.1186/s41073-021-00111-9
Cross-sectional study of medical advertisements in a national general medical journal: evidence, cost, and safe use of advertised versus comparative drugs (Research integrity and peer review 6:8)
Kim Boesen, Anders Lykkemark Simonsen, Karsten Juhl Jørgensen, Peter C Gøtzsche
Background: Healthcare professionals are exposed to advertisements for prescription drugs in medical journals. Such advertisements may increase prescriptions of new drugs at the expense of older treatments, even when the new drugs have no added benefits, are more harmful, and are more expensive. The publication of medical advertisements therefore raises ethical questions related to editorial integrity.
Methods: We conducted a descriptive cross-sectional study of all medical advertisements published in the Journal of the Danish Medical Association in 2015. Drugs advertised 6 times or more were compared with older comparator drugs on four parameters: (1) comparative evidence of added benefit; (2) cost per Defined Daily Dose; (3) regulatory safety announcements; and (4) completed and ongoing post-marketing studies 3 years after advertising.
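For comparison (2), the cost analysis reduces to simple ratios; the sketch below (with invented prices, not the study's data) computes how many times more expensive each advertised drug is per Defined Daily Dose and summarizes with the median, as in the results that follow:

    from statistics import median

    # (advertised_cost_per_DDD, comparator_cost_per_DDD); invented numbers.
    ddd_costs = [
        (12.0, 2.0),
        (30.0, 5.0),
        (8.0, 4.0),
    ]

    ratios = [adv / comp for adv, comp in ddd_costs]
    print("cost ratios:", [round(r, 1) for r in ratios], "median:", median(ratios))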
Results: We found 158 medical advertisements for 35 prescription drugs published in 24 issues during 2015, with a median of 7 advertisements per issue (range 0 to 11). Four drug groups and 5 single drugs were advertised 6 times or more, for a total of 10 indications, and we made 14 comparisons with older treatments. We found: (1) 'no added benefit' in 4 (29%) of 14 comparisons, 'uncertain benefits' in 7 (50%), and 'no evidence' in 3 (21%) comparisons. In no comparison did we find evidence of 'substantial added benefit' for the new drug; (2) advertised drugs were 2 to 196 times (median 6) more expensive per Defined Daily Dose; (3) 11 safety announcements for five advertised drugs were issued compared to one announcement for one comparator drug; (4) 20 post-marketing studies (7 completed, 13 ongoing) were requested for the advertised drugs versus 10 studies (4 completed, 6 ongoing) for the comparator drugs, and 7 studies (2 completed, 5 ongoing) assessed both an advertised and a comparator drug at 3 year follow-up.
Conclusions and relevance: In this cross-sectional study of medical advertisements published in the Journal of the Danish Medical Association during 2015, the most advertised drugs did not have documented substantial added benefits over older treatments, whereas they were substantially more expensive. From January 2021, the Journal of the Danish Medical Association no longer publishes medical advertisements.
{"title":"Cross-sectional study of medical advertisements in a national general medical journal: evidence, cost, and safe use of advertised versus comparative drugs.","authors":"Kim Boesen, Anders Lykkemark Simonsen, Karsten Juhl Jørgensen, Peter C Gøtzsche","doi":"10.1186/s41073-021-00111-9","DOIUrl":"https://doi.org/10.1186/s41073-021-00111-9","url":null,"abstract":"<p><strong>Background: </strong>Healthcare professionals are exposed to advertisements for prescription drugs in medical journals. Such advertisements may increase prescriptions of new drugs at the expense of older treatments even when they have no added benefits, are more harmful, and are more expensive. The publication of medical advertisements therefore raises ethical questions related to editorial integrity.</p><p><strong>Methods: </strong>We conducted a descriptive cross-sectional study of all medical advertisements published in the Journal of the Danish Medical Association in 2015. Drugs advertised 6 times or more were compared with older comparators: (1) comparative evidence of added benefit; (2) Defined Daily Dose cost; (3) regulatory safety announcements; and (4) completed and ongoing post-marketing studies 3 years after advertising.</p><p><strong>Results: </strong>We found 158 medical advertisements for 35 prescription drugs published in 24 issues during 2015, with a median of 7 advertisements per issue (range 0 to 11). Four drug groups and 5 single drugs were advertised 6 times or more, for a total of 10 indications, and we made 14 comparisons with older treatments. We found: (1) 'no added benefit' in 4 (29%) of 14 comparisons, 'uncertain benefits' in 7 (50%), and 'no evidence' in 3 (21%) comparisons. In no comparison did we find evidence of 'substantial added benefit' for the new drug; (2) advertised drugs were 2 to 196 times (median 6) more expensive per Defined Daily Dose; (3) 11 safety announcements for five advertised drugs were issued compared to one announcement for one comparator drug; (4) 20 post-marketing studies (7 completed, 13 ongoing) were requested for the advertised drugs versus 10 studies (4 completed, 6 ongoing) for the comparator drugs, and 7 studies (2 completed, 5 ongoing) assessed both an advertised and a comparator drug at 3 year follow-up.</p><p><strong>Conclusions and relevance: </strong>In this cross-sectional study of medical advertisements published in the Journal of the Danish Medical Association during 2015, the most advertised drugs did not have documented substantial added benefits over older treatments, whereas they were substantially more expensive. From January 2021, the Journal of the Danish Medical Association no longer publishes medical advertisements.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"8"},"PeriodicalIF":0.0,"publicationDate":"2021-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-021-00111-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38968548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-05-03 | DOI: 10.1186/s41073-021-00110-w
Explaining variance in perceived research misbehavior: results from a survey among academic researchers in Amsterdam (Research integrity and peer review 6:7)
Tamarinde Haven, Joeri Tijdink, Brian Martinson, Lex Bouter, Frans Oort
Background: Concerns about research misbehavior in academic science have sparked interest in the factors that may explain research misbehavior. Often three clusters of factors are distinguished: individual factors, climate factors and publication factors. Our research question was: to what extent can individual, climate and publication factors explain the variance in frequently perceived research misbehaviors?
Methods: From May 2017 until July 2017, we conducted a survey study among academic researchers in Amsterdam. The survey included three measurement instruments whose individual results we have reported previously; here we integrate those findings.
Results: One thousand two hundred ninety-eight researchers completed the survey (response rate: 17%). Results showed that individual, climate and publication factors combined explained 34% of variance in perceived frequency of research misbehavior. Individual factors explained 7%, climate factors explained 22% and publication factors 16%.
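A rough sketch of this kind of block-wise variance decomposition: fit an ordinary least squares model per cluster of predictors and read off R-squared. The synthetic columns below merely stand in for the validated survey instruments, which are not reproduced here:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    individual = rng.normal(size=(n, 3))   # e.g. individual-level measures
    climate = rng.normal(size=(n, 4))      # perceived research climate scales
    publication = rng.normal(size=(n, 2))  # perceived publication pressure
    y = climate @ np.array([0.4, 0.3, 0.2, 0.1]) \
        + publication @ np.array([0.3, 0.2]) + rng.normal(size=n)

    def r_squared(X, y):
        X1 = np.column_stack([np.ones(len(y)), X])     # add intercept
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)  # OLS fit
        resid = y - X1 @ beta
        return 1 - resid.var() / y.var()

    blocks = {"individual": individual, "climate": climate, "publication": publication,
              "combined": np.hstack([individual, climate, publication])}
    for name, X in blocks.items():
        print(name, round(r_squared(X, y), 2))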
Conclusions: Our results suggest that the perceptions of the research climate play a substantial role in explaining variance in research misbehavior. This suggests that efforts to improve departmental norms might have a salutary effect on behavior.
{"title":"Explaining variance in perceived research misbehavior: results from a survey among academic researchers in Amsterdam.","authors":"Tamarinde Haven, Joeri Tijdink, Brian Martinson, Lex Bouter, Frans Oort","doi":"10.1186/s41073-021-00110-w","DOIUrl":"https://doi.org/10.1186/s41073-021-00110-w","url":null,"abstract":"<p><strong>Background: </strong>Concerns about research misbehavior in academic science have sparked interest in the factors that may explain research misbehavior. Often three clusters of factors are distinguished: individual factors, climate factors and publication factors. Our research question was: to what extent can individual, climate and publication factors explain the variance in frequently perceived research misbehaviors?</p><p><strong>Methods: </strong>From May 2017 until July 2017, we conducted a survey study among academic researchers in Amsterdam. The survey included three measurement instruments that we previously reported individual results of and here we integrate these findings.</p><p><strong>Results: </strong>One thousand two hundred ninety-eight researchers completed the survey (response rate: 17%). Results showed that individual, climate and publication factors combined explained 34% of variance in perceived frequency of research misbehavior. Individual factors explained 7%, climate factors explained 22% and publication factors 16%.</p><p><strong>Conclusions: </strong>Our results suggest that the perceptions of the research climate play a substantial role in explaining variance in research misbehavior. This suggests that efforts to improve departmental norms might have a salutary effect on behavior.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"7"},"PeriodicalIF":0.0,"publicationDate":"2021-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-021-00110-w","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38944409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-15 | DOI: 10.1186/s41073-021-00109-3
Cooperation & Liaison between Universities & Editors (CLUE): recommendations on best practice (Research integrity and peer review 6:6)
Elizabeth Wager, Sabine Kleinert
Background: Inaccurate, false or incomplete research publications may mislead readers, including researchers and decision-makers. It is therefore important that such problems are identified and rectified promptly. This usually involves collaboration between the research institutions and academic journals involved, but these interactions can be problematic.
Methods: These recommendations were developed following discussions at World Conferences on Research Integrity in 2013 and 2017, and at a specially convened 3-day workshop in 2016 involving participants from 7 countries with expertise in publication ethics and research integrity. The recommendations aim to address issues surrounding cooperation and liaison between institutions (e.g. universities) and journals about possible and actual problems with the integrity of reported research arising before and after publication.
Results: The main recommendations are that research institutions should: 1) develop mechanisms for assessing the integrity of reported research (if concerns are raised) that are distinct from processes to determine whether individual researchers have committed misconduct; 2) release relevant sections of reports of research integrity or misconduct investigations to all journals that have published research that was investigated; 3) take responsibility for research performed under their auspices regardless of whether the researcher still works at that institution or how long ago the work was done; 4) work with funders to ensure essential research data is retained for at least 10 years. Journals should: 1) respond to institutions about research integrity cases in a timely manner; 2) have criteria for determining whether, and what type of, information and evidence relating to the integrity of research reports should be passed on to institutions; 3) pass on research integrity concerns to institutions, regardless of whether they intend to accept the work for publication; 4) retain peer review records for at least 10 years to enable the investigation of peer review manipulation or other inappropriate behaviour by authors or reviewers.
Conclusions: Various difficulties can prevent effective cooperation between academic journals and research institutions about research integrity concerns and hinder the correction of the research record if problems are discovered. While the issues and their solutions may vary across different settings, we encourage research institutions, journals and funders to consider how they might improve future collaboration and cooperation on research integrity cases.
{"title":"Cooperation & Liaison between Universities & Editors (CLUE): recommendations on best practice.","authors":"Elizabeth Wager, Sabine Kleinert","doi":"10.1186/s41073-021-00109-3","DOIUrl":"10.1186/s41073-021-00109-3","url":null,"abstract":"<p><strong>Background: </strong>Inaccurate, false or incomplete research publications may mislead readers including researchers and decision-makers. It is therefore important that such problems are identified and rectified promptly. This usually involves collaboration between the research institutions and academic journals involved, but these interactions can be problematic.</p><p><strong>Methods: </strong>These recommendations were developed following discussions at World Conferences on Research Integrity in 2013 and 2017, and at a specially convened 3-day workshop in 2016 involving participants from 7 countries with expertise in publication ethics and research integrity. The recommendations aim to address issues surrounding cooperation and liaison between institutions (e.g. universities) and journals about possible and actual problems with the integrity of reported research arising before and after publication.</p><p><strong>Results: </strong>The main recommendations are that research institutions should: 1) develop mechanisms for assessing the integrity of reported research (if concerns are raised) that are distinct from processes to determine whether individual researchers have committed misconduct; 2) release relevant sections of reports of research integrity or misconduct investigations to all journals that have published research that was investigated; 3) take responsibility for research performed under their auspices regardless of whether the researcher still works at that institution or how long ago the work was done; 4) work with funders to ensure essential research data is retained for at least 10 years. Journals should: 1) respond to institutions about research integrity cases in a timely manner; 2) have criteria for determining whether, and what type of, information and evidence relating to the integrity of research reports should be passed on to institutions; 3) pass on research integrity concerns to institutions, regardless of whether they intend to accept the work for publication; 4) retain peer review records for at least 10 years to enable the investigation of peer review manipulation or other inappropriate behaviour by authors or reviewers.</p><p><strong>Conclusions: </strong>Various difficulties can prevent effective cooperation between academic journals and research institutions about research integrity concerns and hinder the correction of the research record if problems are discovered. 
While the issues and their solutions may vary across different settings, we encourage research institutions, journals and funders to consider how they might improve future collaboration and cooperation on research integrity cases.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"6"},"PeriodicalIF":7.2,"publicationDate":"2021-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8048029/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25590216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-03-01 | DOI: 10.1186/s41073-021-00108-4
Steps toward preregistration of research on research integrity (Research integrity and peer review 6:5)
Klaas Sijtsma, Wilco H M Emons, Nicholas H Steneck, Lex M Bouter
Background: A proposal to encourage the preregistration of research on research integrity was developed and adopted as the Amsterdam Agenda at the 5th World Conference on Research Integrity (Amsterdam, 2017). This paper reports on the degree to which abstracts of the 6th World Conference on Research Integrity (Hong Kong, 2019) reported on preregistered research.
Methods: Conference registration data on participants presenting a paper or a poster at the 6th WCRI were made available to the research team. Because the data set was too small for inferential statistics, this report is limited to a basic description of the results and some recommendations that should be considered when taking further steps to improve preregistration.
Results: 19% of the 308 presenters preregistered their research. Of the 56 usable cases, fewer than half provided information on the six key elements of the Amsterdam Agenda. Others provided information that invalidated their data, such as an uninformative URL. There was no discernible difference between qualitative and quantitative research.
Conclusions: Some presenters at the WCRI have preregistered their research on research integrity, but further steps are needed to increase frequency and completeness of preregistration. One approach to increase preregistration would be to make it a requirement for research presented at the World Conferences on Research Integrity.
{"title":"Steps toward preregistration of research on research integrity.","authors":"Klaas Sijtsma, Wilco H M Emons, Nicholas H Steneck, Lex M Bouter","doi":"10.1186/s41073-021-00108-4","DOIUrl":"10.1186/s41073-021-00108-4","url":null,"abstract":"<p><strong>Background: </strong>A proposal to encourage the preregistration of research on research integrity was developed and adopted as the Amsterdam Agenda at the 5th World Conference on Research Integrity (Amsterdam, 2017). This paper reports on the degree to which abstracts of the 6th World Conference in Research Integrity (Hong Kong, 2019) reported on preregistered research.</p><p><strong>Methods: </strong>Conference registration data on participants presenting a paper or a poster at 6th WCRI were made available to the research team. Because the data set was too small for inferential statistics this report is limited to a basic description of results and some recommendations that should be considered when taking further steps to improve preregistration.</p><p><strong>Results: </strong>19% of the 308 presenters preregistered their research. Of the 56 usable cases, less than half provided information on the six key elements of the Amsterdam Agenda. Others provided information that invalidated their data, such as an uninformative URL. There was no discernable difference between qualitative and quantitative research.</p><p><strong>Conclusions: </strong>Some presenters at the WCRI have preregistered their research on research integrity, but further steps are needed to increase frequency and completeness of preregistration. One approach to increase preregistration would be to make it a requirement for research presented at the World Conferences on Research Integrity.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"5"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7923522/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25425863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-02-16 | DOI: 10.1186/s41073-020-00107-x
Re-evaluation of solutions to the problem of unprofessionalism in peer review (Research integrity and peer review 6:4)
Travis G Gerwing, Alyssa M Allen Gerwing, Chi-Yeung Choi, Stephanie Avery-Gomm, Jeff C Clements, Joshua A Rash
Our recent paper (https://doi.org/10.1186/s41073-020-00096-x) reported that 43% of reviewer comment sets (n=1491) shared with authors contained at least one unprofessional comment or an incomplete, inaccurate, or unsubstantiated critique (IIUC). Publication of this work sparked an online (i.e., Twitter, Instagram, Facebook, and Reddit) conversation surrounding professionalism in peer review. We collected and analyzed these social media comments, as they offered real-time responses to our work and provided insight into the views held by commenters and potential peer-reviewers that would be difficult to quantify using existing empirical tools (96 comments from July 24th to September 3rd, 2020). Overall, 75% of comments were positive, of which 59% were supportive and 16% shared similar personal experiences. However, a subset of negative comments emerged (22% of comments were negative and 6% were unsubstantiated critiques of the methodology) that provided potential insight into why unprofessional comments are made during the peer-review process. These comments were classified into three main themes: (1) forced niceness will adversely impact the peer-review process and allow for publication of poor-quality science (5% of online comments); (2) dismissing comments as not offensive to another person because they were not deemed personally offensive to the reader (6%); and (3) authors brought unprofessional comments upon themselves by submitting substandard work (5%). Here, we argue against these themes as justifications for directing unprofessional comments towards authors during the peer-review process. We argue that it is possible to be both critical and professional, and that no author deserves to be the recipient of demeaning ad hominem attacks regardless of supposed provocation. Suggesting otherwise only serves to propagate a toxic culture within peer review. While we previously postulated that establishing a peer-reviewer code of conduct could help improve the peer-review system, we now posit that priority should be given to repairing the negative cultural zeitgeist that exists in peer review.
{"title":"Re-evaluation of solutions to the problem of unprofessionalism in peer review.","authors":"Travis G Gerwing, Alyssa M Allen Gerwing, Chi-Yeung Choi, Stephanie Avery-Gomm, Jeff C Clements, Joshua A Rash","doi":"10.1186/s41073-020-00107-x","DOIUrl":"https://doi.org/10.1186/s41073-020-00107-x","url":null,"abstract":"<p><p>Our recent paper ( https://doi.org/10.1186/s41073-020-00096-x ) reported that 43% of reviewer comment sets (n=1491) shared with authors contained at least one unprofessional comment or an incomplete, inaccurate of unsubstantiated critique (IIUC). Publication of this work sparked an online (i.e., Twitter, Instagram, Facebook, and Reddit) conversation surrounding professionalism in peer review. We collected and analyzed these social media comments as they offered real-time responses to our work and provided insight into the views held by commenters and potential peer-reviewers that would be difficult to quantify using existing empirical tools (96 comments from July 24th to September 3rd, 2020). Overall, 75% of comments were positive, of which 59% were supportive and 16% shared similar personal experiences. However, a subset of negative comments emerged (22% of comments were negative and 6% were an unsubstantiated critique of the methodology), that provided potential insight into the reasons underlying unprofessional comments were made during the peer-review process. These comments were classified into three main themes: (1) forced niceness will adversely impact the peer-review process and allow for publication of poor-quality science (5% of online comments); (2) dismissing comments as not offensive to another person because they were not deemed personally offensive to the reader (6%); and (3) authors brought unprofessional comments upon themselves as they submitted substandard work (5%). Here, we argue against these themes as justifications for directing unprofessional comments towards authors during the peer review process. We argue that it is possible to be both critical and professional, and that no author deserves to be the recipient of demeaning ad hominem attacks regardless of supposed provocation. Suggesting otherwise only serves to propagate a toxic culture within peer review. While we previously postulated that establishing a peer-reviewer code of conduct could help improve the peer-review system, we now posit that priority should be given to repairing the negative cultural zeitgeist that exists in peer-review.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"4"},"PeriodicalIF":0.0,"publicationDate":"2021-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-020-00107-x","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25375244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-02-01 | DOI: 10.1186/s41073-020-00106-y
Estimating the prevalence of text overlap in biomedical conference abstracts (Research integrity and peer review 6:2)
Nick Kinney, Araba Wubah, Miguel Roig, Harold R Garner
Background: Scientists communicate progress and exchange information via publication and presentation at scientific meetings. We previously showed that text similarity analysis applied to Medline can identify and quantify plagiarism and duplicate publications in peer-reviewed biomedical journals. In the present study, we applied the same analysis to a large sample of conference abstracts.
Methods: We downloaded 144,149 abstracts from 207 national and international meetings of 63 biomedical conferences. Pairwise comparisons were made using eTBLAST, a text-similarity engine. A domain expert then reviewed random samples of highly similar abstracts (1500 total) to estimate the extent of text overlap and possible plagiarism.
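eTBLAST itself is not reproduced here; as a stand-in, the sketch below flags highly similar abstract pairs using TF-IDF cosine similarity via scikit-learn. A real screen of ~144,000 abstracts would need blocking or approximate nearest-neighbour search rather than the full pairwise matrix:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    abstracts = [
        "We measured text overlap in conference abstracts.",
        "Text overlap in conference abstracts was measured by us.",
        "An unrelated study of drug advertisements in journals.",
    ]

    # Vectorize and compute all pairwise cosine similarities.
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
    sim = cosine_similarity(tfidf)

    # Report pairs above an arbitrary threshold for expert review.
    for i in range(len(abstracts)):
        for j in range(i + 1, len(abstracts)):
            if sim[i, j] > 0.5:
                print(f"abstracts {i} and {j}: similarity {sim[i, j]:.2f}")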
Results: Our main findings indicate that the vast majority of textual overlap occurred within the same meeting (2%) and between meetings of the same conference (3%), both of which were significantly higher than instances of plagiarism, which occurred in less than 0.5% of abstracts.
Conclusions: This analysis indicates that textual overlap in abstracts of papers presented at scientific meetings is one-tenth that of peer-reviewed publications, yet the plagiarism rate is approximately the same as previously measured in peer-reviewed publications. This latter finding underscores a need for monitoring scientific meeting submissions - as is now done when submitting manuscripts to peer-reviewed journals - to improve the integrity of scientific communications.
{"title":"Estimating the prevalence of text overlap in biomedical conference abstracts.","authors":"Nick Kinney, Araba Wubah, Miguel Roig, Harold R Garner","doi":"10.1186/s41073-020-00106-y","DOIUrl":"https://doi.org/10.1186/s41073-020-00106-y","url":null,"abstract":"<p><strong>Background: </strong>Scientists communicate progress and exchange information via publication and presentation at scientific meetings. We previously showed that text similarity analysis applied to Medline can identify and quantify plagiarism and duplicate publications in peer-reviewed biomedical journals. In the present study, we applied the same analysis to a large sample of conference abstracts.</p><p><strong>Methods: </strong>We downloaded 144,149 abstracts from 207 national and international meetings of 63 biomedical conferences. Pairwise comparisons were made using eTBLAST: a text similarity engine. A domain expert then reviewed random samples of highly similar abstracts (1500 total) to estimate the extent of text overlap and possible plagiarism.</p><p><strong>Results: </strong>Our main findings indicate that the vast majority of textual overlap occurred within the same meeting (2%) and between meetings of the same conference (3%), both of which were significantly higher than instances of plagiarism, which occurred in less than .5% of abstracts.</p><p><strong>Conclusions: </strong>This analysis indicates that textual overlap in abstracts of papers presented at scientific meetings is one-tenth that of peer-reviewed publications, yet the plagiarism rate is approximately the same as previously measured in peer-reviewed publications. This latter finding underscores a need for monitoring scientific meeting submissions - as is now done when submitting manuscripts to peer-reviewed journals - to improve the integrity of scientific communications.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"2"},"PeriodicalIF":0.0,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-020-00106-y","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25313727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}