Gender disparity in publication records: a qualitative study of women researchers in computing and engineering
Pub Date: 2021-12-01 | DOI: 10.1186/s41073-021-00117-3
Mohammad Hosseini, Shiva Sharifzad
Background: The current paper follows up on the results of an exploratory quantitative analysis that compared the publication and citation records of men and women researchers affiliated with the Faculty of Computing and Engineering at Dublin City University (DCU) in Ireland. Quantitative analysis of publications between 2013 and 2018 showed that women researchers had fewer publications, received fewer citations per person, and participated less often in international collaborations. Given the significance of publications for pursuing an academic career, we used qualitative methods to understand these differences and explore factors that, according to women researchers, have contributed to this disparity.
Methods: Sixteen women researchers from DCU's Faculty of Computing and Engineering were interviewed using a semi-structured questionnaire. Once interviews were transcribed and anonymised, they were coded by both authors in two rounds using an inductive approach.
Results: Interviewed women believed that their opportunities for research engagement and research funding, collaborations, publications and promotions are negatively impacted by gender roles, implicit gender biases, their own high professional standards, family responsibilities, nationality and negative perceptions of their expertise and accomplishments.
Conclusions: Our study has found that women in DCU's Faculty of Computing and Engineering face challenges that, according to those interviewed, negatively affect their engagement in various research activities, and, therefore, have contributed to their lower publication record. We suggest that while affirmative programmes aiming to correct disparities are necessary, they are more likely to improve organisational culture if they are implemented in parallel with bottom-up initiatives that engage all parties, including men researchers and non-academic partners, to inform and sensitise them about the significance of gender equity.
{"title":"Gender disparity in publication records: a qualitative study of women researchers in computing and engineering.","authors":"Mohammad Hosseini, Shiva Sharifzad","doi":"10.1186/s41073-021-00117-3","DOIUrl":"https://doi.org/10.1186/s41073-021-00117-3","url":null,"abstract":"<p><strong>Background: </strong>The current paper follows up on the results of an exploratory quantitative analysis that compared the publication and citation records of men and women researchers affiliated with the Faculty of Computing and Engineering at Dublin City University (DCU) in Ireland. Quantitative analysis of publications between 2013 and 2018 showed that women researchers had fewer publications, received fewer citations per person, and participated less often in international collaborations. Given the significance of publications for pursuing an academic career, we used qualitative methods to understand these differences and explore factors that, according to women researchers, have contributed to this disparity.</p><p><strong>Methods: </strong>Sixteen women researchers from DCU's Faculty of Computing and Engineering were interviewed using a semi-structured questionnaire. Once interviews were transcribed and anonymised, they were coded by both authors in two rounds using an inductive approach.</p><p><strong>Results: </strong>Interviewed women believed that their opportunities for research engagement and research funding, collaborations, publications and promotions are negatively impacted by gender roles, implicit gender biases, their own high professional standards, family responsibilities, nationality and negative perceptions of their expertise and accomplishments.</p><p><strong>Conclusions: </strong>Our study has found that women in DCU's Faculty of Computing and Engineering face challenges that, according to those interviewed, negatively affect their engagement in various research activities, and, therefore, have contributed to their lower publication record. We suggest that while affirmative programmes aiming to correct disparities are necessary, they are more likely to improve organisational culture if they are implemented in parallel with bottom-up initiatives that engage all parties, including men researchers and non-academic partners, to inform and sensitise them about the significance of gender equity.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"15"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8632200/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39679575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Peer review reduces spin in PCORI research reports
Pub Date: 2021-12-01 | DOI: 10.1186/s41073-021-00119-1
Evan Mayo-Wilson, Meredith L Phillips, Avonne E Connor, Kelly J Vander Ley, Kevin Naaman, Mark Helfand
Background: The Patient-Centered Outcomes Research Institute (PCORI) is obligated to peer review and to post publicly "Final Research Reports" of all funded projects. PCORI peer review emphasizes adherence to PCORI's Methodology Standards and principles of ethical scientific communication. During the peer review process, reviewers and editors seek to ensure that results are presented objectively and interpreted appropriately, e.g., free of spin.
Methods: Two independent raters assessed PCORI peer review feedback sent to authors. We calculated the proportion of reports in which spin was identified during peer review, and the types of spin identified. We included reports submitted by April 2018 with at least one associated journal article. The same raters then assessed whether authors addressed reviewers' comments about spin. The raters also assessed whether spin identified during PCORI peer review was present in related journal articles.
Results: We included 64 PCORI-funded projects. Peer reviewers or editors identified spin in 55/64 (86%) submitted research reports. Types of spin included reporting bias (46/55; 84%), inappropriate interpretation (40/55; 73%), inappropriate extrapolation of results (15/55; 27%), and inappropriate attribution of causality (5/55; 9%). Authors addressed comments about spin related to 47/55 (85%) of the reports. Of 110 associated journal articles, PCORI comments about spin were potentially applicable to 44/110 (40%) articles, of which 27/44 (61%) contained the same spin that was identified in the PCORI research report. The proportion of articles with spin was similar for articles accepted before and after PCORI peer review (63% vs 58%).
Discussion: Just as spin is common in journal articles and press releases, we found that most reports submitted to PCORI included spin. While most spin was mitigated during the funder's peer review process, we found no evidence that review of PCORI reports influenced spin in journal articles. Funders could explore interventions aimed at reducing spin in published articles of studies they support.
{"title":"Peer review reduces spin in PCORI research reports.","authors":"Evan Mayo-Wilson, Meredith L Phillips, Avonne E Connor, Kelly J Vander Ley, Kevin Naaman, Mark Helfand","doi":"10.1186/s41073-021-00119-1","DOIUrl":"https://doi.org/10.1186/s41073-021-00119-1","url":null,"abstract":"<p><strong>Background: </strong>The Patient-Centered Outcomes Research Institute (PCORI) is obligated to peer review and to post publicly \"Final Research Reports\" of all funded projects. PCORI peer review emphasizes adherence to PCORI's Methodology Standards and principles of ethical scientific communication. During the peer review process, reviewers and editors seek to ensure that results are presented objectively and interpreted appropriately, e.g., free of spin.</p><p><strong>Methods: </strong>Two independent raters assessed PCORI peer review feedback sent to authors. We calculated the proportion of reports in which spin was identified during peer review, and the types of spin identified. We included reports submitted by April 2018 with at least one associated journal article. The same raters then assessed whether authors addressed reviewers' comments about spin. The raters also assessed whether spin identified during PCORI peer review was present in related journal articles.</p><p><strong>Results: </strong>We included 64 PCORI-funded projects. Peer reviewers or editors identified spin in 55/64 (86%) submitted research reports. Types of spin included reporting bias (46/55; 84%), inappropriate interpretation (40/55; 73%), inappropriate extrapolation of results (15/55; 27%), and inappropriate attribution of causality (5/55; 9%). Authors addressed comments about spin related to 47/55 (85%) of the reports. Of 110 associated journal articles, PCORI comments about spin were potentially applicable to 44/110 (40%) articles, of which 27/44 (61%) contained the same spin that was identified in the PCORI research report. The proportion of articles with spin was similar for articles accepted before and after PCORI peer review (63% vs 58%).</p><p><strong>Discussion: </strong>Just as spin is common in journal articles and press releases, we found that most reports submitted to PCORI included spin. While most spin was mitigated during the funder's peer review process, we found no evidence that review of PCORI reports influenced spin in journal articles. Funders could explore interventions aimed at reducing spin in published articles of studies they support.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"16"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8638354/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39768548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transparency of peer review: a semi-structured interview study with chief editors from social sciences and humanities
Pub Date: 2021-11-18 | DOI: 10.1186/s41073-021-00116-4
Veli-Matti Karhulahti, Hans-Joachim Backe
Background: Open peer review practices are increasing in medicine and life sciences, but in social sciences and humanities (SSH) they are still rare. We aimed to map out how editors of respected SSH journals perceive open peer review, how they balance policy, ethics, and pragmatism in the review processes they oversee, and how they view their own power in the process.
Methods: We conducted 12 pre-registered semi-structured interviews with editors of respected SSH journals. Interviews consisted of 21 questions and lasted an average of 67 min. Interviews were transcribed, descriptively coded, and organized into code families.
Results: SSH editors saw the benefits of anonymized peer review as outweighing those of open peer review. They considered anonymized peer review the "gold standard" that authors and editors are expected to follow to respect institutional policies; anonymized review was also perceived as ethically superior, owing to the protection it provides, and as more pragmatic, because it eases the search for reviewers. Finally, editors acknowledged their power in the publication process and reported strategies for keeping their work as unbiased as possible.
Conclusions: Editors of SSH journals preferred the benefits of anonymized peer review over those of open peer review and acknowledged the power they hold in a publication process during which authors are almost completely disclosed to editorial bodies. We recommend that journals communicate the transparency elements of their manuscript review processes by listing all bodies who contributed to the decision at every review stage.
{"title":"Transparency of peer review: a semi-structured interview study with chief editors from social sciences and humanities.","authors":"Veli-Matti Karhulahti, Hans-Joachim Backe","doi":"10.1186/s41073-021-00116-4","DOIUrl":"https://doi.org/10.1186/s41073-021-00116-4","url":null,"abstract":"<p><strong>Background: </strong>Open peer review practices are increasing in medicine and life sciences, but in social sciences and humanities (SSH) they are still rare. We aimed to map out how editors of respected SSH journals perceive open peer review, how they balance policy, ethics, and pragmatism in the review processes they oversee, and how they view their own power in the process.</p><p><strong>Methods: </strong>We conducted 12 pre-registered semi-structured interviews with editors of respected SSH journals. Interviews consisted of 21 questions and lasted an average of 67 min. Interviews were transcribed, descriptively coded, and organized into code families.</p><p><strong>Results: </strong>SSH editors saw anonymized peer review benefits to outweigh those of open peer review. They considered anonymized peer review the \"gold standard\" that authors and editors are expected to follow to respect institutional policies; moreover, anonymized review was also perceived as ethically superior due to the protection it provides, and more pragmatic due to eased seeking of reviewers. Finally, editors acknowledged their power in the publication process and reported strategies for keeping their work as unbiased as possible.</p><p><strong>Conclusions: </strong>Editors of SSH journals preferred the benefits of anonymized peer review over open peer and acknowledged the power they hold in the publication process during which authors are almost completely disclosed to editorial bodies. We recommend journals to communicate the transparency elements of their manuscript review processes by listing all bodies who contributed to the decision on every review stage.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"13"},"PeriodicalIF":0.0,"publicationDate":"2021-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8598274/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39721579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A billion-dollar donation: estimating the cost of researchers' time spent on peer review
Pub Date: 2021-11-14 | DOI: 10.1186/s41073-021-00118-2
Balazs Aczel, Barnabas Szaszi, Alex O Holcombe
Background: The amount and value of researchers' peer review work are critical for academia and journal publishing. However, this labor is under-recognized, its magnitude is unknown, and alternative ways of organizing peer review labor are rarely considered.
Methods: Using publicly available data, we provide an estimate of researchers' time and the salary-based contribution to the journal peer review system.
Results: We found that the total time reviewers globally worked on peer reviews was over 100 million hours in 2020, equivalent to over 15 thousand years. The estimated monetary value of the time US-based reviewers spent on reviews was over 1.5 billion USD in 2020. For China-based reviewers, the estimate is over 600 million USD, and for UK-based, close to 400 million USD.
Conclusions: By design, our results are very likely underestimates, as they reflect only a portion of the total number of journals worldwide. The numbers highlight the enormous amount of work and time that researchers provide to the publication system, and the importance of considering alternative ways of structuring, and paying for, peer review. We advance this discussion by outlining some alternative models that aim to boost the benefits of peer review, thus improving its cost-benefit ratio.
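To make the arithmetic behind such an estimate concrete, here is a minimal back-of-envelope sketch in Python. Every input figure (manuscript counts, reviews per manuscript, hours per review, country shares, hourly salaries) is a placeholder assumption for illustration, not a value taken from the paper:

# Back-of-envelope sketch of a salary-based estimate of global peer review
# time. All inputs below are illustrative assumptions, NOT the paper's data.

reviewed_manuscripts = 5_800_000   # assumed manuscripts reviewed worldwide in 2020
reviews_per_manuscript = 3.0       # assumed average number of review reports
hours_per_review = 6.0             # assumed average hours spent per report

total_hours = reviewed_manuscripts * reviews_per_manuscript * hours_per_review
print(f"Total review time: {total_hours / 1e6:.0f} million hours")

# Country-level monetary value: the share of reviews attributed to a country
# multiplied by an assumed average hourly researcher salary in that country.
assumed_country_inputs = {
    # country: (share of world reviews, hourly salary in USD) - both assumed
    "US": (0.20, 60.0),
    "China": (0.10, 35.0),
    "UK": (0.05, 45.0),
}
for country, (share, hourly_usd) in assumed_country_inputs.items():
    value_usd = total_hours * share * hourly_usd
    print(f"{country}: ~{value_usd / 1e9:.2f} billion USD")

Varying these assumptions moves the totals considerably; the paper's published figures rest on its own documented inputs.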
{"title":"A billion-dollar donation: estimating the cost of researchers' time spent on peer review.","authors":"Balazs Aczel, Barnabas Szaszi, Alex O Holcombe","doi":"10.1186/s41073-021-00118-2","DOIUrl":"https://doi.org/10.1186/s41073-021-00118-2","url":null,"abstract":"<p><strong>Background: </strong>The amount and value of researchers' peer review work is critical for academia and journal publishing. However, this labor is under-recognized, its magnitude is unknown, and alternative ways of organizing peer review labor are rarely considered.</p><p><strong>Methods: </strong>Using publicly available data, we provide an estimate of researchers' time and the salary-based contribution to the journal peer review system.</p><p><strong>Results: </strong>We found that the total time reviewers globally worked on peer reviews was over 100 million hours in 2020, equivalent to over 15 thousand years. The estimated monetary value of the time US-based reviewers spent on reviews was over 1.5 billion USD in 2020. For China-based reviewers, the estimate is over 600 million USD, and for UK-based, close to 400 million USD.</p><p><strong>Conclusions: </strong>By design, our results are very likely to be under-estimates as they reflect only a portion of the total number of journals worldwide. The numbers highlight the enormous amount of work and time that researchers provide to the publication system, and the importance of considering alternative ways of structuring, and paying for, peer review. We foster this process by discussing some alternative models that aim to boost the benefits of peer review, thus improving its cost-benefit ratio.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"14"},"PeriodicalIF":0.0,"publicationDate":"2021-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8591820/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39622221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Individual versus general structured feedback to improve agreement in grant peer review: a randomized controlled trial
Pub Date: 2021-09-30 | DOI: 10.1186/s41073-021-00115-5
Jan-Ole Hesselberg, Knut Inge Fostervold, Pål Ulleberg, Ida Svege
Background: Vast sums are distributed based on grant peer review, but studies show that interrater reliability is often low. In this study, we tested the effect of receiving two short individual feedback reports compared to one short general feedback report on the agreement between reviewers.
Methods: A total of 42 reviewers at the Norwegian Foundation Dam were randomly assigned to receive either a general feedback report or an individual feedback report. The general feedback group received one report before the start of the reviews that contained general information about the previous call in which the reviewers participated. In the individual feedback group, the reviewers received two reports, one before the review period (based on the previous call) and one during the period (based on the current call). In the individual feedback group, the reviewers were presented with detailed information on their scoring compared with the review committee as a whole, both before and during the review period. The main outcomes were the proportion of agreement in the eligibility assessment and the average difference in scores between pairs of reviewers assessing the same proposal. The outcomes were measured in 2017 and after the feedback was provided in 2018.
Results: A total of 2398 paired reviews were included in the analysis. There was a significant difference between the two groups in the proportion of absolute agreement on whether the proposal was eligible for the funding programme, with the general feedback group demonstrating a higher rate of agreement. There was no difference between the two groups in terms of the average score difference. However, the agreement regarding the proposal score remained critically low for both groups.
Conclusions: We did not observe changes in proposal score agreement between 2017 and 2018 in reviewers receiving different feedback. The low levels of agreement remain a major concern in grant peer review, and research to identify contributing factors as well as the development and testing of interventions to increase agreement rates are still needed.
Trial registration: The study was preregistered at OSF.io/n4fq3.
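As a reading aid, the trial's two primary outcomes can be expressed in a few lines of Python. The paired data below are fabricated for illustration; the study's actual analysis follows its preregistration:

# Sketch of the two outcome measures on hypothetical paired reviews:
# (1) proportion of absolute agreement on eligibility, and
# (2) average absolute difference between the two reviewers' scores.
pairs = [
    # (reviewer A says eligible, reviewer B says eligible, A's score, B's score)
    (True, True, 4.0, 5.0),
    (True, False, 3.0, 6.0),
    (True, True, 5.5, 5.0),
]

eligibility_agreement = sum(a == b for a, b, _, _ in pairs) / len(pairs)
avg_score_difference = sum(abs(sa - sb) for _, _, sa, sb in pairs) / len(pairs)

print(f"Agreement on eligibility: {eligibility_agreement:.2f}")   # 0.67
print(f"Average score difference: {avg_score_difference:.2f}")    # 1.50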
{"title":"Individual versus general structured feedback to improve agreement in grant peer review: a randomized controlled trial.","authors":"Jan-Ole Hesselberg, Knut Inge Fostervold, Pål Ulleberg, Ida Svege","doi":"10.1186/s41073-021-00115-5","DOIUrl":"10.1186/s41073-021-00115-5","url":null,"abstract":"<p><strong>Background: </strong>Vast sums are distributed based on grant peer review, but studies show that interrater reliability is often low. In this study, we tested the effect of receiving two short individual feedback reports compared to one short general feedback report on the agreement between reviewers.</p><p><strong>Methods: </strong>A total of 42 reviewers at the Norwegian Foundation Dam were randomly assigned to receive either a general feedback report or an individual feedback report. The general feedback group received one report before the start of the reviews that contained general information about the previous call in which the reviewers participated. In the individual feedback group, the reviewers received two reports, one before the review period (based on the previous call) and one during the period (based on the current call). In the individual feedback group, the reviewers were presented with detailed information on their scoring compared with the review committee as a whole, both before and during the review period. The main outcomes were the proportion of agreement in the eligibility assessment and the average difference in scores between pairs of reviewers assessing the same proposal. The outcomes were measured in 2017 and after the feedback was provided in 2018.</p><p><strong>Results: </strong>A total of 2398 paired reviews were included in the analysis. There was a significant difference between the two groups in the proportion of absolute agreement on whether the proposal was eligible for the funding programme, with the general feedback group demonstrating a higher rate of agreement. There was no difference between the two groups in terms of the average score difference. However, the agreement regarding the proposal score remained critically low for both groups.</p><p><strong>Conclusions: </strong>We did not observe changes in proposal score agreement between 2017 and 2018 in reviewers receiving different feedback. The low levels of agreement remain a major concern in grant peer review, and research to identify contributing factors as well as the development and testing of interventions to increase agreement rates are still needed.</p><p><strong>Trial registration: </strong>The study was preregistered at OSF.io/n4fq3 .</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"12"},"PeriodicalIF":0.0,"publicationDate":"2021-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8485516/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39474032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Strengthening the incentives for responsible research practices in Australian health and medical research funding
Pub Date: 2021-08-02 | DOI: 10.1186/s41073-021-00113-7
Joanna Diong, Cynthia M Kroeger, Katherine J Reynolds, Adrian Barnett, Lisa A Bero
Background: Australian health and medical research funders support substantial research efforts, and incentives within grant funding schemes influence researcher behaviour. We aimed to determine to what extent Australian health and medical funders incentivise responsible research practices.
Methods: We conducted an audit of instructions from research grant and fellowship schemes. Eight national research grants and fellowships were purposively sampled to select schemes that awarded the largest amount of funds. The funding scheme instructions were assessed against 9 criteria to determine to what extent they incentivised these responsible research and reporting practices: (1) publicly register study protocols before starting data collection, (2) register analysis protocols before starting data analysis, (3) make study data openly available, (4) make analysis code openly available, (5) make research materials openly available, (6) discourage use of publication metrics, (7) conduct quality research (e.g. adhere to reporting guidelines), (8) collaborate with a statistician, and (9) adhere to other responsible research practices. Each criterion was answered using one of the following responses: "Instructed", "Encouraged", or "No mention".
Results: Across the 8 schemes from 5 funders, applicants were instructed or encouraged to address a median of 4 (range 0 to 5) of the 9 criteria. Three criteria received no mention in any scheme (register analysis protocols, make analysis code open, collaborate with a statistician). Importantly, most incentives did not seem strong, as applicants were only instructed to register study protocols, discourage the use of publication metrics, and conduct quality research. Other criteria were encouraged but not required.
Conclusions: Funders could strengthen the incentives for responsible research practices by requiring grant and fellowship applicants to implement these practices in their proposals. Administering institutions could be required to implement these practices to be eligible for funding. Strongly rewarding researchers for implementing robust research practices could lead to sustained improvements in the quality of health and medical research.
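The audit's tally lends itself to a small worked example. Only the three-level response scale is taken from the methods above; the scheme names and ratings below are fabricated:

# Sketch of the audit tally: each scheme receives one of three responses
# ("Instructed", "Encouraged", "No mention") per criterion; we count how
# many of the 9 criteria each scheme addresses at all. Ratings are invented.
from statistics import median

ratings = {
    "Scheme A": ["Instructed", "No mention", "Encouraged", "No mention",
                 "Encouraged", "Instructed", "Instructed", "No mention", "No mention"],
    "Scheme B": ["Instructed", "No mention", "No mention", "No mention",
                 "Encouraged", "No mention", "Instructed", "No mention", "Encouraged"],
}

addressed = {scheme: sum(r != "No mention" for r in rs) for scheme, rs in ratings.items()}
print(addressed)                              # {'Scheme A': 5, 'Scheme B': 4}
print("median:", median(addressed.values()))  # 4.5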
{"title":"Strengthening the incentives for responsible research practices in Australian health and medical research funding.","authors":"Joanna Diong, Cynthia M Kroeger, Katherine J Reynolds, Adrian Barnett, Lisa A Bero","doi":"10.1186/s41073-021-00113-7","DOIUrl":"10.1186/s41073-021-00113-7","url":null,"abstract":"<p><strong>Background: </strong>Australian health and medical research funders support substantial research efforts, and incentives within grant funding schemes influence researcher behaviour. We aimed to determine to what extent Australian health and medical funders incentivise responsible research practices.</p><p><strong>Methods: </strong>We conducted an audit of instructions from research grant and fellowship schemes. Eight national research grants and fellowships were purposively sampled to select schemes that awarded the largest amount of funds. The funding scheme instructions were assessed against 9 criteria to determine to what extent they incentivised these responsible research and reporting practices: (1) publicly register study protocols before starting data collection, (2) register analysis protocols before starting data analysis, (3) make study data openly available, (4) make analysis code openly available, (5) make research materials openly available, (6) discourage use of publication metrics, (7) conduct quality research (e.g. adhere to reporting guidelines), (8) collaborate with a statistician, and (9) adhere to other responsible research practices. Each criterion was answered using one of the following responses: \"Instructed\", \"Encouraged\", or \"No mention\".</p><p><strong>Results: </strong>Across the 8 schemes from 5 funders, applicants were instructed or encouraged to address a median of 4 (range 0 to 5) of the 9 criteria. Three criteria received no mention in any scheme (register analysis protocols, make analysis code open, collaborate with a statistician). Importantly, most incentives did not seem strong as applicants were only instructed to register study protocols, discourage use of publication metrics and conduct quality research. Other criteria were encouraged but were not required.</p><p><strong>Conclusions: </strong>Funders could strengthen the incentives for responsible research practices by requiring grant and fellowship applicants to implement these practices in their proposals. Administering institutions could be required to implement these practices to be eligible for funding. Strongly rewarding researchers for implementing robust research practices could lead to sustained improvements in the quality of health and medical research.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"11"},"PeriodicalIF":0.0,"publicationDate":"2021-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8328133/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39277405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correction to: Cross-sectional study of medical advertisements in a national general medical journal: evidence, cost, and safe use of advertised versus comparative drugs
Pub Date: 2021-06-11 | DOI: 10.1186/s41073-021-00114-6
Kim Boesen, Anders Lykkemark Simonsen, Karsten Juhl Jørgensen, Peter C Gøtzsche
{"title":"Correction to: Cross-sectional study of medical advertisements in a national general medical journal: evidence, cost, and safe use of advertised versus comparative drugs.","authors":"Kim Boesen, Anders Lykkemark Simonsen, Karsten Juhl Jørgensen, Peter C Gøtzsche","doi":"10.1186/s41073-021-00114-6","DOIUrl":"https://doi.org/10.1186/s41073-021-00114-6","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"10"},"PeriodicalIF":0.0,"publicationDate":"2021-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-021-00114-6","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39086140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating implementation of the Transparency and Openness Promotion (TOP) guidelines: the TRUST process for rating journal policies, procedures, and practices
Pub Date: 2021-06-02 | DOI: 10.1186/s41073-021-00112-8
Evan Mayo-Wilson, Sean Grant, Lauren Supplee, Sina Kianersi, Afsah Amin, Alex DeHaven, David Mellor
Background: The Transparency and Openness Promotion (TOP) Guidelines describe modular standards that journals can adopt to promote open science. The TOP Factor is a metric to describe the extent to which journals have adopted the TOP Guidelines in their policies. Systematic methods and rating instruments are needed to calculate the TOP Factor. Moreover, implementation of these open science policies depends on journal procedures and practices, for which TOP provides no standards or rating instruments.
Methods: We describe a process for assessing journal policies, procedures, and practices according to the TOP Guidelines. We developed this process as part of the Transparency of Research Underpinning Social Intervention Tiers (TRUST) Initiative to advance open science in the social intervention research ecosystem. We also provide new instruments for rating journal instructions to authors (policies), manuscript submission systems (procedures), and published articles (practices) according to standards in the TOP Guidelines. In addition, we describe how to determine the TOP Factor score for a journal, calculate reliability of journal ratings, and assess coherence among a journal's policies, procedures, and practices. As a demonstration of this process, we describe a protocol for studying approximately 345 influential journals that have published research used to inform evidence-based policy.
Discussion: The TRUST Process includes systematic methods and rating instruments for assessing and facilitating implementation of the TOP Guidelines by journals across disciplines. Our study of journals publishing influential social intervention research will provide a comprehensive account of whether these journals have policies, procedures, and practices that are consistent with standards for open science and thereby facilitate the publication of trustworthy findings to inform evidence-based policy. Through this demonstration, we expect to identify ways to refine the TOP Guidelines and the TOP Factor. Refinements could include: improving templates for adoption in journal instructions to authors, manuscript submission systems, and published articles; revising explanatory guidance intended to enhance the use, understanding, and dissemination of the TOP Guidelines; and clarifying the distinctions among different levels of implementation. Research materials are available on the Open Science Framework: https://osf.io/txyr3/.
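As a schematic reading of how such a metric can work, the sketch below sums per-standard adoption levels. The standard names follow the TOP Guidelines, but the 0-3 levels and the plain summation are a simplification for illustration, not the official COS scoring procedure:

# Simplified TOP Factor-style score: each modular standard is rated at an
# adoption level from 0 (not implemented) to 3 (most stringent), and the
# journal's score is the sum across standards. Schematic only.
TOP_STANDARDS = [
    "data citation",
    "data transparency",
    "analytic code transparency",
    "materials transparency",
    "design and analysis reporting",
    "study preregistration",
    "analysis plan preregistration",
    "replication",
]

def top_factor(levels):
    """Sum adoption levels (0-3) over the standards; unrated ones count as 0."""
    for standard, level in levels.items():
        if standard not in TOP_STANDARDS or not 0 <= level <= 3:
            raise ValueError(f"bad rating: {standard}={level}")
    return sum(levels.get(standard, 0) for standard in TOP_STANDARDS)

print(top_factor({"data transparency": 2, "study preregistration": 1}))  # 3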
{"title":"Evaluating implementation of the Transparency and Openness Promotion (TOP) guidelines: the TRUST process for rating journal policies, procedures, and practices.","authors":"Evan Mayo-Wilson, Sean Grant, Lauren Supplee, Sina Kianersi, Afsah Amin, Alex DeHaven, David Mellor","doi":"10.1186/s41073-021-00112-8","DOIUrl":"10.1186/s41073-021-00112-8","url":null,"abstract":"<p><strong>Background: </strong>The Transparency and Openness Promotion (TOP) Guidelines describe modular standards that journals can adopt to promote open science. The TOP Factor is a metric to describe the extent to which journals have adopted the TOP Guidelines in their policies. Systematic methods and rating instruments are needed to calculate the TOP Factor. Moreover, implementation of these open science policies depends on journal procedures and practices, for which TOP provides no standards or rating instruments.</p><p><strong>Methods: </strong>We describe a process for assessing journal policies, procedures, and practices according to the TOP Guidelines. We developed this process as part of the Transparency of Research Underpinning Social Intervention Tiers (TRUST) Initiative to advance open science in the social intervention research ecosystem. We also provide new instruments for rating journal instructions to authors (policies), manuscript submission systems (procedures), and published articles (practices) according to standards in the TOP Guidelines. In addition, we describe how to determine the TOP Factor score for a journal, calculate reliability of journal ratings, and assess coherence among a journal's policies, procedures, and practices. As a demonstration of this process, we describe a protocol for studying approximately 345 influential journals that have published research used to inform evidence-based policy.</p><p><strong>Discussion: </strong>The TRUST Process includes systematic methods and rating instruments for assessing and facilitating implementation of the TOP Guidelines by journals across disciplines. Our study of journals publishing influential social intervention research will provide a comprehensive account of whether these journals have policies, procedures, and practices that are consistent with standards for open science and thereby facilitate the publication of trustworthy findings to inform evidence-based policy. Through this demonstration, we expect to identify ways to refine the TOP Guidelines and the TOP Factor. Refinements could include: improving templates for adoption in journal instructions to authors, manuscript submission systems, and published articles; revising explanatory guidance intended to enhance the use, understanding, and dissemination of the TOP Guidelines; and clarifying the distinctions among different levels of implementation. Research materials are available on the Open Science Framework: https://osf.io/txyr3/ .</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"9"},"PeriodicalIF":0.0,"publicationDate":"2021-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8173977/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39055385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cross-sectional study of medical advertisements in a national general medical journal: evidence, cost, and safe use of advertised versus comparative drugs
Pub Date: 2021-05-10 | DOI: 10.1186/s41073-021-00111-9
Kim Boesen, Anders Lykkemark Simonsen, Karsten Juhl Jørgensen, Peter C Gøtzsche
Background: Healthcare professionals are exposed to advertisements for prescription drugs in medical journals. Such advertisements may increase prescriptions of new drugs at the expense of older treatments even when they have no added benefits, are more harmful, and are more expensive. The publication of medical advertisements therefore raises ethical questions related to editorial integrity.
Methods: We conducted a descriptive cross-sectional study of all medical advertisements published in the Journal of the Danish Medical Association in 2015. Drugs advertised 6 times or more were compared with older comparator drugs on four parameters: (1) comparative evidence of added benefit; (2) cost per Defined Daily Dose; (3) regulatory safety announcements; and (4) completed and ongoing post-marketing studies 3 years after advertising.
Results: We found 158 medical advertisements for 35 prescription drugs published in 24 issues during 2015, with a median of 7 advertisements per issue (range 0 to 11). Four drug groups and 5 single drugs were advertised 6 times or more, for a total of 10 indications, and we made 14 comparisons with older treatments. We found: (1) 'no added benefit' in 4 (29%) of 14 comparisons, 'uncertain benefits' in 7 (50%), and 'no evidence' in 3 (21%) comparisons. In no comparison did we find evidence of 'substantial added benefit' for the new drug; (2) advertised drugs were 2 to 196 times (median 6) more expensive per Defined Daily Dose; (3) 11 safety announcements for five advertised drugs were issued compared to one announcement for one comparator drug; (4) 20 post-marketing studies (7 completed, 13 ongoing) were requested for the advertised drugs versus 10 studies (4 completed, 6 ongoing) for the comparator drugs, and 7 studies (2 completed, 5 ongoing) assessed both an advertised and a comparator drug at 3 year follow-up.
Conclusions and relevance: In this cross-sectional study of medical advertisements published in the Journal of the Danish Medical Association during 2015, the most advertised drugs did not have documented substantial added benefits over older treatments, whereas they were substantially more expensive. From January 2021, the Journal of the Danish Medical Association no longer publishes medical advertisements.
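For readers unfamiliar with Defined Daily Dose (DDD) comparisons, the cost ratios reported above are computed as in this toy example, where all prices are fabricated:

# Toy DDD cost comparison: for each advertised drug, the price per Defined
# Daily Dose divided by the comparator drug's price per DDD. Prices invented.
from statistics import median

cost_per_ddd = [
    # (advertised drug USD per DDD, comparator USD per DDD)
    (12.0, 2.0),
    (30.0, 5.0),
    (8.0, 4.0),
]

ratios = [advertised / comparator for advertised, comparator in cost_per_ddd]
print([f"{r:.0f}x" for r in ratios])     # ['6x', '6x', '2x']
print(f"median: {median(ratios):.0f}x")  # 6x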
{"title":"Cross-sectional study of medical advertisements in a national general medical journal: evidence, cost, and safe use of advertised versus comparative drugs.","authors":"Kim Boesen, Anders Lykkemark Simonsen, Karsten Juhl Jørgensen, Peter C Gøtzsche","doi":"10.1186/s41073-021-00111-9","DOIUrl":"https://doi.org/10.1186/s41073-021-00111-9","url":null,"abstract":"<p><strong>Background: </strong>Healthcare professionals are exposed to advertisements for prescription drugs in medical journals. Such advertisements may increase prescriptions of new drugs at the expense of older treatments even when they have no added benefits, are more harmful, and are more expensive. The publication of medical advertisements therefore raises ethical questions related to editorial integrity.</p><p><strong>Methods: </strong>We conducted a descriptive cross-sectional study of all medical advertisements published in the Journal of the Danish Medical Association in 2015. Drugs advertised 6 times or more were compared with older comparators: (1) comparative evidence of added benefit; (2) Defined Daily Dose cost; (3) regulatory safety announcements; and (4) completed and ongoing post-marketing studies 3 years after advertising.</p><p><strong>Results: </strong>We found 158 medical advertisements for 35 prescription drugs published in 24 issues during 2015, with a median of 7 advertisements per issue (range 0 to 11). Four drug groups and 5 single drugs were advertised 6 times or more, for a total of 10 indications, and we made 14 comparisons with older treatments. We found: (1) 'no added benefit' in 4 (29%) of 14 comparisons, 'uncertain benefits' in 7 (50%), and 'no evidence' in 3 (21%) comparisons. In no comparison did we find evidence of 'substantial added benefit' for the new drug; (2) advertised drugs were 2 to 196 times (median 6) more expensive per Defined Daily Dose; (3) 11 safety announcements for five advertised drugs were issued compared to one announcement for one comparator drug; (4) 20 post-marketing studies (7 completed, 13 ongoing) were requested for the advertised drugs versus 10 studies (4 completed, 6 ongoing) for the comparator drugs, and 7 studies (2 completed, 5 ongoing) assessed both an advertised and a comparator drug at 3 year follow-up.</p><p><strong>Conclusions and relevance: </strong>In this cross-sectional study of medical advertisements published in the Journal of the Danish Medical Association during 2015, the most advertised drugs did not have documented substantial added benefits over older treatments, whereas they were substantially more expensive. From January 2021, the Journal of the Danish Medical Association no longer publishes medical advertisements.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"8"},"PeriodicalIF":0.0,"publicationDate":"2021-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-021-00111-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38968548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Explaining variance in perceived research misbehavior: results from a survey among academic researchers in Amsterdam
Pub Date: 2021-05-03 | DOI: 10.1186/s41073-021-00110-w
Tamarinde Haven, Joeri Tijdink, Brian Martinson, Lex Bouter, Frans Oort
Background: Concerns about research misbehavior in academic science have sparked interest in the factors that may explain it. Often three clusters of factors are distinguished: individual factors, climate factors and publication factors. Our research question was: to what extent can individual, climate and publication factors explain the variance in frequently perceived research misbehaviors?
Methods: From May 2017 until July 2017, we conducted a survey study among academic researchers in Amsterdam. The survey included three measurement instruments whose individual results we have reported previously; here we integrate those findings.
Results: One thousand two hundred ninety-eight researchers completed the survey (response rate: 17%). Results showed that individual, climate and publication factors combined explained 34% of variance in perceived frequency of research misbehavior. Individual factors explained 7%, climate factors explained 22% and publication factors 16%.
Conclusions: Our results suggest that the perceptions of the research climate play a substantial role in explaining variance in research misbehavior. This suggests that efforts to improve departmental norms might have a salutary effect on behavior.
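The variance percentages above come from regression models. A minimal sketch of that kind of comparison, on synthetic data with invented variable groupings, looks like this:

# Fit ordinary least squares with each cluster of predictors alone and with
# all clusters combined, then compare R-squared values. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
individual = rng.normal(size=(n, 2))    # stand-ins for individual factors
climate = rng.normal(size=(n, 3))       # stand-ins for climate factors
publication = rng.normal(size=(n, 2))   # stand-ins for publication factors
y = climate @ [0.5, 0.3, 0.2] + publication @ [0.4, 0.2] + rng.normal(size=n)

def r_squared(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])    # add an intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    residuals = y - X1 @ beta
    return 1 - residuals.var() / y.var()

blocks = [("individual", individual), ("climate", climate),
          ("publication", publication),
          ("combined", np.hstack([individual, climate, publication]))]
for name, X in blocks:
    print(f"{name:11s} R^2 = {r_squared(X, y):.2f}")

Because predictor clusters can be correlated, the per-cluster values need not sum to the combined figure, which is why 7%, 22% and 16% sit alongside a combined 34% above.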
{"title":"Explaining variance in perceived research misbehavior: results from a survey among academic researchers in Amsterdam.","authors":"Tamarinde Haven, Joeri Tijdink, Brian Martinson, Lex Bouter, Frans Oort","doi":"10.1186/s41073-021-00110-w","DOIUrl":"https://doi.org/10.1186/s41073-021-00110-w","url":null,"abstract":"<p><strong>Background: </strong>Concerns about research misbehavior in academic science have sparked interest in the factors that may explain research misbehavior. Often three clusters of factors are distinguished: individual factors, climate factors and publication factors. Our research question was: to what extent can individual, climate and publication factors explain the variance in frequently perceived research misbehaviors?</p><p><strong>Methods: </strong>From May 2017 until July 2017, we conducted a survey study among academic researchers in Amsterdam. The survey included three measurement instruments that we previously reported individual results of and here we integrate these findings.</p><p><strong>Results: </strong>One thousand two hundred ninety-eight researchers completed the survey (response rate: 17%). Results showed that individual, climate and publication factors combined explained 34% of variance in perceived frequency of research misbehavior. Individual factors explained 7%, climate factors explained 22% and publication factors 16%.</p><p><strong>Conclusions: </strong>Our results suggest that the perceptions of the research climate play a substantial role in explaining variance in research misbehavior. This suggests that efforts to improve departmental norms might have a salutary effect on behavior.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"7"},"PeriodicalIF":0.0,"publicationDate":"2021-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-021-00110-w","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38944409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}