Pub Date: 2019-05-07 | eCollection Date: 2019-01-01 | DOI: 10.1186/s41073-019-0068-4
Suzanne Day, Wei Wu, Robin Mason, Paula A Rochon
Background: Important sex and gender differences have been found in research on diabetes complications and treatment. Reporting on whether and how sex and gender impact research findings is crucial for developing tailored diabetes care strategies. To analyze the extent to which this information is available in current diabetes research, we examined original investigations on diabetes for the integration of sex and gender in study reporting.
Methods: We examined original investigations on diabetes published between January 1 and December 31, 2015, in the top five general medicine journals and top five diabetes-specific journals (by 2015 impact factor). Data were extracted on sex and gender integration across seven article sections: title, abstract, introduction, methods, results, discussion, and limitations.
Results: We identified 155 original investigations on diabetes, including 115 randomized controlled trials (RCTs) and 40 observational studies. Sex and gender were rarely incorporated in article titles, abstracts, and introductions. Most methods sections did not describe plans for sex/gender analyses; 47 (30.3%) articles described plans to control for sex/gender in the analysis and 12 (7.7%) described plans to stratify results by sex/gender. While most articles (151, 97.4%) reported the sex/gender of study participants, only 10 (6.5%) of all articles reported all study outcomes separately by sex/gender. Discussion of sex-related issues was incorporated into 21 (13.5%) original investigations; however, just 1 (0.6%) discussed gender-related issues. Comparison by journal type (general medicine vs. diabetes specific) yielded only minor differences from the overall integration results. In contrast, RCTs performed more poorly on multiple sex/gender assessment metrics than observational studies.
Conclusions: Sex and gender are poorly integrated in current diabetes original investigations, suggesting that substantial improvements in sex and gender data reporting are needed to inform the evidence to support sex- and gender-specific diabetes care.
Title: Measuring the data gap: inclusion of sex and gender reporting in diabetes research.
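As a quick arithmetic check, the percentages in the results above are plain proportions of the 155 included articles. A minimal sketch (counts taken from the abstract; the labels are paraphrases, not the authors' variable names):

```python
# Counts reported in the abstract; the denominator is all 155 articles.
counts = {
    "plans to control for sex/gender": 47,
    "plans to stratify by sex/gender": 12,
    "reported participants' sex/gender": 151,
    "all outcomes reported by sex/gender": 10,
    "discussed sex-related issues": 21,
    "discussed gender-related issues": 1,
}
total = 155
pct = {label: round(100 * n / total, 1) for label, n in counts.items()}
# -> 30.3, 7.7, 97.4, 6.5, 13.5 and 0.6, matching the abstract
```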
Pub Date: 2019-04-23 | eCollection Date: 2019-01-01 | DOI: 10.1186/s41073-019-0067-5
S Bressers, H van den Elzen, C Gräwe, D van den Oetelaar, P H A Postma, S K Schoustra
Background: Reducing the number of animals used in experiments has become a priority for the governments of many countries. For these reductions to occur, animal-free alternatives must be made more available and, crucially, must be embraced by researchers.
Methods: We conducted an international online survey for academics in the field of animal science (N = 367) to explore researchers' attitudes towards the implementation of animal-free innovations. Through this survey, we address three key questions. The first question is whether scientists who use animals in their research consider governmental goals for animal-free innovations achievable and whether they would support such goals. Secondly, responders were asked to rank the importance of ten roadblocks that could hamper the implementation of animal-free innovations. Finally, responders were asked whether they would migrate (either themselves or their research) if increased animal research regulations in their country of residence restricted their research.
Results: While 40% of the responders supported governmental goals, a majority (71%) did not consider such goals achievable in their field in the near future. In terms of roadblocks to the implementation of animal-free methods, ~80% of the responders considered 'reliability' important, making it the most highly ranked roadblock. However, most responders rated all other roadblocks as at least somewhat important, suggesting that they must also be considered when addressing animal-free innovations. Importantly, a majority reported that they would consider migration to another country in response to a restrictive animal research policy. If policies are too strict or suitable animal-free alternatives are unavailable, governments therefore risk a 'brain drain' of researchers migrating to other institutes, states, or countries.
Conclusion: Our findings suggest that development and implementation of animal-free innovations are hampered by multiple factors. We outline three pillars concerning education, governmental influence and data sharing, the implementation of which may help to overcome these roadblocks to animal-free innovations.
Title: Policy driven changes in animal research practices: mapping researchers' attitudes towards animal-free innovations using the Netherlands as an example.
Pub Date: 2019-04-09 | eCollection Date: 2019-01-01 | DOI: 10.1186/s41073-019-0066-6
Tamarinde L Haven, Marije Esther Evalien de Goede, Joeri K Tijdink, Frans Jeroen Oort
Background: The emphasis on impact factors and the quantity of publications intensifies competition between researchers. This competition was traditionally considered an incentive to produce high-quality work, but it also has unwanted side-effects, such as publication pressure. To measure the effect of publication pressure on researchers, the Publication Pressure Questionnaire (PPQ) was developed. In using the PPQ, however, several issues came to light that motivated a revision.
Method: We constructed two new subscales based on work stress models using the facet method. We administered the revised PPQ (PPQr) to a convenience sample together with the Maslach Burnout Inventory (MBI) and the Work Design Questionnaire (WDQ). To assess which items best measured publication pressure, we carried out a principal component analysis (PCA). Reliability was sufficient when Cronbach's alpha > 0.7. Finally, we administered the PPQr in a larger, independent sample of researchers to check the reliability of the revised version.
Results: Three components were identified: 'stress', 'attitude', and 'resources'. We selected 3 × 6 = 18 items with high loadings in the three-component solution. Based on the convenience sample, Cronbach's alphas were 0.83 for stress, 0.80 for attitude, and 0.76 for resources. We checked the validity of the PPQr by inspecting its correlations with the MBI and the WDQ: stress correlated 0.62 with the MBI's emotional exhaustion subscale, and resources correlated 0.50 with the relevant WDQ subscales. To assess the internal structure of the PPQr in the independent reliability sample, we again conducted a principal component analysis. The three-component solution explained 50% of the variance, and Cronbach's alphas were 0.80, 0.78, and 0.75 for stress, attitude, and resources, respectively.
Conclusion: We conclude that the PPQr is a valid and reliable instrument to measure publication pressure in academic researchers from all disciplinary fields. The PPQr strongly relates to burnout and could also be beneficial for policy makers and research institutions to assess the degree of publication pressure in their institute.
Title: Personally perceived publication pressure: revising the Publication Pressure Questionnaire (PPQ) by using work stress models.
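The reliability criterion used in the PPQr study above (Cronbach's alpha > 0.7 per subscale) is straightforward to compute from an item-score matrix. A minimal sketch on simulated questionnaire data (the sample below is fabricated, not the authors' data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Simulated 6-item subscale: one shared factor plus per-item noise,
# which yields an alpha in the "sufficient" range used by the study.
rng = np.random.default_rng(0)
factor = rng.normal(size=(200, 1))
scores = factor + rng.normal(scale=1.0, size=(200, 6))
alpha = cronbach_alpha(scores)
```

For this setup the population value is about 0.86, so the sample estimate clears the 0.7 threshold; a dimensionality check like the study's principal component analysis would then be run on the same matrix.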
Pub Date: 2019-03-29 | eCollection Date: 2019-01-01 | DOI: 10.1186/s41073-019-0065-7
M J E Urlings, B Duyx, G M H Swaen, L M Bouter, M P Zeegers
Introduction: Bisphenol A (BPA) is highly debated and has been studied in relation to a variety of health outcomes. This large variation in the literature makes BPA a topic prone to selective use of literature to underpin one's own findings and opinions. Over time, selective use of literature, by means of citations, can lead to skewed knowledge development and a biased scientific consensus. In this study, we assess which factors drive citation and whether this results in an overrepresentation of harmful health effects of BPA.
Methods: A citation network analysis was performed to test various determinants of citation. A systematic search identified all relevant publications on the human health effect of BPA. Data were extracted on potential determinants of selective citation, such as study outcome, study design, sample size, journal impact factor, authority of the author, self-citation, and funding source. We applied random effect logistic regression to assess whether these determinants influence the likelihood of citation.
Results: One hundred sixty-nine publications on BPA were identified, yielding 12,432 potential citation pathways, of which 808 resulted in a citation. The network consisted of 63 cross-sectional studies, 34 cohort studies, 29 case-control studies, 35 narrative reviews, and 8 systematic reviews. Positive studies had a 1.5 times greater chance of being cited than negative studies. Additionally, the authority of the author and self-citation were consistently positively associated with the likelihood of being cited. Overall, the network appears to be strongly influenced by two highly cited publications, whereas 60 of the 169 publications received no citations.
Conclusion: In the literature on BPA, citation is driven mostly by positive study outcomes and author-related factors, such as high authority within the network. Given the impact of these factors and the strong influence of a few highly cited publications, it is questionable to what extent knowledge development in the human literature on BPA is actually evidence-based.
Title: Selective citation in scientific literature on the human health effects of bisphenol A.
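The modelling step in the BPA citation study above (regressing whether each potential citation pathway resulted in a citation on characteristics of the citable paper) can be illustrated with ordinary logistic regression on simulated pathways. This is a simplified stand-in for the authors' random-effects model, and every number below is made up except the odds ratio of 1.5 taken from the abstract:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000  # simulated potential citation pathways
positive = rng.integers(0, 2, n)   # does the citable paper report a harmful effect?
authority = rng.normal(size=n)     # standardized authority of its author (assumed)

# True model: odds ratio of 1.5 for positive studies, as in the abstract
logit = -2.0 + np.log(1.5) * positive + 0.4 * authority
cited = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([positive, authority])
fit = LogisticRegression(C=1e6).fit(X, cited)  # large C ~ unpenalized fit
or_positive = float(np.exp(fit.coef_[0, 0]))   # recovered odds ratio, near 1.5
```

A random-effects version would additionally let the intercept vary per citing paper to absorb paper-level citation propensity.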
Pub Date: 2019-03-26 | eCollection Date: 2019-01-01 | DOI: 10.1186/s41073-019-0064-8
Christopher Baethge, Sandra Goldbeck-Wood, Stephan Mertens
Background: Narrative reviews are the most common type of article in the medical literature. However, unlike systematic reviews and reports of randomized controlled trials (RCTs), for which formal instruments exist to evaluate quality, there is currently no instrument available to assess the quality of narrative reviews. In response to this gap, we developed SANRA, the Scale for the Assessment of Narrative Review Articles.
Methods: A team of three experienced journal editors modified or deleted items in an earlier SANRA version based on face validity, item-total correlations, and reliability scores from previous tests. We deleted an item which addressed a manuscript's writing and accessibility due to poor inter-rater reliability. The six items which form the revised scale are rated from 0 (low standard) to 2 (high standard) and cover the following topics: explanation of (1) the importance and (2) the aims of the review, (3) literature search and (4) referencing and presentation of (5) evidence level and (6) relevant endpoint data. For all items, we developed anchor definitions and examples to guide users in filling out the form. The revised scale was tested by the same editors (blinded to each other's ratings) in a group of 30 consecutive non-systematic review manuscripts submitted to a general medical journal.
Results: Raters confirmed that completing the scale is feasible in everyday editorial work. The mean sum score across all 30 manuscripts was 6.0 out of 12 possible points (SD 2.6, range 1-12). Corrected item-total correlations ranged from 0.33 (item 3) to 0.58 (item 6), and Cronbach's alpha was 0.68 (internal consistency). The intra-class correlation coefficient (average measure) was 0.77 [95% CI 0.57, 0.88] (inter-rater reliability). Raters often disagreed on items 1 and 4.
Conclusions: SANRA's feasibility, inter-rater reliability, homogeneity of items, and internal consistency are sufficient for a scale of six items. Further field testing, particularly of validity, is desirable. We recommend rater training based on the "explanations and instructions" document provided with SANRA. In editorial decision-making, SANRA may complement journal-specific evaluation of manuscripts (pertaining to, e.g., audience, originality, or difficulty) and may contribute to improving the standard of non-systematic reviews.
Title: SANRA-a scale for the quality assessment of narrative review articles.
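The item statistics reported for SANRA above (corrected item-total correlations for a six-item scale scored 0-2) can be reproduced with a few lines of NumPy. The ratings below are simulated, not the authors' 30 manuscripts:

```python
import numpy as np

def corrected_item_total(scores):
    """Correlation of each item with the sum of the remaining items."""
    scores = np.asarray(scores, dtype=float)
    total = scores.sum(axis=1)
    return np.array([
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
        for j in range(scores.shape[1])
    ])

# Simulated ratings: 30 manuscripts, six items each scored 0, 1, or 2,
# driven by one latent "manuscript quality" factor plus item noise.
rng = np.random.default_rng(3)
latent = rng.normal(size=(30, 1))
items = np.clip(np.round(1 + latent + rng.normal(scale=0.8, size=(30, 6))), 0, 2)
r = corrected_item_total(items)  # one correlation per item
```

Subtracting the item from the total before correlating is what makes the statistic "corrected"; correlating an item with a total that still contains it inflates the estimate.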
C. F. D. Carneiro, Victor G. S. Queiroz, T. Moulin, Carlos A. M. Carvalho, C. Haas, Danielle Rayêe, D. Henshall, Evandro A. De-Souza, F. E. Amorim, Flávia Z. Boos, G. Guercio, Igor R. Costa, K. Hajdu, L. V. van Egmond, M. Modrák, Pedro B. Tan, Richard J. Abdill, S. Burgess, Sylvia F. S. Guerra, V. T. Bortoluzzi, O. Amaral
Background: Preprint usage is growing rapidly in the life sciences; however, questions remain about the quality of preprints relative to published articles. An objective and readily measurable dimension of quality is completeness of reporting, as transparency can improve the reader's ability to independently interpret data and reproduce findings.
Methods: In this observational study, we initially compared independent samples of articles published in bioRxiv and in PubMed-indexed journals in 2016 using a quality of reporting questionnaire. We then performed paired comparisons between bioRxiv preprints and their own peer-reviewed versions in journals.
Results: Peer-reviewed articles had, on average, higher quality of reporting than preprints, although the difference was small, with absolute differences of 5.0% [95% CI 1.4, 8.6] and 4.7% [95% CI 2.4, 7.0] of reported items in the independent-samples and paired-sample comparisons, respectively. There were larger differences favoring peer-reviewed articles in subjective ratings of how clearly titles and abstracts presented the main findings and how easy it was to locate relevant reporting information. Changes in reporting from preprint to peer-reviewed version did not correlate with the impact factor of the publication venue or with the time lag from bioRxiv to journal publication.
Conclusions: Our results suggest that, on average, publication in a peer-reviewed journal is associated with improved quality of reporting. They also show that the quality of reporting of preprints in the life sciences is within a similar range to that of peer-reviewed articles, albeit slightly lower on average, supporting the idea that preprints should be considered valid scientific contributions.
Title: Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature
Pub Date: 2019-03-22 | DOI: 10.1101/581892
Pub Date : 2019-02-27 DOI: 10.1186/s41073-019-0063-9
Tony Ross-Hellauer, Edit Görögh
Open peer review (OPR) is moving into the mainstream, but it is often poorly understood, and surveys of researcher attitudes show important barriers to implementation. As more journals move to implement and experiment with the myriad of innovations covered by this term, there is a clear need for best-practice guidelines for implementation. This brief article aims to address this knowledge gap, reporting work based on an interactive stakeholder workshop to create best-practice guidelines for editors and journals who wish to transition to OPR. Although the advice is aimed mainly at editors and publishers of scientific journals, since this is the area in which OPR is at its most mature, many of the principles may also be applicable to the implementation of OPR in other areas (e.g., books, conference submissions).
{"title":"Guidelines for open peer review implementation.","authors":"Tony Ross-Hellauer, Edit Görögh","doi":"10.1186/s41073-019-0063-9","DOIUrl":"10.1186/s41073-019-0063-9","url":null,"abstract":"<p><p>Open peer review (OPR) is moving into the mainstream, but it is often poorly understood and surveys of researcher attitudes show important barriers to implementation. As more journals move to implement and experiment with the myriad of innovations covered by this term, there is a clear need for best practice guidelines to guide implementation. This brief article aims to address this knowledge gap, reporting work based on an interactive stakeholder workshop to create best-practice guidelines for editors and journals who wish to transition to OPR. Although the advice is aimed mainly at editors and publishers of scientific journals, since this is the area in which OPR is at its most mature, many of the principles may also be applicable for the implementation of OPR in other areas (e.g., books, conference submissions).</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0063-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37045643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-02-19 DOI: 10.1186/s41073-019-0062-x
Andrew Grey, Mark Bolland, Greg Gamble, Alison Avenell
Background: Academic institutions play important roles in protecting and preserving research integrity. Concerns have been expressed about the objectivity, adequacy and transparency of institutional investigations of potentially compromised research integrity. We assessed the reports provided to us of investigations by three academic institutions of a large body of overlapping research with potentially compromised integrity.
Methods: In 2017, we raised concerns with four academic institutions about the integrity of > 200 publications co-authored by an overlapping set of researchers. Each institution initiated an investigation. By November 2018, three had reported to us the results of their investigations, but only one report was publicly available. Two investigators independently assessed each available report using a published 26-item checklist designed to determine the quality and adequacy of institutional investigations of research integrity. Each assessor recorded additional comments ad hoc.
Results: Concerns raised with the institutions were overlapping and wide-ranging, and included both general and publication-specific issues. The number of potentially affected publications at individual institutions ranged from 34 to 200. The duration of investigation by the three institutions that provided reports was 8-17 months. These investigations covered 14%, 15% and 77%, respectively, of potentially affected publications. Between-assessor agreement using the quality checklist was 0.68, 0.72 and 0.65 for the three reports, respectively. Only 4/78 individual checklist items were addressed adequately; a further 14 could not be assessed. Each report was graded inadequate overall. Reports failed to address publication-specific concerns and focussed more strongly on determining research misconduct than on evaluating the integrity of publications.
Conclusions: Our analyses identify important deficiencies in the quality and reporting of institutional investigation of concerns about the integrity of a large body of research reported by an overlapping set of researchers. They reinforce disquiet about the ability of institutions to rigorously and objectively oversee integrity of research conducted by their own employees.
{"title":"Quality of reports of investigations of research integrity by academic institutions.","authors":"Andrew Grey, Mark Bolland, Greg Gamble, Alison Avenell","doi":"10.1186/s41073-019-0062-x","DOIUrl":"10.1186/s41073-019-0062-x","url":null,"abstract":"<p><strong>Background: </strong>Academic institutions play important roles in protecting and preserving research integrity. Concerns have been expressed about the objectivity, adequacy and transparency of institutional investigations of potentially compromised research integrity. We assessed the reports provided to us of investigations by three academic institutions of a large body of overlapping research with potentially compromised integrity.</p><p><strong>Methods: </strong>In 2017, we raised concerns with four academic institutions about the integrity of > 200 publications co-authored by an overlapping set of researchers. Each institution initiated an investigation. By November 2018, three had reported to us the results of their investigations, but only one report was publicly available. Two investigators independently assessed each available report using a published 26-item checklist designed to determine the quality and adequacy of institutional investigations of research integrity. Each assessor recorded additional comments ad hoc.</p><p><strong>Results: </strong>Concerns raised with the institutions were overlapping, wide-ranging and included those which were both general and publication-specific. The number of potentially affected publications at individual institutions ranged from 34 to 200. The duration of investigation by the three institutions which provided reports was 8-17 months. These investigations covered 14%, 15% and 77%, respectively, of potentially affected publications. Between-assessor agreement using the quality checklist was 0.68, 0.72 and 0.65 for each report. Only 4/78 individual checklist items were addressed adequately: a further 14 could not be assessed. 
Each report was graded inadequate overall. Reports failed to address publication-specific concerns and focussed more strongly on determining research misconduct than evaluating the integrity of publications.</p><p><strong>Conclusions: </strong>Our analyses identify important deficiencies in the quality and reporting of institutional investigation of concerns about the integrity of a large body of research reported by an overlapping set of researchers. They reinforce disquiet about the ability of institutions to rigorously and objectively oversee integrity of research conducted by their own employees.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0062-x","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37173168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
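The abstract above reports between-assessor agreement of 0.68, 0.72 and 0.65 per report without naming the statistic used. As a minimal sketch, assuming simple proportion agreement over checklist items (the rating labels below are illustrative, not the published instrument's wording):

```python
def proportion_agreement(ratings_a, ratings_b):
    """Fraction of checklist items on which two assessors gave the same rating."""
    if len(ratings_a) != len(ratings_b):
        raise ValueError("assessors must rate the same set of items")
    same = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return same / len(ratings_a)

# Illustrative ratings for a 4-item excerpt of a checklist:
assessor_1 = ["adequate", "inadequate", "cannot assess", "inadequate"]
assessor_2 = ["adequate", "inadequate", "inadequate", "inadequate"]
agreement = proportion_agreement(assessor_1, assessor_2)  # 3 of 4 items match
```

If the published figures are instead a chance-corrected statistic such as Cohen's kappa, the calculation differs (it subtracts expected chance agreement); the abstract alone does not settle which was used.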
Pub Date : 2019-01-16 eCollection Date: 2019-01-01 DOI: 10.1186/s41073-018-0061-3
Eric Badu, Paul Okyere, Diane Bell, Naomi Gyamfi, Maxwell Peprah Opoku, Peter Agyei-Baffour, Anthony Kwaku Edusei
Introduction: Conference abstracts are important for informing participants about the results being communicated. However, reporting in conference abstracts in disability research is often poor. This paper aims to assess the reporting in the abstracts presented at the 5th African Network for Evidence-to-Action in Disability (AfriNEAD) Conference in Ghana.
Methods: This descriptive study extracted information from the abstracts presented at the 5th AfriNEAD Conference. Three reviewers independently reviewed all the included abstracts using a predefined data extraction form. Descriptive statistics were used to analyze the extracted information, using Stata version 15.
Results: Of the 76 abstracts assessed, 54 met the inclusion criteria and 22 were excluded. More than half of the included abstracts (32/54; 59.26%) reported studies conducted in Ghana. Many of the included abstracts did not report the study design (37/54; 68.5%), the type of analysis performed (30/54; 55.56%), the sampling approach (27/54; 50%), or the sample size (18/54; 33.33%). Almost none of the included abstracts reported the age distribution or gender of the participants.
Conclusion: The study findings confirm that there is poor reporting of methods and findings in conference abstracts. Future conference organizers should critically examine abstracts to ensure that these issues are adequately addressed, so that findings are effectively communicated to participants.
{"title":"Reporting in the abstracts presented at the 5th AfriNEAD (African Network for Evidence-to-Action in Disability) Conference in Ghana.","authors":"Eric Badu, Paul Okyere, Diane Bell, Naomi Gyamfi, Maxwell Peprah Opoku, Peter Agyei-Baffour, Anthony Kwaku Edusei","doi":"10.1186/s41073-018-0061-3","DOIUrl":"https://doi.org/10.1186/s41073-018-0061-3","url":null,"abstract":"<p><strong>Introduction: </strong>The abstracts of a conference are important for informing the participants about the results that are communicated. However, there is poor reporting in conference abstracts in disability research. This paper aims to assess the reporting in the abstracts presented at the 5th African Network for Evidence-to-Action in Disability (AfriNEAD) Conference in Ghana.</p><p><strong>Methods: </strong>This descriptive study extracted information from the abstracts presented at the 5th AfriNEAD Conference. Three reviewers independently reviewed all the included abstracts using a predefined data extraction form. Descriptive statistics were used to analyze the extracted information, using Stata version 15.</p><p><strong>Results: </strong>Of the 76 abstracts assessed, 54 met the inclusion criteria, while 22 were excluded. More than half of all the included abstracts (32/54; 59.26%) were studies conducted in Ghana. Some of the included abstracts did not report on the study design (37/54; 68.5%), the type of analysis performed (30/54; 55.56%), the sampling (27/54; 50%), and the sample size (18/54; 33.33%). Almost all the included abstracts did not report the age distribution and the gender of the participants.</p><p><strong>Conclusion: </strong>The study findings confirm that there is poor reporting of methods and findings in conference abstracts. 
Future conference organizers should critically examine abstracts to ensure that these issues are adequately addressed, so that findings are effectively communicated to participants.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-018-0061-3","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36939596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
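Figures such as "37/54; 68.5%" above are simple tabulations of how many abstracts failed to report each item. The original analysis used Stata version 15; the following is a hypothetical Python equivalent, assuming each abstract has been coded as yes/no flags per reporting item (the item names and data are illustrative):

```python
def missing_rates(abstracts, items):
    """For each reporting item, count abstracts that did NOT report it,
    returning (count, percentage) pairs over all coded abstracts."""
    n = len(abstracts)
    out = {}
    for item in items:
        missing = sum(1 for a in abstracts if not a.get(item, False))
        out[item] = (missing, round(100 * missing / n, 1))
    return out

# Illustrative coding of four abstracts (True = item was reported):
coded = [
    {"study_design": True, "sample_size": True},
    {"study_design": False, "sample_size": True},
    {"sample_size": False},  # study_design absent -> treated as not reported
    {"study_design": False, "sample_size": True},
]
rates = missing_rates(coded, ["study_design", "sample_size"])
```

Applied to the real data, `round(100 * 37 / 54, 1)` reproduces the 68.5% quoted for unreported study design.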
Pub Date : 2019-01-09 eCollection Date: 2019-01-01 DOI: 10.1186/s41073-018-0060-4
Rik Peels
A large number of scientists and several news platforms have, over the last few years, been speaking of a replication crisis in various academic disciplines, especially the biomedical and social sciences. This paper answers the novel question of whether we should also pursue replication in the humanities. First, I create more conceptual clarity by defining, in addition to the term "humanities," various key terms in the debate on replication, such as "reproduction" and "replicability." In doing so, I pay attention to what is supposed to be the object of replication: certain studies, particular inferences, or specific results. After that, I spell out three reasons for thinking that replication in the humanities is not possible and argue that they are unconvincing. Subsequently, I give a more detailed case for thinking that replication in the humanities is possible. Finally, I explain why such replication in the humanities is not only possible, but also desirable.
{"title":"Replicability and replication in the humanities.","authors":"Rik Peels","doi":"10.1186/s41073-018-0060-4","DOIUrl":"10.1186/s41073-018-0060-4","url":null,"abstract":"<p><p>A large number of scientists and several news platforms have, over the last few years, been speaking of a replication crisis in various academic disciplines, especially the biomedical and social sciences. This paper answers the novel question of whether we should also pursue replication in the humanities. First, I create more conceptual clarity by defining, in addition to the term \"humanities,\" various key terms in the debate on replication, such as \"reproduction\" and \"replicability.\" In doing so, I pay attention to what is supposed to be the object of replication: certain studies, particular inferences, of specific results. After that, I spell out three reasons for thinking that replication in the humanities is not possible and argue that they are unconvincing. Subsequently, I give a more detailed case for thinking that replication in the humanities is possible. Finally, I explain why such replication in the humanities is not only possible, but also desirable.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6348612/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36918266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}