Good Practice for Conference Abstracts and Presentations: GPCAP
Pub Date: 2019-06-05 | eCollection Date: 2019-01-01 | DOI: 10.1186/s41073-019-0070-x
Cate Foster, Elizabeth Wager, Jackie Marchington, Mina Patel, Steve Banner, Nina C Kennard, Antonia Panayi, Rianne Stacey
Research that has been sponsored by pharmaceutical, medical device and biotechnology companies is often presented at scientific and medical conferences. However, practices vary between organizations and it can be difficult to follow both individual conference requirements and good publication practice guidelines. Until now, no specific guidelines or recommendations have been available to describe best practice for conference presentations. This document was developed by a working group of publication professionals and uploaded to PeerJ Preprints for consultation prior to publication; an additional 67 medical societies, medical conference sites and conference companies were also asked to comment. The resulting recommendations aim to complement current good publication practice and authorship guidelines, outline the general principles of best practice for conference presentations and provide recommendations around authorship, contributorship, financial transparency, prior publication and copyright, to conference organizers, authors and industry professionals. While the authors of this document recognize that individual conference guidelines should be respected, they urge organizers to consider authorship criteria and data transparency when designing submission sites and setting parameters around word/character count and content for abstracts. It is also important to recognize that conference presentations have different limitations to full journal publications, for example, in the case of limited audiences that necessitate refocused abstracts, or where lead authors do not speak the local language, and these have been acknowledged accordingly. The authors also recognize the need for further clarity regarding copyright of previously published abstracts and have made recommendations to assist with best practice. By following Good Practice for Conference Abstracts and Presentations: GPCAP recommendations, industry professionals, authors and conference organizers will improve consistency, transparency and integrity of publications submitted to conferences worldwide.
{"title":"Good Practice for Conference Abstracts and Presentations: GPCAP.","authors":"Cate Foster, Elizabeth Wager, Jackie Marchington, Mina Patel, Steve Banner, Nina C Kennard, Antonia Panayi, Rianne Stacey","doi":"10.1186/s41073-019-0070-x","DOIUrl":"10.1186/s41073-019-0070-x","url":null,"abstract":"<p><p>Research that has been sponsored by pharmaceutical, medical device and biotechnology companies is often presented at scientific and medical conferences. However, practices vary between organizations and it can be difficult to follow both individual conference requirements and good publication practice guidelines. Until now, no specific guidelines or recommendations have been available to describe best practice for conference presentations. This document was developed by a working group of publication professionals and uploaded to PeerJ Preprints for consultation prior to publication; an additional 67 medical societies, medical conference sites and conference companies were also asked to comment. The resulting recommendations aim to complement current good publication practice and authorship guidelines, outline the general principles of best practice for conference presentations and provide recommendations around authorship, contributorship, financial transparency, prior publication and copyright, to conference organizers, authors and industry professionals. While the authors of this document recognize that individual conference guidelines should be respected, they urge organizers to consider authorship criteria and data transparency when designing submission sites and setting parameters around word/character count and content for abstracts. It is also important to recognize that conference presentations have different limitations to full journal publications, for example, in the case of limited audiences that necessitate refocused abstracts, or where lead authors do not speak the local language, and these have been acknowledged accordingly. The authors also recognize the need for further clarity regarding copyright of previously published abstracts and have made recommendations to assist with best practice. By following Good Practice for Conference Abstracts and Presentations: GPCAP recommendations, industry professionals, authors and conference organizers will improve consistency, transparency and integrity of publications submitted to conferences worldwide.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"4 ","pages":"11"},"PeriodicalIF":0.0,"publicationDate":"2019-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0070-x","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37315202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The advantages of peer review over arbitration for resolving authorship disputes
Pub Date: 2019-05-30 | eCollection Date: 2019-01-01 | DOI: 10.1186/s41073-019-0071-9
Zubin Master, Evelyn Tenenbaum
A recent commentary argued for arbitration to resolve authorship disputes within academic research settings, explaining that current mechanisms to resolve conflicts produce unclear outcomes and that the institutional power vested in senior investigators could compromise fairness. We argue here that arbitration is not a suitable means to resolve disputes among researchers in academia because it remains unclear who would assume the costs of arbitration, the rules of evidence do not apply to arbitration, and decisions are binding and very difficult to appeal. Instead of arbitration, we advocate for peer-based approaches involving a peer review committee and research ethics consultation to help resolve authorship disagreements. We describe the composition of an institutional peer review committee to address authorship disputes. Both of these mechanisms exist, or can be formed, within academic institutions and offer several advantages to researchers, who are likely to shy away from legalistic processes and gravitate towards those handled by their peers. Peer-based approaches are cheaper than arbitration, and the experts involved have knowledge about academic publishing and the culture of research in the specific field. Decisions by knowledgeable and neutral experts could reduce bias, carry greater authority, and be appealed. Not only can peer-based approaches be leveraged to resolve authorship disagreements, but they may also enhance collegiality and promote a healthy team environment.
{"title":"The advantages of peer review over arbitration for resolving authorship disputes.","authors":"Zubin Master, Evelyn Tenenbaum","doi":"10.1186/s41073-019-0071-9","DOIUrl":"https://doi.org/10.1186/s41073-019-0071-9","url":null,"abstract":"<p><p>A recent commentary argued for arbitration to resolve authorship disputes within academic research settings explaining that current mechanisms to resolve conflicts result in unclear outcomes and institutional power vested in senior investigators could compromise fairness. We argue here that arbitration is not a suitable means to resolve disputes among researchers in academia because it remains unclear who will assume the costs of arbitration, the rules of evidence do not apply to arbitration, and decisions are binding and very difficult to appeal. Instead of arbitration, we advocate for peer-based approaches involving a peer review committee and research ethics consultation to help resolve authorship disagreements. We describe the composition of an institutional peer review committee to address authorship disputes. Both of these mechanisms are found, or can be formed, within academic institutions and offer several advantages to researchers who are likely to shy away from legalistic processes and gravitate towards those handled by their peers. Peer-based approaches are cheaper than arbitration and the experts involved have knowledge about academic publishing and the culture of research in the specific field. Decisions by knowledgeable and neutral experts could reduce bias, have greater authority, and could be appealed. Not only can peer-based approaches be leveraged to resolve authorship disagreements, but they may also enhance collegiality and promote a healthy team environment.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"4 ","pages":"10"},"PeriodicalIF":0.0,"publicationDate":"2019-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0071-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37302995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measuring the data gap: inclusion of sex and gender reporting in diabetes research
Pub Date: 2019-05-07 | eCollection Date: 2019-01-01 | DOI: 10.1186/s41073-019-0068-4
Suzanne Day, Wei Wu, Robin Mason, Paula A Rochon
Background: Important sex and gender differences have been found in research on diabetes complications and treatment. Reporting on whether and how sex and gender impact research findings is crucial for developing tailored diabetes care strategies. To analyze the extent to which this information is available in current diabetes research, we examined original investigations on diabetes for the integration of sex and gender in study reporting.
Methods: We examined original investigations on diabetes published between January 1 and December 31, 2015, in the top five general medicine journals and top five diabetes-specific journals (by 2015 impact factor). Data were extracted on sex and gender integration across seven article sections: title, abstract, introduction, methods, results, discussion, and limitations.
Results: We identified 155 original investigations on diabetes, including 115 randomized controlled trials (RCTs) and 40 observational studies. Sex and gender were rarely incorporated in article titles, abstracts and introductions. Most methods sections did not describe plans for sex/gender analyses; 47 (30.3%) articles described plans to control for sex/gender in the analysis and 12 (7.7%) described plans to stratify results by sex/gender. While most articles (151, 97.4%) reported the sex/gender of study participants, only 10 (6.5%) of all articles reported all study outcomes separately by sex/gender. Discussion of sex-related issues was incorporated into 21 (13.5%) original investigations; however, just 1 (0.6%) discussed gender-related issues. Comparison by journal type (general medicine vs. diabetes specific) yielded only minor differences from the overall integration results. In contrast, RCTs performed more poorly on multiple sex/gender assessment metrics compared to observational studies.
Conclusions: Sex and gender are poorly integrated in current diabetes original investigations, suggesting that substantial improvements in sex and gender data reporting are needed to inform the evidence to support sex- and gender-specific diabetes care.
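As a quick check on the Results above, the sketch below recomputes each reported percentage from the stated counts out of 155 articles. The counts come from the abstract itself; the criterion labels are paraphrased, and this is not the authors' analysis code.

```python
# Recompute the sex/gender reporting percentages quoted in the Results.
# Counts are taken from the abstract above; labels are paraphrased.
counts = {
    "reported sex/gender of participants": 151,
    "planned to control for sex/gender in the analysis": 47,
    "planned to stratify results by sex/gender": 12,
    "reported all outcomes separately by sex/gender": 10,
    "discussed sex-related issues": 21,
    "discussed gender-related issues": 1,
}
N_ARTICLES = 155  # 115 RCTs + 40 observational studies

for criterion, n in counts.items():
    print(f"{criterion}: {n}/{N_ARTICLES} ({100 * n / N_ARTICLES:.1f}%)")
```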
{"title":"Measuring the data gap: inclusion of sex and gender reporting in diabetes research.","authors":"Suzanne Day, Wei Wu, Robin Mason, Paula A Rochon","doi":"10.1186/s41073-019-0068-4","DOIUrl":"https://doi.org/10.1186/s41073-019-0068-4","url":null,"abstract":"<p><strong>Background: </strong>Important sex and gender differences have been found in research on diabetes complications and treatment. Reporting on whether and how sex and gender impact research findings is crucial for developing tailored diabetes care strategies. To analyze the extent to which this information is available in current diabetes research, we examined original investigations on diabetes for the integration of sex and gender in study reporting.</p><p><strong>Methods: </strong>We examined original investigations on diabetes published between January 1 and December 31, 2015, in the top five general medicine journals and top five diabetes-specific journals (by 2015 impact factor). Data were extracted on sex and gender integration across seven article sections: title, abstract, introduction, methods, results, discussion, and limitations.</p><p><strong>Results: </strong>We identified 155 original investigations on diabetes, including 115 randomized controlled trials (RCTs) and 40 observational studies. Sex and gender were rarely incorporated in article titles, abstracts and introductions. Most methods sections did not describe plans for sex/gender analyses; 47 (30.3%) articles described plans to control for sex/gender in the analysis and 12 (7.7%) described plans to stratify results by sex/gender. While most articles (151, 97.4%) reported the sex/gender of study participants, only 10 (6.5%) of all articles reported all study outcomes separately by sex/gender. Discussion of sex-related issues was incorporated into 21 (13.5%) original investigations; however, just 1 (0.6%) discussed gender-related issues. Comparison by journal type (general medicine vs. diabetes specific) yielded only minor differences from the overall integration results. In contrast, RCTs performed more poorly on multiple sex/gender assessment metrics compared to observational studies.</p><p><strong>Conclusions: </strong>Sex and gender are poorly integrated in current diabetes original investigations, suggesting that substantial improvements in sex and gender data reporting are needed to inform the evidence to support sex- and gender-specific diabetes care.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"4 ","pages":"9"},"PeriodicalIF":0.0,"publicationDate":"2019-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0068-4","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37233205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Policy driven changes in animal research practices: mapping researchers' attitudes towards animal-free innovations using the Netherlands as an example
Pub Date: 2019-04-23 | eCollection Date: 2019-01-01 | DOI: 10.1186/s41073-019-0067-5
S Bressers, H van den Elzen, C Gräwe, D van den Oetelaar, P H A Postma, S K Schoustra
Background: Reducing the number of animals used in experiments has become a priority for the governments of many countries. For these reductions to occur, animal-free alternatives must be made more available and, crucially, must be embraced by researchers.
Methods: We conducted an international online survey for academics in the field of animal science (N = 367) to explore researchers' attitudes towards the implementation of animal-free innovations. Through this survey, we address three key questions. The first question is whether scientists who use animals in their research consider governmental goals for animal-free innovations achievable and whether they would support such goals. Secondly, responders were asked to rank the importance of ten roadblocks that could hamper the implementation of animal-free innovations. Finally, responders were asked whether they would migrate (either themselves or their research) if increased animal research regulations in their country of residence restricted their research.
Results: While nearly half (40%) of the responders support governmental goals, the majority (71%) of researchers did not consider such goals achievable in their field within the near future. In terms of roadblocks for implementation of animal-free methods, ~ 80% of the responders considered 'reliability' as important, making it the most highly ranked roadblock. However, all other roadblocks were reported by most responders as somewhat important, suggesting that they must also be considered when addressing animal-free innovations. Importantly, a majority reported that they would consider migration to another country in response to a restrictive animal research policy. Thus, governments must consider the risk of researchers migrating to other institutes, states or countries, leading to a 'brain-drain' if policies are too strict or suitable animal-free alternatives are not available.
Conclusion: Our findings suggest that development and implementation of animal-free innovations are hampered by multiple factors. We outline three pillars concerning education, governmental influence and data sharing, the implementation of which may help to overcome these roadblocks to animal-free innovations.
{"title":"Policy driven changes in animal research practices: mapping researchers' attitudes towards animal-free innovations using the Netherlands as an example.","authors":"S Bressers, H van den Elzen, C Gräwe, D van den Oetelaar, P H A Postma, S K Schoustra","doi":"10.1186/s41073-019-0067-5","DOIUrl":"https://doi.org/10.1186/s41073-019-0067-5","url":null,"abstract":"<p><strong>Background: </strong>Reducing the number of animals used in experiments has become a priority for the governments of many countries. For these reductions to occur, animal-free alternatives must be made more available and, crucially, must be embraced by researchers.</p><p><strong>Methods: </strong>We conducted an international online survey for academics in the field of animal science (<i>N</i> = 367) to explore researchers' attitudes towards the implementation of animal-free innovations. Through this survey, we address three key questions. The first question is whether scientists who use animals in their research consider governmental goals for animal-free innovations achievable and whether they would support such goals. Secondly, responders were asked to rank the importance of ten roadblocks that could hamper the implementation of animal-free innovations. Finally, responders were asked whether they would migrate (either themselves or their research) if increased animal research regulations in their country of residence restricted their research.</p><p><strong>Results: </strong>While nearly half (40%) of the responders support governmental goals, the majority (71%) of researchers did not consider such goals achievable in their field within the near future. In terms of roadblocks for implementation of animal-free methods, ~ 80% of the responders considered 'reliability' as important, making it the most highly ranked roadblock. However, all other roadblocks were reported by most responders as somewhat important, suggesting that they must also be considered when addressing animal-free innovations. Importantly, a majority reported that they would consider migration to another country in response to a restrictive animal research policy. Thus, governments must consider the risk of researchers migrating to other institutes, states or countries, leading to a 'brain-drain' if policies are too strict or suitable animal-free alternatives are not available.</p><p><strong>Conclusion: </strong>Our findings suggest that development and implementation of animal-free innovations are hampered by multiple factors. We outline three pillars concerning education, governmental influence and data sharing, the implementation of which may help to overcome these roadblocks to animal-free innovations.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"4 ","pages":"8"},"PeriodicalIF":0.0,"publicationDate":"2019-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0067-5","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37187339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personally perceived publication pressure: revising the Publication Pressure Questionnaire (PPQ) by using work stress models
Pub Date: 2019-04-09 | eCollection Date: 2019-01-01 | DOI: 10.1186/s41073-019-0066-6
Tamarinde L Haven, Marije Esther Evalien de Goede, Joeri K Tijdink, Frans Jeroen Oort
Background: The emphasis on impact factors and the quantity of publications intensifies competition between researchers. This competition was traditionally considered an incentive to produce high-quality work, but it also has unwanted side-effects, such as publication pressure. To measure the effect of publication pressure on researchers, the Publication Pressure Questionnaire (PPQ) was developed. Once the PPQ was in use, some issues came to light that motivated a revision.
Method: We constructed two new subscales based on work stress models using the facet method. We administered the revised PPQ (PPQr) to a convenience sample together with the Maslach Burnout Inventory (MBI) and the Work Design Questionnaire (WDQ). To assess which items best measured publication pressure, we carried out a principal component analysis (PCA). Reliability was sufficient when Cronbach's alpha > 0.7. Finally, we administered the PPQr in a larger, independent sample of researchers to check the reliability of the revised version.
Results: Three components were identified as 'stress', 'attitude', and 'resources'. We selected 3 × 6 = 18 items with high loadings in the three-component solution. Based on the convenience sample, Cronbach's alphas were 0.83 for stress, 0.80 for attitude, and 0.76 for resources. We checked the validity of the PPQr by inspecting the correlations with the MBI and the WDQ. Stress correlated 0.62 with MBI's emotional exhaustion. Resources correlated 0.50 with relevant WDQ subscales. To assess the internal structure of the PPQr in the independent reliability sample, we conducted the principal component analysis. The three-component solution explains 50% of the variance. Cronbach's alphas were 0.80, 0.78, and 0.75 for stress, attitude, and resources, respectively.
Conclusion: We conclude that the PPQr is a valid and reliable instrument to measure publication pressure in academic researchers from all disciplinary fields. The PPQr strongly relates to burnout and could also be beneficial for policy makers and research institutions to assess the degree of publication pressure in their institute.
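To make the reliability criterion in the Methods concrete (Cronbach's alpha > 0.7 counted as sufficient), here is a minimal sketch of how alpha is computed from a respondents-by-items score matrix. The sample size, Likert scaling and simulated scores are invented for illustration; this is not the PPQr data or the authors' code.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the sum scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# 100 hypothetical respondents answering one 6-item subscale on a 1-5 scale.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))  # shared trait drives all items
items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(100, 6))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")  # > 0.7 would count as sufficient
```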
{"title":"Personally perceived publication pressure: revising the Publication Pressure Questionnaire (PPQ) by using work stress models.","authors":"Tamarinde L Haven, Marije Esther Evalien de Goede, Joeri K Tijdink, Frans Jeroen Oort","doi":"10.1186/s41073-019-0066-6","DOIUrl":"https://doi.org/10.1186/s41073-019-0066-6","url":null,"abstract":"<p><strong>Background: </strong>The emphasis on impact factors and the quantity of publications intensifies competition between researchers. This competition was traditionally considered an incentive to produce high-quality work, but there are unwanted side-effects of this competition like publication pressure. To measure the effect of publication pressure on researchers, the Publication Pressure Questionnaire (PPQ) was developed. Upon using the PPQ, some issues came to light that motivated a revision.</p><p><strong>Method: </strong>We constructed two new subscales based on work stress models using the facet method. We administered the revised PPQ (PPQr) to a convenience sample together with the Maslach Burnout Inventory (MBI) and the Work Design Questionnaire (WDQ). To assess which items best measured publication pressure, we carried out a principal component analysis (PCA). Reliability was sufficient when Cronbach's alpha > 0.7. Finally, we administered the PPQr in a larger, independent sample of researchers to check the reliability of the revised version.</p><p><strong>Results: </strong>Three components were identified as 'stress', 'attitude', and 'resources'. We selected 3 × 6 = 18 items with high loadings in the three-component solution. Based on the convenience sample, Cronbach's alphas were 0.83 for stress, 0.80 for attitude, and 0.76 for resources. We checked the validity of the PPQr by inspecting the correlations with the MBI and the WDQ. Stress correlated 0.62 with MBI's emotional exhaustion. Resources correlated 0.50 with relevant WDQ subscales. To assess the internal structure of the PPQr in the independent reliability sample, we conducted the principal component analysis. The three-component solution explains 50% of the variance. Cronbach's alphas were 0.80, 0.78, and 0.75 for stress, attitude, and resources, respectively.</p><p><strong>Conclusion: </strong>We conclude that the PPQr is a valid and reliable instrument to measure publication pressure in academic researchers from all disciplinary fields. The PPQr strongly relates to burnout and could also be beneficial for policy makers and research institutions to assess the degree of publication pressure in their institute.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"4 ","pages":"7"},"PeriodicalIF":0.0,"publicationDate":"2019-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0066-6","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37347931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Selective citation in scientific literature on the human health effects of bisphenol A
Pub Date: 2019-03-29 | eCollection Date: 2019-01-01 | DOI: 10.1186/s41073-019-0065-7
M J E Urlings, B Duyx, G M H Swaen, L M Bouter, M P Zeegers
Introduction: Bisphenol A (BPA) is highly debated and studied in relation to a variety of health outcomes. This large variation in the literature makes BPA a topic prone to selective use of the literature to underpin one's own findings and opinions. Over time, selective use of the literature, by means of citations, can lead to skewed knowledge development and a biased scientific consensus. In this study, we assess which factors drive citation and whether this results in an overrepresentation of harmful health effects of BPA.
Methods: A citation network analysis was performed to test various determinants of citation. A systematic search identified all relevant publications on the human health effect of BPA. Data were extracted on potential determinants of selective citation, such as study outcome, study design, sample size, journal impact factor, authority of the author, self-citation, and funding source. We applied random effect logistic regression to assess whether these determinants influence the likelihood of citation.
Results: One hundred sixty-nine publications on BPA were identified, with 12,432 potential citation pathways of which 808 citations occurred. The network consisted of 63 cross-sectional studies, 34 cohort studies, 29 case-control studies, 35 narrative reviews, and 8 systematic reviews. Positive studies have a 1.5 times greater chance of being cited compared to negative studies. Additionally, the authority of the author and self-citation are consistently found to be positively associated with the likelihood of being cited. Overall, the network seems to be highly influenced by two highly cited publications, whereas 60 out of 169 publications received no citations.
Conclusion: In the literature on BPA, citation is mostly driven by positive study outcomes and author-related factors, such as high authority within the network. Given the impact of these factors and the large influence of a few highly cited publications, it is questionable to what extent knowledge development in the human literature on BPA is actually evidence-based.
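For readers unfamiliar with this kind of citation network analysis, the sketch below illustrates the basic setup: every chronologically possible citing-to-cited pair is a potential citation pathway, and a logistic model estimates how a feature of the cited study (here, a positive outcome) changes the odds of citation. The paper used random-effects logistic regression; this sketch substitutes ordinary logistic regression for brevity, and all data are simulated rather than taken from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_pubs = 169
year = rng.integers(1997, 2017, n_pubs)
positive = rng.integers(0, 2, n_pubs)  # 1 = study reports a harmful effect

# Potential pathways: publication i can cite any earlier publication j.
pairs = [(i, j) for i in range(n_pubs) for j in range(n_pubs)
         if i != j and year[j] < year[i]]
X = np.array([[positive[j]] for _, j in pairs])  # feature of the cited study

# Simulate citations occurring with higher odds for 'positive' cited studies.
p_cite = 1 / (1 + np.exp(-(-2.8 + 0.4 * X[:, 0])))
cited = rng.binomial(1, p_cite)

model = LogisticRegression().fit(X, cited)
print("estimated odds ratio for a positive outcome:",
      round(float(np.exp(model.coef_[0][0])), 2))
```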
{"title":"Selective citation in scientific literature on the human health effects of bisphenol A.","authors":"M J E Urlings, B Duyx, G M H Swaen, L M Bouter, M P Zeegers","doi":"10.1186/s41073-019-0065-7","DOIUrl":"https://doi.org/10.1186/s41073-019-0065-7","url":null,"abstract":"<p><strong>Introduction: </strong>Bisphenol A is highly debated and studied in relation to a variety of health outcomes. This large variation in the literature makes BPA a topic that is prone to selective use of literature, in order to underpin one's own findings and opinion. Over time, selective use of literature, by means of citations, can lead to a skewed knowledge development and a biased scientific consensus. In this study, we assess which factors drive citation and whether this results in the overrepresentation of harmful health effects of BPA.</p><p><strong>Methods: </strong>A citation network analysis was performed to test various determinants of citation. A systematic search identified all relevant publications on the human health effect of BPA. Data were extracted on potential determinants of selective citation, such as study outcome, study design, sample size, journal impact factor, authority of the author, self-citation, and funding source. We applied random effect logistic regression to assess whether these determinants influence the likelihood of citation.</p><p><strong>Results: </strong>One hundred sixty-nine publications on BPA were identified, with 12,432 potential citation pathways of which 808 citations occurred. The network consisted of 63 cross-sectional studies, 34 cohort studies, 29 case-control studies, 35 narrative reviews, and 8 systematic reviews. Positive studies have a 1.5 times greater chance of being cited compared to negative studies. Additionally, the authority of the author and self-citation are consistently found to be positively associated with the likelihood of being cited. Overall, the network seems to be highly influenced by two highly cited publications, whereas 60 out of 169 publications received no citations.</p><p><strong>Conclusion: </strong>In the literature on BPA, citation is mostly driven by positive study outcome and author-related factors, such as high authority within the network. Interpreting the impact of these factors and the big influence of a few highly cited publications, it can be questioned to which extent the knowledge development in human literature on BPA is actually evidence-based.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"4 ","pages":"6"},"PeriodicalIF":0.0,"publicationDate":"2019-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0065-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37144238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SANRA-a scale for the quality assessment of narrative review articles
Pub Date: 2019-03-26 | eCollection Date: 2019-01-01 | DOI: 10.1186/s41073-019-0064-8
Christopher Baethge, Sandra Goldbeck-Wood, Stephan Mertens
Background: Narrative reviews are the commonest type of article in the medical literature. However, unlike systematic reviews and randomized controlled trial (RCT) articles, for which formal instruments exist to evaluate quality, there is currently no instrument available to assess the quality of narrative reviews. In response to this gap, we developed SANRA, the Scale for the Assessment of Narrative Review Articles.
Methods: A team of three experienced journal editors modified or deleted items in an earlier SANRA version based on face validity, item-total correlations, and reliability scores from previous tests. We deleted an item which addressed a manuscript's writing and accessibility due to poor inter-rater reliability. The six items which form the revised scale are rated from 0 (low standard) to 2 (high standard) and cover the following topics: explanation of (1) the importance and (2) the aims of the review, (3) literature search and (4) referencing and presentation of (5) evidence level and (6) relevant endpoint data. For all items, we developed anchor definitions and examples to guide users in filling out the form. The revised scale was tested by the same editors (blinded to each other's ratings) in a group of 30 consecutive non-systematic review manuscripts submitted to a general medical journal.
Results: Raters confirmed that completing the scale is feasible in everyday editorial work. The mean sum score across all 30 manuscripts was 6.0 out of 12 possible points (SD 2.6, range 1-12). Corrected item-total correlations ranged from 0.33 (item 3) to 0.58 (item 6), and Cronbach's alpha was 0.68 (internal consistency). The intra-class correlation coefficient (average measure) was 0.77 [95% CI 0.57, 0.88] (inter-rater reliability). Raters often disagreed on items 1 and 4.
Conclusions: SANRA's feasibility, inter-rater reliability, homogeneity of items, and internal consistency are sufficient for a scale of six items. Further field testing, particularly of validity, is desirable. We recommend rater training based on the "explanations and instructions" document provided with SANRA. In editorial decision-making, SANRA may complement journal-specific evaluation of manuscripts (pertaining to, e.g., audience, originality or difficulty) and may contribute to improving the standard of non-systematic reviews.
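Purely to illustrate the structure of the scale described above (six items, each rated 0 to 2, summed to a score out of 12), here is a small sketch; the item labels are paraphrased from the abstract and the example ratings are invented.

```python
# Illustrative only: the six SANRA topics (paraphrased) and the sum score.
SANRA_ITEMS = [
    "explanation of the review's importance",
    "statement of the review's aims",
    "description of the literature search",
    "referencing",
    "presentation of the level of evidence",
    "presentation of relevant endpoint data",
]

def sanra_sum(ratings: list[int]) -> int:
    """Sum score for one manuscript; each item is rated 0 (low) to 2 (high)."""
    assert len(ratings) == len(SANRA_ITEMS)
    assert all(r in (0, 1, 2) for r in ratings)
    return sum(ratings)

print(sanra_sum([2, 1, 1, 0, 1, 1]))  # 6 of 12, matching the mean reported above
```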
{"title":"SANRA-a scale for the quality assessment of narrative review articles.","authors":"Christopher Baethge, Sandra Goldbeck-Wood, Stephan Mertens","doi":"10.1186/s41073-019-0064-8","DOIUrl":"https://doi.org/10.1186/s41073-019-0064-8","url":null,"abstract":"<p><strong>Background: </strong>Narrative reviews are the commonest type of articles in the medical literature. However, unlike systematic reviews and randomized controlled trials (RCT) articles, for which formal instruments exist to evaluate quality, there is currently no instrument available to assess the quality of narrative reviews. In response to this gap, we developed SANRA, the Scale for the Assessment of Narrative Review Articles.</p><p><strong>Methods: </strong>A team of three experienced journal editors modified or deleted items in an earlier SANRA version based on face validity, item-total correlations, and reliability scores from previous tests. We deleted an item which addressed a manuscript's writing and accessibility due to poor inter-rater reliability. The six items which form the revised scale are rated from 0 (low standard) to 2 (high standard) and cover the following topics: explanation of (1) the importance and (2) the aims of the review, (3) literature search and (4) referencing and presentation of (5) evidence level and (6) relevant endpoint data. For all items, we developed anchor definitions and examples to guide users in filling out the form. The revised scale was tested by the same editors (blinded to each other's ratings) in a group of 30 consecutive non-systematic review manuscripts submitted to a general medical journal.</p><p><strong>Results: </strong>Raters confirmed that completing the scale is feasible in everyday editorial work. The mean sum score across all 30 manuscripts was 6.0 out of 12 possible points (SD 2.6, range 1-12). Corrected item-total correlations ranged from 0.33 (item 3) to 0.58 (item 6), and Cronbach's alpha was 0.68 (internal consistency). The intra-class correlation coefficient (average measure) was 0.77 [95% CI 0.57, 0.88] (inter-rater reliability). Raters often disagreed on items 1 and 4.</p><p><strong>Conclusions: </strong>SANRA's feasibility, inter-rater reliability, homogeneity of items, and internal consistency are sufficient for a scale of six items. Further field testing, particularly of validity, is desirable. We recommend rater training based on the \"explanations and instructions\" document provided with SANRA. In editorial decision-making, SANRA may complement journal-specific evaluation of manuscripts-pertaining to, e.g., audience, originality or difficulty-and may contribute to improving the standard of non-systematic reviews.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"4 ","pages":"5"},"PeriodicalIF":0.0,"publicationDate":"2019-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0064-8","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37309816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature
Pub Date: 2019-03-22 | DOI: 10.1101/581892
C. F. D. Carneiro, Victor G. S. Queiroz, T. Moulin, Carlos A. M. Carvalho, C. Haas, Danielle Rayêe, D. Henshall, Evandro A. De-Souza, F. E. Amorim, Flávia Z. Boos, G. Guercio, Igor R. Costa, K. Hajdu, L. V. van Egmond, M. Modrák, Pedro B. Tan, Richard J. Abdill, S. Burgess, Sylvia F. S. Guerra, V. T. Bortoluzzi, O. Amaral
Background: Preprint usage is growing rapidly in the life sciences; however, questions remain on the relative quality of preprints when compared to published articles. An objective dimension of quality that is readily measurable is completeness of reporting, as transparency can improve the reader's ability to independently interpret data and reproduce findings.
Methods: In this observational study, we initially compared independent samples of articles published in bioRxiv and in PubMed-indexed journals in 2016 using a quality of reporting questionnaire. After that, we performed paired comparisons between preprints from bioRxiv and their own peer-reviewed versions in journals.
Results: Peer-reviewed articles had, on average, higher quality of reporting than preprints, although the difference was small, with absolute differences of 5.0% [95% CI 1.4, 8.6] and 4.7% [95% CI 2.4, 7.0] of reported items in the independent samples and in the paired sample comparison, respectively. There were larger differences favoring peer-reviewed articles in subjective ratings of how clearly titles and abstracts presented the main findings and how easy it was to locate relevant reporting information. Changes in reporting from preprints to peer-reviewed versions did not correlate with the impact factor of the publication venue or with the time lag from bioRxiv to journal publication.
Conclusions: Our results suggest that, on average, publication in a peer-reviewed journal is associated with improvement in quality of reporting. They also show that quality of reporting in preprints in the life sciences is within a similar range as that of peer-reviewed articles, albeit slightly lower on average, supporting the idea that preprints should be considered valid scientific contributions.
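As an illustration of the paired comparison reported in the Results, the sketch below bootstraps a 95% CI for the mean within-pair difference in the percentage of reported items. The number of pairs, the scores and the bootstrap approach are assumptions made for the example; they are not the authors' data or method.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pairs = 56  # hypothetical number of preprint/journal-article pairs

# Percentage of questionnaire items reported in each version (invented).
preprint = rng.uniform(40, 90, n_pairs)
journal = np.clip(preprint + rng.normal(4.7, 8.0, n_pairs), 0, 100)

diffs = journal - preprint  # paired differences, one per article
boot_means = np.array([rng.choice(diffs, n_pairs, replace=True).mean()
                       for _ in range(10_000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean difference {diffs.mean():.1f}% [95% CI {lo:.1f}, {hi:.1f}]")
```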
{"title":"Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature","authors":"C. F. D. Carneiro, Victor G. S. Queiroz, T. Moulin, Carlos A. M. Carvalho, C. Haas, Danielle Rayêe, D. Henshall, Evandro A. De-Souza, F. E. Amorim, Flávia Z. Boos, G. Guercio, Igor R. Costa, K. Hajdu, L. V. van Egmond, M. Modrák, Pedro B. Tan, Richard J. Abdill, S. Burgess, Sylvia F. S. Guerra, V. T. Bortoluzzi, O. Amaral","doi":"10.1101/581892","DOIUrl":"https://doi.org/10.1101/581892","url":null,"abstract":"Background Preprint usage is growing rapidly in the life sciences; however, questions remain on the relative quality of preprints when compared to published articles. An objective dimension of quality that is readily measurable is completeness of reporting, as transparency can improve the reader’s ability to independently interpret data and reproduce findings. Methods In this observational study, we initially compared independent samples of articles published in bioRxiv and in PubMed-indexed journals in 2016 using a quality of reporting questionnaire. After that, we performed paired comparisons between preprints from bioRxiv to their own peer-reviewed versions in journals. Results Peer-reviewed articles had, on average, higher quality of reporting than preprints, although the difference was small, with absolute differences of 5.0% [95% CI 1.4, 8.6] and 4.7% [95% CI 2.4, 7.0] of reported items in the independent samples and paired sample comparison, respectively. There were larger differences favoring peer-reviewed articles in subjective ratings of how clearly titles and abstracts presented the main findings and how easy it was to locate relevant reporting information. Changes in reporting from preprints to peer-reviewed versions did not correlate with the impact factor of the publication venue or with the time lag from bioRxiv to journal publication. Conclusions Our results suggest that, on average, publication in a peer-reviewed journal is associated with improvement in quality of reporting. They also show that quality of reporting in preprints in the life sciences is within a similar range as that of peer-reviewed articles, albeit slightly lower on average, supporting the idea that preprints should be considered valid scientific contributions.","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41784313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guidelines for open peer review implementation
Pub Date: 2019-02-27 | DOI: 10.1186/s41073-019-0063-9
Tony Ross-Hellauer, Edit Görögh
Open peer review (OPR) is moving into the mainstream, but it is often poorly understood and surveys of researcher attitudes show important barriers to implementation. As more journals move to implement and experiment with the myriad of innovations covered by this term, there is a clear need for best practice guidelines to guide implementation. This brief article aims to address this knowledge gap, reporting work based on an interactive stakeholder workshop to create best-practice guidelines for editors and journals who wish to transition to OPR. Although the advice is aimed mainly at editors and publishers of scientific journals, since this is the area in which OPR is at its most mature, many of the principles may also be applicable for the implementation of OPR in other areas (e.g., books, conference submissions).
{"title":"Guidelines for open peer review implementation.","authors":"Tony Ross-Hellauer, Edit Görögh","doi":"10.1186/s41073-019-0063-9","DOIUrl":"10.1186/s41073-019-0063-9","url":null,"abstract":"<p><p>Open peer review (OPR) is moving into the mainstream, but it is often poorly understood and surveys of researcher attitudes show important barriers to implementation. As more journals move to implement and experiment with the myriad of innovations covered by this term, there is a clear need for best practice guidelines to guide implementation. This brief article aims to address this knowledge gap, reporting work based on an interactive stakeholder workshop to create best-practice guidelines for editors and journals who wish to transition to OPR. Although the advice is aimed mainly at editors and publishers of scientific journals, since this is the area in which OPR is at its most mature, many of the principles may also be applicable for the implementation of OPR in other areas (e.g., books, conference submissions).</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"4 ","pages":"4"},"PeriodicalIF":0.0,"publicationDate":"2019-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0063-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37045643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quality of reports of investigations of research integrity by academic institutions
Pub Date: 2019-02-19 | DOI: 10.1186/s41073-019-0062-x
Andrew Grey, Mark Bolland, Greg Gamble, Alison Avenell
Background: Academic institutions play important roles in protecting and preserving research integrity. Concerns have been expressed about the objectivity, adequacy and transparency of institutional investigations of potentially compromised research integrity. We assessed the reports provided to us of investigations by three academic institutions of a large body of overlapping research with potentially compromised integrity.
Methods: In 2017, we raised concerns with four academic institutions about the integrity of > 200 publications co-authored by an overlapping set of researchers. Each institution initiated an investigation. By November 2018, three had reported to us the results of their investigations, but only one report was publicly available. Two investigators independently assessed each available report using a published 26-item checklist designed to determine the quality and adequacy of institutional investigations of research integrity. Each assessor recorded additional comments ad hoc.
Results: Concerns raised with the institutions were overlapping, wide-ranging and included those which were both general and publication-specific. The number of potentially affected publications at individual institutions ranged from 34 to 200. The duration of investigation by the three institutions which provided reports was 8-17 months. These investigations covered 14%, 15% and 77%, respectively, of potentially affected publications. Between-assessor agreement using the quality checklist was 0.68, 0.72 and 0.65 for each report. Only 4/78 individual checklist items were addressed adequately; a further 14 could not be assessed. Each report was graded inadequate overall. Reports failed to address publication-specific concerns and focussed more strongly on determining research misconduct than on evaluating the integrity of publications.
Conclusions: Our analyses identify important deficiencies in the quality and reporting of institutional investigation of concerns about the integrity of a large body of research reported by an overlapping set of researchers. They reinforce disquiet about the ability of institutions to rigorously and objectively oversee integrity of research conducted by their own employees.
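The abstract does not state which agreement statistic was used, so the sketch below shows two common choices for between-assessor agreement on a 26-item checklist: raw proportion agreement and Cohen's kappa. The per-item ratings are invented.

```python
from sklearn.metrics import cohen_kappa_score

# Two assessors rating the same 26 checklist items (hypothetical ratings).
assessor_a = ["yes", "no", "no", "unclear", "yes", "no"] * 4 + ["yes", "no"]
assessor_b = ["yes", "no", "yes", "unclear", "yes", "no"] * 4 + ["no", "no"]
assert len(assessor_a) == len(assessor_b) == 26

raw_agreement = sum(a == b for a, b in zip(assessor_a, assessor_b)) / 26
kappa = cohen_kappa_score(assessor_a, assessor_b)
print(f"raw agreement {raw_agreement:.2f}, Cohen's kappa {kappa:.2f}")
```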
{"title":"Quality of reports of investigations of research integrity by academic institutions.","authors":"Andrew Grey, Mark Bolland, Greg Gamble, Alison Avenell","doi":"10.1186/s41073-019-0062-x","DOIUrl":"10.1186/s41073-019-0062-x","url":null,"abstract":"<p><strong>Background: </strong>Academic institutions play important roles in protecting and preserving research integrity. Concerns have been expressed about the objectivity, adequacy and transparency of institutional investigations of potentially compromised research integrity. We assessed the reports provided to us of investigations by three academic institutions of a large body of overlapping research with potentially compromised integrity.</p><p><strong>Methods: </strong>In 2017, we raised concerns with four academic institutions about the integrity of > 200 publications co-authored by an overlapping set of researchers. Each institution initiated an investigation. By November 2018, three had reported to us the results of their investigations, but only one report was publicly available. Two investigators independently assessed each available report using a published 26-item checklist designed to determine the quality and adequacy of institutional investigations of research integrity. Each assessor recorded additional comments ad hoc.</p><p><strong>Results: </strong>Concerns raised with the institutions were overlapping, wide-ranging and included those which were both general and publication-specific. The number of potentially affected publications at individual institutions ranged from 34 to 200. The duration of investigation by the three institutions which provided reports was 8-17 months. These investigations covered 14%, 15% and 77%, respectively, of potentially affected publications. Between-assessor agreement using the quality checklist was 0.68, 0.72 and 0.65 for each report. Only 4/78 individual checklist items were addressed adequately: a further 14 could not be assessed. Each report was graded inadequate overall. Reports failed to address publication-specific concerns and focussed more strongly on determining research misconduct than evaluating the integrity of publications.</p><p><strong>Conclusions: </strong>Our analyses identify important deficiencies in the quality and reporting of institutional investigation of concerns about the integrity of a large body of research reported by an overlapping set of researchers. They reinforce disquiet about the ability of institutions to rigorously and objectively oversee integrity of research conducted by their own employees.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"4 ","pages":"3"},"PeriodicalIF":0.0,"publicationDate":"2019-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0062-x","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37173168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}