Publishing computational research - a review of infrastructures for reproducible and transparent scholarly communication
Pub Date: 2020-07-14 · eCollection Date: 2020-01-01 · DOI: 10.1186/s41073-020-00095-y · Research Integrity and Peer Review 5:10
Markus Konkol, Daniel Nüst, Laura Goulier
Background: The trend toward open science increases the pressure on authors to provide access to the source code and data they used to compute the results reported in their scientific papers. Since sharing materials reproducibly is challenging, several projects have developed solutions to support the release of executable analyses alongside articles.
Methods: We reviewed 11 applications that can assist researchers in adhering to reproducibility principles. The applications were found through a literature search and interactions with the reproducible research community. An application was included in our analysis if it (i) was actively maintained at the time the data for this paper was collected, (ii) supported the publication of executable code and data, and (iii) was connected to the scholarly publication process. By investigating the software documentation and published articles, we compared the applications across 19 criteria, such as deployment options and features that support authors in creating, and readers in studying, executable papers.
Results: Of the 11 applications, eight allow publishers to self-host the system for free, whereas three provide paid services. Authors can submit an executable analysis using Jupyter Notebooks or R Markdown documents (10 applications support these formats). All approaches provide features to assist readers in studying the materials, e.g., one-click reproduction of results or tools for manipulating the analysis parameters. Six applications allow materials to be modified after publication.
Conclusions: The applications support authors in publishing reproducible research, predominantly through literate programming. For readers, most applications provide user interfaces to inspect and manipulate the computational analysis. The next step is to investigate the gaps identified in this review, such as the costs publishers should expect when hosting an application, the handling of sensitive data, and the impacts on the review process.
{"title":"Publishing computational research - a review of infrastructures for reproducible and transparent scholarly communication.","authors":"Markus Konkol, Daniel Nüst, Laura Goulier","doi":"10.1186/s41073-020-00095-y","DOIUrl":"10.1186/s41073-020-00095-y","url":null,"abstract":"<p><strong>Background: </strong>The trend toward open science increases the pressure on authors to provide access to the source code and data they used to compute the results reported in their scientific papers. Since sharing materials reproducibly is challenging, several projects have developed solutions to support the release of executable analyses alongside articles.</p><p><strong>Methods: </strong>We reviewed 11 applications that can assist researchers in adhering to reproducibility principles. The applications were found through a literature search and interactions with the reproducible research community. An application was included in our analysis if it <b>(i)</b> was actively maintained at the time the data for this paper was collected, <b>(ii)</b> supports the publication of executable code and data, <b>(iii)</b> is connected to the scholarly publication process. By investigating the software documentation and published articles, we compared the applications across 19 criteria, such as deployment options and features that support authors in creating and readers in studying executable papers.</p><p><strong>Results: </strong>From the 11 applications, eight allow publishers to self-host the system for free, whereas three provide paid services. Authors can submit an executable analysis using Jupyter Notebooks or R Markdown documents (10 applications support these formats). All approaches provide features to assist readers in studying the materials, e.g., one-click reproducible results or tools for manipulating the analysis parameters. Six applications allow for modifying materials after publication.</p><p><strong>Conclusions: </strong>The applications support authors to publish reproducible research predominantly with literate programming. Concerning readers, most applications provide user interfaces to inspect and manipulate the computational analysis. The next step is to investigate the gaps identified in this review, such as the costs publishers have to expect when hosting an application, the consideration of sensitive data, and impacts on the review process.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"10"},"PeriodicalIF":0.0,"publicationDate":"2020-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-020-00095-y","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38177048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open up: a survey on open and non-anonymized peer reviewing
Pub Date: 2020-06-26 · eCollection Date: 2020-01-01 · DOI: 10.1186/s41073-020-00094-z · Research Integrity and Peer Review 5:8
Lonni Besançon, Niklas Rönnberg, Jonas Löwgren, Jonathan P Tennant, Matthew Cooper
Background: Our aim is to highlight the benefits and limitations of open and non-anonymized peer review. Our argument is based on the literature and on responses to a survey on the reviewing process of alt.chi, a more or less open review track within the Computer Human Interaction (CHI) conference, the predominant conference in the field of human-computer interaction. This track is currently the only implementation of an open peer review process in the field; with the recent increase in interest in open scientific practices, however, open review is now being considered and used in other fields.
Methods: We ran an online survey with 30 responses from alt.chi authors and reviewers, collecting quantitative data using multiple-choice questions and Likert scales. Qualitative data were collected using open questions.
Results: Our main quantitative result is that respondents are more positive toward open and non-anonymous reviewing for alt.chi than for other parts of the CHI conference. The qualitative data specifically highlight the benefits of open and transparent academic discussions. The data and scripts are available at https://osf.io/vuw7h/, and the figures and follow-up work at http://tiny.cc/OpenReviews.
Conclusion: While the benefits are quite clear and the system is generally well-liked by alt.chi participants, they remain reluctant to see it used in other venues. This concurs with a number of recent studies that suggest a divergence between support for a more open review process and its practical implementation.
{"title":"Open up: a survey on open and non-anonymized peer reviewing.","authors":"Lonni Besançon, Niklas Rönnberg, Jonas Löwgren, Jonathan P Tennant, Matthew Cooper","doi":"10.1186/s41073-020-00094-z","DOIUrl":"10.1186/s41073-020-00094-z","url":null,"abstract":"<p><strong>Background: </strong>Our aim is to highlight the benefits and limitations of open and non-anonymized peer review. Our argument is based on the literature and on responses to a survey on the reviewing process of alt.chi, a more or less open review track within the so-called Computer Human Interaction (CHI) conference, the predominant conference in the field of human-computer interaction. This track currently is the only implementation of an open peer review process in the field of human-computer interaction while, with the recent increase in interest in open scientific practices, open review is now being considered and used in other fields.</p><p><strong>Methods: </strong>We ran an online survey with 30 responses from alt.chi authors and reviewers, collecting quantitative data using multiple-choice questions and Likert scales. Qualitative data were collected using open questions.</p><p><strong>Results: </strong>Our main quantitative result is that respondents are more positive to open and non-anonymous reviewing for alt.chi than for other parts of the CHI conference. The qualitative data specifically highlight the benefits of open and transparent academic discussions. The data and scripts are available on https://osf.io/vuw7h/, and the figures and follow-up work on http://tiny.cc/OpenReviews.</p><p><strong>Conclusion: </strong>While the benefits are quite clear and the system is generally well-liked by alt.chi participants, they remain reluctant to see it used in other venues. This concurs with a number of recent studies that suggest a divergence between support for a more open review process and its practical implementation.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"8"},"PeriodicalIF":7.2,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7318523/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38109832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Grant reviewer perceptions of the quality, effectiveness, and influence of panel discussion
Pub Date: 2020-05-15 · eCollection Date: 2020-01-01 · DOI: 10.1186/s41073-020-00093-0 · Research Integrity and Peer Review 5:7
Stephen A Gallo, Karen B Schmaling, Lisa A Thompson, Scott R Glisson
Background: Funding agencies have long used panel discussion in the peer review of research grant proposals as a way to draw on a range of expertise and perspectives when making funding decisions. Little research has examined the quality of panel discussions and how effectively they are facilitated.
Methods: Here, we present a mixed-method analysis of data from a survey of reviewers focused on their perceptions of the quality, effectiveness, and influence of panel discussion from their last peer review experience.
Results: Reviewers viewed panel discussions favorably in terms of participation, clarifying differing opinions, informing unassigned reviewers, and chair facilitation. However, some reviewers mentioned issues with panel discussions, including an uneven focus, limited participation from unassigned reviewers, and short discussion times. Most reviewers felt the discussions affected the review outcome, helped in choosing the best science, and were generally fair and balanced. However, those who felt the discussion did not affect the outcome were also more likely to evaluate panel communication negatively, and several reviewers mentioned potential sources of bias related to the discussion. Respondents strongly acknowledged the importance of the chair in facilitating the discussion so that it appropriately informs scoring and in limiting the influence of potential sources of bias on scoring; nevertheless, nearly a third did not find the chair of their most recent panel to have performed these roles effectively.
Conclusions: Improving chair training in the management of discussion, together with review procedures informed by the science of leadership and team communication, would likely improve review processes and the reliability of proposal review.
{"title":"Grant reviewer perceptions of the quality, effectiveness, and influence of panel discussion.","authors":"Stephen A Gallo, Karen B Schmaling, Lisa A Thompson, Scott R Glisson","doi":"10.1186/s41073-020-00093-0","DOIUrl":"https://doi.org/10.1186/s41073-020-00093-0","url":null,"abstract":"<p><strong>Background: </strong>Funding agencies have long used panel discussion in the peer review of research grant proposals as a way to utilize a set of expertise and perspectives in making funding decisions. Little research has examined the quality of panel discussions and how effectively they are facilitated.</p><p><strong>Methods: </strong>Here, we present a mixed-method analysis of data from a survey of reviewers focused on their perceptions of the quality, effectiveness, and influence of panel discussion from their last peer review experience.</p><p><strong>Results: </strong>Reviewers indicated that panel discussions were viewed favorably in terms of participation, clarifying differing opinions, informing unassigned reviewers, and chair facilitation. However, some reviewers mentioned issues with panel discussions, including an uneven focus, limited participation from unassigned reviewers, and short discussion times. Most reviewers felt the discussions affected the review outcome, helped in choosing the best science, and were generally fair and balanced. However, those who felt the discussion did not affect the outcome were also more likely to evaluate panel communication negatively, and several reviewers mentioned potential sources of bias related to the discussion. While respondents strongly acknowledged the importance of the chair in ensuring appropriate facilitation of the discussion to influence scoring and to limit the influence of potential sources of bias from the discussion on scoring, nearly a third of respondents did not find the chair of their most recent panel to have performed these roles effectively.</p><p><strong>Conclusions: </strong>It is likely that improving chair training in the management of discussion as well as creating review procedures that are informed by the science of leadership and team communication would improve review processes and proposal review reliability.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"7"},"PeriodicalIF":0.0,"publicationDate":"2020-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-020-00093-0","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37986771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The limitations to our understanding of peer review
Pub Date: 2020-04-30 · eCollection Date: 2020-01-01 · DOI: 10.1186/s41073-020-00092-1 · Research Integrity and Peer Review 5:6
Jonathan P Tennant, Tony Ross-Hellauer
Peer review is embedded in the core of our knowledge generation systems. It is perceived as a method for establishing quality or scholarly legitimacy for research, while also often conferring academic prestige and standing on individuals. Despite its critical importance, it remains curiously poorly understood in a number of dimensions. To address this, we have analysed peer review to assess where the major gaps in our theoretical and empirical understanding of it lie. We identify core themes, including editorial responsibility, the subjectivity and bias of reviewers, the function and quality of peer review, and the social and epistemic implications of peer review. The high-priority gaps centre on increased accountability and justification in editors' decision-making processes and on developing a deeper, empirical understanding of the social impact of peer review. Addressing these will require, at a bare minimum, consensus on a minimal set of standards for what constitutes peer review and the development of a shared data infrastructure to support its study. Such a field requires sustained funding and commitment from publishers and research funders, both of whom have a commitment to upholding the integrity of the published scholarly record. We use this analysis to present a guide for the future of peer review and for the development of a new research discipline based on the study of peer review.
{"title":"The limitations to our understanding of peer review.","authors":"Jonathan P Tennant, Tony Ross-Hellauer","doi":"10.1186/s41073-020-00092-1","DOIUrl":"10.1186/s41073-020-00092-1","url":null,"abstract":"<p><p>Peer review is embedded in the core of our knowledge generation systems, perceived as a method for establishing quality or scholarly legitimacy for research, while also often distributing academic prestige and standing on individuals. Despite its critical importance, it curiously remains poorly understood in a number of dimensions. In order to address this, we have analysed peer review to assess where the major gaps in our theoretical and empirical understanding of it lie. We identify core themes including editorial responsibility, the subjectivity and bias of reviewers, the function and quality of peer review, and the social and epistemic implications of peer review. The high-priority gaps are focused around increased accountability and justification in decision-making processes for editors and developing a deeper, empirical understanding of the social impact of peer review. Addressing this at the bare minimum will require the design of a consensus for a minimal set of standards for what constitutes peer review, and the development of a shared data infrastructure to support this. Such a field requires sustained funding and commitment from publishers and research funders, who both have a commitment to uphold the integrity of the published scholarly record. We use this to present a guide for the future of peer review, and the development of a new research discipline based on the study of peer review.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"6"},"PeriodicalIF":0.0,"publicationDate":"2020-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7191707/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37901685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proceedings from the V Brazilian Meeting on Research Integrity, Science and Publication Ethics (V BRISPE)
Pub Date: 2020-03-01 · DOI: 10.1186/s41073-020-0090-6 · Research Integrity and Peer Review
Adriane Gomes, D. Custódio, Lara Coelho, Marina Marques, R. Sanda, Tânia Araújo, M. Gallas, E. F. Silveira
{"title":"Proceedings from the V Brazilian Meeting on Research Integrity, Science and Publication Ethics (V BRISPE)","authors":"Adriane Gomes, D. Custódio, Lara Coelho, Marina Marques, R. Sanda, Tânia Araújo, M. Gallas, E. F. Silveira","doi":"10.1186/s41073-020-0090-6","DOIUrl":"https://doi.org/10.1186/s41073-020-0090-6","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-020-0090-6","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47693162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reproducible and transparent research practices in published neurology research
Pub Date: 2020-02-28 · eCollection Date: 2020-01-01 · DOI: 10.1186/s41073-020-0091-5 · Research Integrity and Peer Review 5:5
Shelby Rauh, Trevor Torgerson, Austin L Johnson, Jonathan Pollard, Daniel Tritz, Matt Vassar
Background: The objective of this study was to evaluate the nature and extent of reproducible and transparent research practices in neurology publications.
Methods: The NLM catalog was used to identify MEDLINE-indexed neurology journals. A PubMed search of these journals was conducted to retrieve publications over a 5-year period from 2014 to 2018. A random sample of publications was extracted. Two authors conducted data extraction in a blinded, duplicate fashion using a pilot-tested Google form. This form prompted data extractors to determine whether publications provided access to items such as study materials, raw data, analysis scripts, and protocols. In addition, we determined if the publication was included in a replication study or systematic review, was preregistered, had a conflict of interest declaration, specified funding sources, and was open access.
Results: Our search identified 223,932 publications meeting the inclusion criteria, from which 400 were randomly sampled. Only 389 articles were accessible, yielding 271 publications with empirical data for analysis. Our results indicate that 9.4% provided access to materials, 9.2% provided access to raw data, 0.7% provided access to the analysis scripts, 0.7% linked the protocol, and 3.7% were preregistered. A third of sampled publications lacked funding or conflict of interest statements. No publications from our sample were included in replication studies, but a fifth were cited in a systematic review or meta-analysis.
Conclusions: Currently, published neurology research does not consistently provide information needed for reproducibility. The implications of poor research reporting can both affect patient care and increase research waste. Collaborative intervention by authors, peer reviewers, journals, and funding sources is needed to mitigate this problem.
{"title":"Reproducible and transparent research practices in published neurology research.","authors":"Shelby Rauh, Trevor Torgerson, Austin L Johnson, Jonathan Pollard, Daniel Tritz, Matt Vassar","doi":"10.1186/s41073-020-0091-5","DOIUrl":"10.1186/s41073-020-0091-5","url":null,"abstract":"<p><strong>Background: </strong>The objective of this study was to evaluate the nature and extent of reproducible and transparent research practices in neurology publications.</p><p><strong>Methods: </strong>The NLM catalog was used to identify MEDLINE-indexed neurology journals. A PubMed search of these journals was conducted to retrieve publications over a 5-year period from 2014 to 2018. A random sample of publications was extracted. Two authors conducted data extraction in a blinded, duplicate fashion using a pilot-tested Google form. This form prompted data extractors to determine whether publications provided access to items such as study materials, raw data, analysis scripts, and protocols. In addition, we determined if the publication was included in a replication study or systematic review, was preregistered, had a conflict of interest declaration, specified funding sources, and was open access.</p><p><strong>Results: </strong>Our search identified 223,932 publications meeting the inclusion criteria, from which 400 were randomly sampled. Only 389 articles were accessible, yielding 271 publications with empirical data for analysis. Our results indicate that 9.4% provided access to materials, 9.2% provided access to raw data, 0.7% provided access to the analysis scripts, 0.7% linked the protocol, and 3.7% were preregistered. A third of sampled publications lacked funding or conflict of interest statements. No publications from our sample were included in replication studies, but a fifth were cited in a systematic review or meta-analysis.</p><p><strong>Conclusions: </strong>Currently, published neurology research does not consistently provide information needed for reproducibility. The implications of poor research reporting can both affect patient care and increase research waste. Collaborative intervention by authors, peer reviewers, journals, and funding sources is needed to mitigate this problem.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"5"},"PeriodicalIF":0.0,"publicationDate":"2020-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7049215/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37729304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The acceptability of using a lottery to allocate research funding: a survey of applicants
Pub Date: 2020-02-03 · eCollection Date: 2020-01-01 · DOI: 10.1186/s41073-019-0089-z · Research Integrity and Peer Review 5:3
Mengyao Liu, Vernon Choy, Philip Clarke, Adrian Barnett, Tony Blakely, Lucy Pomeroy
Background: The Health Research Council of New Zealand is the first major government funding agency to use a lottery to allocate research funding for their Explorer Grant scheme. This is a somewhat controversial approach because, despite the documented problems of peer review, many researchers believe that funding should be allocated solely using peer review, and peer review is used almost ubiquitously by funding agencies around the world. Given the rarity of alternative funding schemes, there is interest in hearing from the first cohort of researchers to ever experience a lottery. Additionally, the Health Research Council of New Zealand wanted to hear from applicants about the acceptability of the randomisation process and anonymity of applicants.
Methods: This paper presents the results of a survey of Health Research Council applicants from 2013 to 2019. The survey asked about the acceptability of using a lottery and whether the lottery led researchers to take a different approach to their application.
Results: The overall response rate was 39% (126 of 325 invites): 30% (76 of 251) for applicants in the years 2013 to 2018, and 68% (50 of 74) for those in 2019 who were not aware of the funding result. There was agreement that randomisation is an acceptable method for allocating Explorer Grant funds, with 63% (n = 79) in favour and 25% (n = 32) against. There was less support for allocating funds randomly for other grant types, with only 40% (n = 50) in favour and 37% (n = 46) against. Support for a lottery was higher amongst those who had won funding. Multiple respondents stated that they supported a lottery when ineligible applications had been excluded and outstanding applications funded, so that the remaining applications were truly equal. Most applicants reported that the lottery did not change the time they spent preparing their application.
Conclusions: The Health Research Council's experience through the Explorer Grant scheme supports further uptake of a modified lottery.
{"title":"The acceptability of using a lottery to allocate research funding: a survey of applicants.","authors":"Mengyao Liu, Vernon Choy, Philip Clarke, Adrian Barnett, Tony Blakely, Lucy Pomeroy","doi":"10.1186/s41073-019-0089-z","DOIUrl":"https://doi.org/10.1186/s41073-019-0089-z","url":null,"abstract":"<p><strong>Background: </strong>The Health Research Council of New Zealand is the first major government funding agency to use a lottery to allocate research funding for their Explorer Grant scheme. This is a somewhat controversial approach because, despite the documented problems of peer review, many researchers believe that funding should be allocated solely using peer review, and peer review is used almost ubiquitously by funding agencies around the world. Given the rarity of alternative funding schemes, there is interest in hearing from the first cohort of researchers to ever experience a lottery. Additionally, the Health Research Council of New Zealand wanted to hear from applicants about the acceptability of the randomisation process and anonymity of applicants.</p><p><strong>Methods: </strong>This paper presents the results of a survey of Health Research Council applicants from 2013 to 2019. The survey asked about the acceptability of using a lottery and if the lottery meant researchers took a different approach to their application.</p><p><strong>Results: </strong>The overall response rate was 39% (126 of 325 invites), with 30% (76 of 251) from applicants in the years 2013 to 2018, and 68% (50 of 74) for those in the year 2019 who were not aware of the funding result. There was agreement that randomisation is an acceptable method for allocating Explorer Grant funds with 63% (<i>n</i> = 79) in favour and 25% (<i>n</i> = 32) against. There was less support for allocating funds randomly for other grant types with only 40% (<i>n</i> = 50) in favour and 37% (<i>n</i> = 46) against. Support for a lottery was higher amongst those that had won funding. Multiple respondents stated that they supported a lottery when ineligible applications had been excluded and outstanding applications funded, so that the remaining applications were truly equal. Most applicants reported that the lottery did not change the time they spent preparing their application.</p><p><strong>Conclusions: </strong>The Health Research Council's experience through the Explorer Grant scheme supports further uptake of a modified lottery.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"3"},"PeriodicalIF":0.0,"publicationDate":"2020-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0089-z","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37615119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The role of geographic bias in knowledge diffusion: a systematic review and narrative synthesis
Pub Date: 2020-01-15 · eCollection Date: 2020-01-01 · DOI: 10.1186/s41073-019-0088-0 · Research Integrity and Peer Review 5:2
Mark Skopec, Hamdi Issa, Julie Reed, Matthew Harris
Background: Descriptive studies examining publication rates and citation counts demonstrate a geographic skew toward high-income countries (HIC), and research from low- or middle-income countries (LMICs) is generally underrepresented. This has been suggested to be due in part to reviewers' and editors' preference for HIC sources; however, in the absence of controlled studies, it is impossible to assert whether there is bias or whether variations in the quality or relevance of the articles being reviewed explain the geographic divide. This study synthesizes the evidence from randomized and controlled studies that explore geographic bias in the peer review process.
Methods: A systematic review was conducted to identify research studies that explicitly explore the role of geographic bias in the assessment of the quality of research articles. Only randomized and controlled studies were included in the review. Five databases were searched to locate relevant articles. A narrative synthesis of included articles was performed to identify common findings.
Results: The systematic literature search yielded 3501 titles, from which 12 full texts were reviewed; a further eight were identified by searching the reference lists of the full texts. Of these articles, only three were randomized and controlled studies that examined variants of geographic bias. One study found that abstracts attributed to HIC sources elicited higher review scores for the relevance of the research and the likelihood of recommending it to a colleague than did abstracts attributed to low-income country (LIC) sources. Another study found that the predicted odds of acceptance for a submission to a computer science conference were statistically significantly higher for submissions from a "Top University." Two of the studies showed the presence of geographic bias between articles from "high" and "low" prestige institutions.
Conclusions: Two of the three included studies found that geographic bias in some form was affecting peer review; however, further robust, experimental evidence is needed to adequately inform practice surrounding this topic. Reviewers and researchers should nonetheless be aware of whether author and institutional characteristics are interfering with their judgement of research.
{"title":"The role of geographic bias in knowledge diffusion: a systematic review and narrative synthesis.","authors":"Mark Skopec, Hamdi Issa, Julie Reed, Matthew Harris","doi":"10.1186/s41073-019-0088-0","DOIUrl":"10.1186/s41073-019-0088-0","url":null,"abstract":"<p><strong>Background: </strong>Descriptive studies examining publication rates and citation counts demonstrate a geographic skew toward high-income countries (HIC), and research from low- or middle-income countries (LMICs) is generally underrepresented. This has been suggested to be due in part to reviewers' and editors' preference toward HIC sources; however, in the absence of controlled studies, it is impossible to assert whether there is bias or whether variations in the quality or relevance of the articles being reviewed explains the geographic divide. This study synthesizes the evidence from randomized and controlled studies that explore geographic bias in the peer review process.</p><p><strong>Methods: </strong>A systematic review was conducted to identify research studies that explicitly explore the role of geographic bias in the assessment of the quality of research articles. Only randomized and controlled studies were included in the review. Five databases were searched to locate relevant articles. A narrative synthesis of included articles was performed to identify common findings.</p><p><strong>Results: </strong>The systematic literature search yielded 3501 titles from which 12 full texts were reviewed, and a further eight were identified through searching reference lists of the full texts. Of these articles, only three were randomized and controlled studies that examined variants of geographic bias. One study found that abstracts attributed to HIC sources elicited a higher review score regarding relevance of the research and likelihood to recommend the research to a colleague, than did abstracts attributed to LIC sources. Another study found that the predicted odds of acceptance for a submission to a computer science conference were statistically significantly higher for submissions from a \"Top University.\" Two of the studies showed the presence of geographic bias between articles from \"high\" or \"low\" prestige institutions.</p><p><strong>Conclusions: </strong>Two of the three included studies identified that geographic bias in some form was impacting on peer review; however, further robust, experimental evidence is needed to adequately inform practice surrounding this topic. Reviewers and researchers should nonetheless be aware of whether author and institutional characteristics are interfering in their judgement of research.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"2"},"PeriodicalIF":0.0,"publicationDate":"2020-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0088-0","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37559003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impact of US industry payment disclosure laws on payments to surgeons: a natural experiment
Pub Date: 2020-01-03 · eCollection Date: 2020-01-01 · DOI: 10.1186/s41073-019-0087-1 · Research Integrity and Peer Review 5:1
Taeho Greg Rhee, Tijana Stanic, Joseph S Ross
Objectives: To compare changes in the number and amount of payments received by orthopedic and non-orthopedic surgeons from industry between 2014 and 2017.
Methods: Using the Centers for Medicare and Medicaid Services (CMS) Open Payment database from 2014 to 2017, we conducted a retrospective cohort study of industry payments to surgeons, including general payments and research payments.
Results: Among orthopedic surgeons, the total number of general payments decreased from 248,698 in 2014 to 241,966 in 2017, but their total value increased from $97.1 million to $110.2 million. Among non-orthopedic surgeons, the total number decreased from 604,884 in 2014 to 582,490 in 2017, while the total value remained stable at approximately $159 million. Between 2014 and 2017, there was a differential increase in the median number of general payments received by non-orthopedic surgeons compared with orthopedic surgeons (incidence rate ratio, 1.09; 95% CI, 1.08-1.09; p < 0.001), but a differential decline in the median value of general payments (−8.9%; 95% CI, −9.5% to −8.4%; p < 0.001). Findings were consistent when stratified by nature of payment. In contrast, between 2014 and 2017, there was no differential change in either the median number or the median value of research payments received by non-orthopedic surgeons.
Conclusion: This natural experiment in the prior public disclosure of payments to orthopedic surgeons suggests that the Physician Payment Sunshine Act was associated with an increase in the number, but a decline in the value, of general payments received by non-orthopedic surgeons, with no corresponding change in research payments received.
{"title":"Impact of US industry payment disclosure laws on payments to surgeons: a natural experiment.","authors":"Taeho Greg Rhee, Tijana Stanic, Joseph S Ross","doi":"10.1186/s41073-019-0087-1","DOIUrl":"10.1186/s41073-019-0087-1","url":null,"abstract":"<p><strong>Objectives: </strong>To compare changes in the number and amount of payments received by orthopedic and non-orthopedic surgeons from industry between 2014 and 2017.</p><p><strong>Methods: </strong>Using the Centers for Medicare and Medicaid Services (CMS) Open Payment database from 2014 to 2017, we conducted a retrospective cohort study of industry payments to surgeons, including general payments and research payments.</p><p><strong>Results: </strong>Among orthopedic surgeons, the total number of general payments decreased from 248,698 in 2014 to 241,966 in 2017, but their total value increased from $97.1 million in 2014 to $110.2 million in 2017. Among non-orthopedic surgeons, the total number decreased from 604,884 in 2014 to 582,490 in 2017, while the total value remained stable at approximately $159 million. Between 2014 and 2017, there was a differential increase in the median number of general payments received by non-orthopedic when compared to orthopedic surgeons (incidence rate ratio, 1.09; 95% CI, 1.08-1.09; <i>p</i> < 0.001), but a differential decline in the median value of general payments (- 8.9%; 95% CI, - 9.5%, - 8.4%; <i>p</i> < 0.001). Findings were consistent when stratified by nature of payment. In contrast, between 2014 and 2017, there was neither a differential change in the median number nor median value of research payments received by non-orthopedics.</p><p><strong>Conclusion: </strong>Examination of a natural experiment of prior public disclosure of payments to orthopedic surgeons suggests that the Physician Payment Sunshine Act was associated with an increase in the number, but a decline in the value, of general payments received by non-orthopedic surgeons, but not on research payments received.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"1"},"PeriodicalIF":0.0,"publicationDate":"2020-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6942346/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37521219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spin in the reporting, interpretation, and extrapolation of adverse effects of orthodontic interventions: protocol for a cross-sectional study of systematic reviews
Pub Date: 2019-12-19 · eCollection Date: 2019-01-01 · DOI: 10.1186/s41073-019-0084-4 · Research Integrity and Peer Review 4:27
Pauline A J Steegmans, Nicola Di Girolamo, Reint A Meursinge Reynders
Background: Titles and abstracts are the most read sections of biomedical papers. It is therefore important that abstracts transparently report both the beneficial and adverse effects of health care interventions and do not mislead the reader. Misleading reporting, interpretation, or extrapolation of study results is called "spin". In this study, we will assess whether adverse effects of orthodontic interventions were reported or considered in the abstracts of both Cochrane and non-Cochrane reviews, whether spin was present, and, if so, what type of spin.
Methods: Eligibility criteria were defined for the type of study designs, participants, interventions, outcomes, and settings. We will include systematic reviews of clinical orthodontic interventions published in the five leading orthodontic journals and in the Cochrane Database. Empty reviews will be excluded. We will manually search for eligible reviews published between 1 August 2009 and 31 July 2019. Data collection forms were developed a priori. All study selection and data extraction procedures will be conducted by two reviewers independently. Our main outcomes will be the prevalence of reported or considered adverse effects of orthodontic interventions in the abstracts of systematic reviews and the prevalence of "spin" related to these adverse effects. We will also record the prevalence of three subtypes of spin, i.e., misleading reporting, misleading interpretation, and misleading extrapolation. All statistics will be calculated for the following groups: (1) each journal individually, (2) all journals together, and (3) the five leading orthodontic journals and the Cochrane Database of Systematic Reviews separately. Generalized linear models will be developed to compare the various groups.
Discussion: We expect that our results will raise awareness of the importance of reporting and considering adverse effects, and of the phenomenon of spin related to these effects, in abstracts of systematic reviews of orthodontic interventions. This is important because incomplete or inadequate reporting, interpretation, or extrapolation of findings on adverse effects in abstracts of systematic reviews can mislead readers and could lead to inadequate clinical practice. Our findings could have policy implications for judgments about accepting systematic reviews of orthodontic interventions for publication.
{"title":"Spin in the reporting, interpretation, and extrapolation of adverse effects of orthodontic interventions: protocol for a cross-sectional study of systematic reviews.","authors":"Pauline A J Steegmans, Nicola Di Girolamo, Reint A Meursinge Reynders","doi":"10.1186/s41073-019-0084-4","DOIUrl":"https://doi.org/10.1186/s41073-019-0084-4","url":null,"abstract":"<p><strong>Background: </strong>Titles and abstracts are the most read sections of biomedical papers. It is therefore important that abstracts transparently report both the beneficial and adverse effects of health care interventions and do not mislead the reader. Misleading reporting, interpretation, or extrapolation of study results is called \"spin\". In this study, we will assess whether adverse effects of orthodontic interventions were reported or considered in the abstracts of both Cochrane and non-Cochrane reviews and whether spin was identified and what type of spin.</p><p><strong>Methods: </strong>Eligibility criteria were defined for the type of study designs, participants, interventions, outcomes, and settings. We will include systematic reviews of clinical orthodontic interventions published in the five leading orthodontic journals and in the Cochrane Database. Empty reviews will be excluded. We will manually search eligible reviews published between 1 August 2009 and 31 July 2019. Data collection forms were developed a priori. All study selection and data extraction procedures will be conducted by two reviewers independently. Our main outcomes will be the prevalence of reported or considered adverse effects of orthodontic interventions in the abstract of systematic reviews and the prevalence of \"spin\" related to these adverse effects. We will also record the prevalence of three subtypes of spin, i.e., misleading reporting, misleading interpretation, and misleading extrapolation-related spin. All statistics will be calculated for the following groups: (1) all journals individually, (2) all journals together, and (3) the five leading orthodontic journals and the Cochrane Database of Systematic Reviews separately. Generalized linear models will be developed to compare the various groups.</p><p><strong>Discussion: </strong>We expect that our results will raise the awareness of the importance of reporting and considering of adverse effects and the presence of the phenomenon of spin related to these effects in abstracts of systematic reviews of orthodontic interventions. This is important, because an incomplete and inadequate reporting, interpretation, or extrapolation of findings on adverse effects in abstracts of systematic reviews can mislead readers and could lead to inadequate clinical practice. Our findings could result in policy implications for making judgments about the acceptance for publication of systematic reviews of orthodontic interventions.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"4 ","pages":"27"},"PeriodicalIF":0.0,"publicationDate":"2019-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0084-4","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37502287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}