MyCites: a proposal to mark and report inaccurate citations in scholarly publications (Research Integrity and Peer Review 5:13)
Pub Date: 2020-09-17; eCollection Date: 2020-01-01; DOI: 10.1186/s41073-020-00099-8
Mohammad Hosseini, Martin Paul Eve, Bert Gordijn, Cameron Neylon
Background: Inaccurate citations are erroneous quotations or instances of paraphrasing of previously published material that mislead readers about the claims of the cited source. They are often unaddressed due to underreporting, the inability of peer reviewers and editors to detect them, and editors' reluctance to publish corrections about them. In this paper, we propose a new tool that could be used to tackle their circulation.
Methods: We provide a review of available data about inaccurate citations and analytically explore current ways of reporting and dealing with these inaccuracies. On this basis, we distinguish between the publication (i.e., first occurrence) and the circulation (i.e., reuse) of inaccurate citations. Sloppy reading of published items, ambiguity in the literature, and insufficient quality control in the editorial process are identified as factors that contribute to the publication of inaccurate citations. In contrast, reiteration or copy-pasting of citations without checking their validity, coupled with a lack of resources or motivation to report and correct inaccurate citations, contributes to their circulation.
Results and discussion: We propose the development of an online annotation tool called "MyCites" as a means of marking and mapping inaccurate citations. This tool would allow ORCID users to annotate citations and alert the authors of the cited and citing articles, as well as the editors of the journals in which the inaccurate citations appear. Each marked citation would travel with the digital version of the document via persistent identifiers and be visible on websites that host peer-reviewed articles (journals' websites, PubMed, etc.). Future development of MyCites will need to address challenges such as the criteria for judging a citation correct or incorrect, the question of which parties should adjudicate such judgements, and the handling of incorrect reports.
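The proposed workflow, in which an ORCID-authenticated annotation is attached to the persistent identifiers of both the citing and the cited article and then routed to the relevant authors and editors, can be pictured as a simple record. The sketch below is purely illustrative: the paper does not specify a data model, and all field names, DOIs, and values are hypothetical (the ORCID iD shown is ORCID's documented example iD).

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CitationAnnotation:
    """Hypothetical MyCites-style record marking one inaccurate citation."""
    annotator_orcid: str   # ORCID iD of the user reporting the inaccuracy
    citing_doi: str        # persistent identifier of the article containing the citation
    cited_doi: str         # persistent identifier of the source being misrepresented
    comment: str           # why the quotation or paraphrase misstates the cited claim
    reported_on: date = field(default_factory=date.today)
    notify: tuple = ("citing_authors", "cited_authors", "journal_editor")

# Example report; DOIs are placeholders, not real articles.
report = CitationAnnotation(
    annotator_orcid="0000-0002-1825-0097",
    citing_doi="10.1234/citing.article",
    cited_doi="10.5678/cited.source",
    comment="Cited source reports a correlation, not the causal effect claimed here.",
)
print(report)
```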
{"title":"MyCites: a proposal to mark and report inaccurate citations in scholarly publications.","authors":"Mohammad Hosseini, Martin Paul Eve, Bert Gordijn, Cameron Neylon","doi":"10.1186/s41073-020-00099-8","DOIUrl":"https://doi.org/10.1186/s41073-020-00099-8","url":null,"abstract":"<p><strong>Background: </strong>Inaccurate citations are erroneous quotations or instances of paraphrasing of previously published material that mislead readers about the claims of the cited source. They are often unaddressed due to underreporting, the inability of peer reviewers and editors to detect them, and editors' reluctance to publish corrections about them. In this paper, we propose a new tool that could be used to tackle their circulation.</p><p><strong>Methods: </strong>We provide a review of available data about inaccurate citations and analytically explore current ways of reporting and dealing with these inaccuracies. Consequently, we make a distinction between publication (i.e., first occurrence) and circulation (i.e., reuse) of inaccurate citations. Sloppy reading of published items, literature ambiguity and insufficient quality control in the editorial process are identified as factors that contribute to the publication of inaccurate citations. However, reiteration or copy-pasting without checking the validity of citations, paralleled with lack of resources/motivation to report/correct inaccurate citations contribute to their circulation.</p><p><strong>Results and discussion: </strong>We propose the development of an online annotation tool called \"MyCites\" as means with which to mark and map inaccurate citations. This tool allows ORCID users to annotate citations and alert authors (of the cited and citing articles) and also editors of journals where inaccurate citations are published. Each marked citation would travel with the digital version of the document (persistent identifiers) and be visible on websites that host peer-reviewed articles (journals' websites, Pubmed, etc.). In the future development of MyCites, challenges such as the conditions of correct/incorrect-ness and parties that should adjudicate that, and, the issue of dealing with incorrect reports need to be addressed.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"13"},"PeriodicalIF":0.0,"publicationDate":"2020-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-020-00099-8","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38509509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High impact nutrition and dietetics journals' use of publication procedures to increase research transparency (Research Integrity and Peer Review 5:12)
Pub Date: 2020-08-31; eCollection Date: 2020-01-01; DOI: 10.1186/s41073-020-00098-9
Dennis M Gorman, Alva O Ferdinand
Background: The rigor and integrity of published research in nutrition studies have come into serious question in recent years. Concerns focus on the use of flexible data analysis practices and selective reporting, and on the failure of peer-reviewed journals to identify and correct these practices. In response, it has been proposed that journals employ editorial procedures designed to improve the transparency of published research.
Objective: The present study examines the adoption of editorial procedures designed to improve the reporting of empirical studies in the field of nutrition and dietetics research.
Design: The instructions for authors of 43 journals included in Quartiles 1 and 2 of Clarivate Analytics' 2018 Journal Citation Report category Nutrition and Dietetics were reviewed. For journals that published original research, we assessed conflict of interest disclosure, recommendation of reporting guidelines, registration of clinical trials, registration of other types of studies, encouragement of data sharing, and use of the Registered Reports format. For journals that published only reviews, all of the procedures except clinical trial registration were assessed.
Results: Thirty-three journals published original research and 10 published only reviews. Conflict of interest disclosure was required by all 33 original research journals. Use of reporting guidelines, trial registration, and encouragement of data sharing were mentioned by 30, 27, and 25 journals, respectively. Registration of other study types was required by eight journals, and none offered Registered Reports as a publication option at the time of the review. All 10 review journals required conflict of interest disclosure; four recommended data sharing and three the use of reporting guidelines. None mentioned the other two procedures.
Conclusions: While nutrition journals have adopted a number of procedures designed to improve the reporting of research findings, their limited effects likely result from the mechanisms through which they influence analytic flexibility and selective reporting, and from the extent to which journals properly implement and enforce them.
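As an illustration of the kind of coding scheme described in the Design and Results sections (not the authors' actual instrument or data), each journal's instructions for authors could be coded as a set of yes/no flags and tallied per procedure. The journal names and flag values below are invented.

```python
# Minimal sketch of tallying editorial procedures across coded journals.
# Journal names and flag values are fabricated for illustration only.
from collections import Counter

PROCEDURES = ["coi_disclosure", "reporting_guidelines", "trial_registration",
              "other_registration", "data_sharing", "registered_reports"]

coded_journals = {
    "Journal A": {"coi_disclosure": True, "reporting_guidelines": True,
                  "trial_registration": True, "other_registration": False,
                  "data_sharing": True, "registered_reports": False},
    "Journal B": {"coi_disclosure": True, "reporting_guidelines": False,
                  "trial_registration": True, "other_registration": False,
                  "data_sharing": False, "registered_reports": False},
}

tally = Counter()
for flags in coded_journals.values():
    tally.update(p for p in PROCEDURES if flags[p])

for procedure in PROCEDURES:
    print(f"{procedure}: {tally[procedure]} of {len(coded_journals)} journals")
```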
{"title":"High impact nutrition and dietetics journals' use of publication procedures to increase research transparency.","authors":"Dennis M Gorman, Alva O Ferdinand","doi":"10.1186/s41073-020-00098-9","DOIUrl":"10.1186/s41073-020-00098-9","url":null,"abstract":"<p><strong>Background: </strong>The rigor and integrity of the published research in nutrition studies has come into serious question in recent years. Concerns focus on the use of flexible data analysis practices and selective reporting and the failure of peer review journals to identify and correct these practices. In response, it has been proposed that journals employ editorial procedures designed to improve the transparency of published research.</p><p><strong>Objective: </strong>The present study examines the adoption of editorial procedures designed to improve the reporting of empirical studies in the field of nutrition and dietetics research.</p><p><strong>Design: </strong>The instructions for authors of 43 journals included in Quartiles 1 and 2 of the Clarivate Analytics' 2018 Journal Citation Report category <i>Nutrition and Dietetics</i> were reviewed. For journals that published original research, conflict of interest disclosure, recommendation of reporting guidelines, registration of clinical trials, registration of other types of studies, encouraging data sharing, and use of the Registered Reports were assessed<i>.</i> For journals that only published reviews, all of the procedures except clinical trial registration were assessed.</p><p><strong>Results: </strong>Thirty-three journals published original research and 10 published only reviews. Conflict of interest disclosure was required by all 33 original research journals. Use of guidelines, trial registration and encouragement of data sharing were mentioned by 30, 27 and 25 journals, respectively. Registration of other studies was required by eight and none offered Registered Reports as a publication option at the time of the review. All 10 review journals required conflict of interest disclosure, four recommended data sharing and three the use of guidelines. None mentioned the other two procedures.</p><p><strong>Conclusions: </strong>While nutrition journals have adopted a number of procedures designed to improve the reporting of research findings, their limited effects likely result from the mechanisms through which they influence analytic flexibility and selective reporting and the extent to which they are properly implemented and enforced by journals.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"12"},"PeriodicalIF":7.2,"publicationDate":"2020-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7457801/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38343158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Innovating editorial practices: academic publishers at work (Research Integrity and Peer Review 5:11)
Pub Date: 2020-08-05; eCollection Date: 2020-01-01; DOI: 10.1186/s41073-020-00097-w
Serge P J M Horbach, Willem Halffman
Background: Triggered by a series of controversies and diversifying expectations of editorial practices, several innovative peer review procedures and supporting technologies have been proposed. However, adoption of these new initiatives seems slow. This raises questions about the wider conditions for peer review change and about the considerations that inform decisions to innovate. We set out to study the structure of commercial publishers' editorial process, to reveal how the benefits of peer review innovations are understood, and to describe the considerations that inform the implementation of innovations.
Methods: We carried out field visits to the editorial offices of two large academic publishers, which house the editorial staff of several hundred journals, to study their editorial process, and interviewed editors not affiliated with large publishers. Field notes were transcribed and analysed using coding software.
Results: At the publishers we analysed, the decision-making structure seems to show both clear patterns of hierarchy and a layering of the different editorial practices. While information about new initiatives circulates widely, their implementation depends on assessments of stakeholders' wishes, impact on reputation, efficiency, and implementation costs, with final decisions left to managers at the top of the internal hierarchy. The main tensions arise between commercial and substantive arguments. The editorial process is closely connected to commercial practices of creating business value, and to the very specific terms in which business value is understood, such as reputation considerations and the urge to increase efficiency. Journals independent of large commercial publishers tend to have less hierarchically structured processes, report more flexibility to implement innovations, and decouple commercial and editorial perspectives to a greater extent.
Conclusion: Our study demonstrates that peer review innovations are partly to be understood in light of commercial considerations related to reputation, efficiency, and implementation costs. These considerations extend beyond previously studied topics in publishing economics, such as publishers' choice of business or publication models, and reach into the very heart of the editorial and peer review process.
{"title":"Innovating editorial practices: academic publishers at work.","authors":"Serge P J M Horbach, Willem Halffman","doi":"10.1186/s41073-020-00097-w","DOIUrl":"https://doi.org/10.1186/s41073-020-00097-w","url":null,"abstract":"<p><strong>Background: </strong>Triggered by a series of controversies and diversifying expectations of editorial practices, several innovative peer review procedures and supporting technologies have been proposed. However, adoption of these new initiatives seems slow. This raises questions about the wider conditions for peer review change and about the considerations that inform decisions to innovate. We set out to study the structure of commercial publishers' editorial process, to reveal how the benefits of peer review innovations are understood, and to describe the considerations that inform the implementation of innovations.</p><p><strong>Methods: </strong>We carried out field visits to the editorial office of two large academic publishers housing the editorial staff of several hundreds of journals, to study their editorial process, and interviewed editors not affiliated with large publishers. Field notes were transcribed and analysed using coding software.</p><p><strong>Results: </strong>At the publishers we analysed, the decision-making structure seems to show both clear patterns of hierarchy and layering of the different editorial practices. While information about new initiatives circulates widely, their implementation depends on assessment of stakeholder's wishes, impact on reputation, efficiency and implementation costs, with final decisions left to managers at the top of the internal hierarchy. Main tensions arise between commercial and substantial arguments. The editorial process is closely connected to commercial practices of creating business value, and the very specific terms in which business value is understood, such as reputation considerations and the urge to increase efficiency. Journals independent of large commercial publishers tend to have less hierarchically structured processes, report more flexibility to implement innovations, and to a greater extent decouple commercial and editorial perspectives.</p><p><strong>Conclusion: </strong>Our study demonstrates that peer review innovations are partly to be understood in light of commercial considerations related to reputation, efficiency and implementations costs. These arguments extend beyond previously studied topics in publishing economics, including publishers' choice for business or publication models and reach into the very heart of the editorial and peer review process.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"11"},"PeriodicalIF":0.0,"publicationDate":"2020-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-020-00097-w","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38246445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantifying professionalism in peer review (Research Integrity and Peer Review 5:9)
Pub Date: 2020-07-24; eCollection Date: 2020-01-01; DOI: 10.1186/s41073-020-00096-x
Travis G Gerwing, Alyssa M Allen Gerwing, Stephanie Avery-Gomm, Chi-Yeung Choi, Jeff C Clements, Joshua A Rash
Background: The process of peer review in academia has attracted criticism surrounding issues of bias, fairness, and professionalism; however, the frequency with which unprofessional or unsubstantiated reviewer comments occur is unknown.
Methods: We evaluated 1491 sets of reviewer comments from the fields of "Ecology and Evolution" and "Behavioural Medicine," of which 920 were retrieved from the online review repository Publons and 571 were obtained from six early career investigators. Comment sets were coded for the occurrence of "unprofessional comments" and "incomplete, inaccurate or unsubstantiated critiques" using an a priori rubric based on our published research. Results are presented as absolute numbers and percentages.
Results: Overall, 12% (179) of comment sets included at least one unprofessional comment towards the author or their work, and 41% (611) contained incomplete, inaccurate, or unsubstantiated critiques (IIUCs).
Conclusions: The large number of unprofessional comments and IIUCs observed could heighten psychological distress among investigators, particularly those at an early stage in their career. We suggest that the development of, and adherence to, a universally agreed-upon reviewer code of conduct is necessary to improve the quality and professional experience of peer review.
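The proportions reported in the Results above are consistent with the stated sample of 1491 comment sets; a quick arithmetic check (values taken directly from the abstract):

```python
# Cross-check of the reported percentages against the 1491 coded comment sets.
total_comment_sets = 1491
unprofessional = 179
iiuc = 611

print(f"Unprofessional comments: {unprofessional / total_comment_sets:.1%}")  # 12.0%
print(f"IIUCs: {iiuc / total_comment_sets:.1%}")                              # 41.0%
```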
{"title":"Quantifying professionalism in peer review.","authors":"Travis G Gerwing, Alyssa M Allen Gerwing, Stephanie Avery-Gomm, Chi-Yeung Choi, Jeff C Clements, Joshua A Rash","doi":"10.1186/s41073-020-00096-x","DOIUrl":"https://doi.org/10.1186/s41073-020-00096-x","url":null,"abstract":"<p><strong>Background: </strong>The process of peer-review in academia has attracted criticism surrounding issues of bias, fairness, and professionalism; however, frequency of occurrence of such comments is unknown.</p><p><strong>Methods: </strong>We evaluated 1491 sets of reviewer comments from the fields of \"Ecology and Evolution\" and \"Behavioural Medicine,\" of which 920 were retrieved from the online review repository Publons and 571 were obtained from six early career investigators. Comment sets were coded for the occurrence of \"unprofessional comments\" and \"incomplete, inaccurate or unsubstantiated critiques\" using an a-prior rubric based on our published research. Results are presented as absolute numbers and percentages.</p><p><strong>Results: </strong>Overall, 12% (179) of comment sets included at least one unprofessional comment towards the author or their work, and 41% (611) contained incomplete, inaccurate of unsubstantiated critiques (IIUC).</p><p><strong>Conclusions: </strong>The large number of unprofessional comments, and IIUCs observed could heighten psychological distress among investigators, particularly those at an early stage in their career. We suggest that development and adherence to a universally agreed upon reviewer code of conduct is necessary to improve the quality and professional experience of peer review.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"9"},"PeriodicalIF":0.0,"publicationDate":"2020-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-020-00096-x","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38236038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Publishing computational research - a review of infrastructures for reproducible and transparent scholarly communication (Research Integrity and Peer Review 5:10)
Pub Date: 2020-07-14; eCollection Date: 2020-01-01; DOI: 10.1186/s41073-020-00095-y
Markus Konkol, Daniel Nüst, Laura Goulier
Background: The trend toward open science increases the pressure on authors to provide access to the source code and data they used to compute the results reported in their scientific papers. Since sharing materials reproducibly is challenging, several projects have developed solutions to support the release of executable analyses alongside articles.
Methods: We reviewed 11 applications that can assist researchers in adhering to reproducibility principles. The applications were found through a literature search and interactions with the reproducible research community. An application was included in our analysis if it (i) was actively maintained at the time the data for this paper were collected, (ii) supports the publication of executable code and data, and (iii) is connected to the scholarly publication process. By investigating the software documentation and published articles, we compared the applications across 19 criteria, such as deployment options and features that support authors in creating, and readers in studying, executable papers.
Results: Of the 11 applications, eight allow publishers to self-host the system for free, whereas three provide paid services. Authors can submit an executable analysis using Jupyter Notebooks or R Markdown documents (10 applications support these formats). All approaches provide features to assist readers in studying the materials, e.g., one-click reproducible results or tools for manipulating the analysis parameters. Six applications allow materials to be modified after publication.
Conclusions: The applications support authors in publishing reproducible research, predominantly through literate programming. Concerning readers, most applications provide user interfaces to inspect and manipulate the computational analysis. The next step is to investigate the gaps identified in this review, such as the costs publishers should expect when hosting an application, the handling of sensitive data, and the impacts on the review process.
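In practice, the executable analyses these platforms host boil down to a small, self-contained script or notebook shipped with its data so that a service can re-run it on demand. The sketch below is a generic illustration in plain Python (the reviewed applications mostly target Jupyter Notebooks and R Markdown); the file name and column are hypothetical.

```python
# fig1_analysis.py - a minimal self-contained analysis that an executable-paper
# platform could re-run to regenerate a reported summary from the deposited data.
import csv
import statistics

# Hypothetical data file deposited alongside the article.
with open("measurements.csv", newline="") as fh:
    values = [float(row["value"]) for row in csv.DictReader(fh)]

summary = {
    "n": len(values),
    "mean": round(statistics.mean(values), 2),
    "sd": round(statistics.stdev(values), 2),
}
# A reader can edit the data or parameters and re-run to check the published numbers.
print(summary)
```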
{"title":"Publishing computational research - a review of infrastructures for reproducible and transparent scholarly communication.","authors":"Markus Konkol, Daniel Nüst, Laura Goulier","doi":"10.1186/s41073-020-00095-y","DOIUrl":"10.1186/s41073-020-00095-y","url":null,"abstract":"<p><strong>Background: </strong>The trend toward open science increases the pressure on authors to provide access to the source code and data they used to compute the results reported in their scientific papers. Since sharing materials reproducibly is challenging, several projects have developed solutions to support the release of executable analyses alongside articles.</p><p><strong>Methods: </strong>We reviewed 11 applications that can assist researchers in adhering to reproducibility principles. The applications were found through a literature search and interactions with the reproducible research community. An application was included in our analysis if it <b>(i)</b> was actively maintained at the time the data for this paper was collected, <b>(ii)</b> supports the publication of executable code and data, <b>(iii)</b> is connected to the scholarly publication process. By investigating the software documentation and published articles, we compared the applications across 19 criteria, such as deployment options and features that support authors in creating and readers in studying executable papers.</p><p><strong>Results: </strong>From the 11 applications, eight allow publishers to self-host the system for free, whereas three provide paid services. Authors can submit an executable analysis using Jupyter Notebooks or R Markdown documents (10 applications support these formats). All approaches provide features to assist readers in studying the materials, e.g., one-click reproducible results or tools for manipulating the analysis parameters. Six applications allow for modifying materials after publication.</p><p><strong>Conclusions: </strong>The applications support authors to publish reproducible research predominantly with literate programming. Concerning readers, most applications provide user interfaces to inspect and manipulate the computational analysis. The next step is to investigate the gaps identified in this review, such as the costs publishers have to expect when hosting an application, the consideration of sensitive data, and impacts on the review process.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"10"},"PeriodicalIF":0.0,"publicationDate":"2020-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-020-00095-y","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38177048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open up: a survey on open and non-anonymized peer reviewing (Research Integrity and Peer Review 5:8)
Pub Date: 2020-06-26; eCollection Date: 2020-01-01; DOI: 10.1186/s41073-020-00094-z
Lonni Besançon, Niklas Rönnberg, Jonas Löwgren, Jonathan P Tennant, Matthew Cooper
Background: Our aim is to highlight the benefits and limitations of open and non-anonymized peer review. Our argument is based on the literature and on responses to a survey on the reviewing process of alt.chi, a more or less open review track within the Computer Human Interaction (CHI) conference, the predominant conference in the field of human-computer interaction. This track is currently the only implementation of an open peer review process in the field of human-computer interaction, although, with the recent increase in interest in open scientific practices, open review is now being considered and used in other fields.
Methods: We ran an online survey with 30 responses from alt.chi authors and reviewers, collecting quantitative data using multiple-choice questions and Likert scales. Qualitative data were collected using open questions.
Results: Our main quantitative result is that respondents are more positive toward open and non-anonymous reviewing for alt.chi than for other parts of the CHI conference. The qualitative data specifically highlight the benefits of open and transparent academic discussions. The data and scripts are available at https://osf.io/vuw7h/, and the figures and follow-up work at http://tiny.cc/OpenReviews.
Conclusion: While the benefits are quite clear and the system is generally well-liked by alt.chi participants, they remain reluctant to see it used in other venues. This concurs with a number of recent studies that suggest a divergence between support for a more open review process and its practical implementation.
{"title":"Open up: a survey on open and non-anonymized peer reviewing.","authors":"Lonni Besançon, Niklas Rönnberg, Jonas Löwgren, Jonathan P Tennant, Matthew Cooper","doi":"10.1186/s41073-020-00094-z","DOIUrl":"10.1186/s41073-020-00094-z","url":null,"abstract":"<p><strong>Background: </strong>Our aim is to highlight the benefits and limitations of open and non-anonymized peer review. Our argument is based on the literature and on responses to a survey on the reviewing process of alt.chi, a more or less open review track within the so-called Computer Human Interaction (CHI) conference, the predominant conference in the field of human-computer interaction. This track currently is the only implementation of an open peer review process in the field of human-computer interaction while, with the recent increase in interest in open scientific practices, open review is now being considered and used in other fields.</p><p><strong>Methods: </strong>We ran an online survey with 30 responses from alt.chi authors and reviewers, collecting quantitative data using multiple-choice questions and Likert scales. Qualitative data were collected using open questions.</p><p><strong>Results: </strong>Our main quantitative result is that respondents are more positive to open and non-anonymous reviewing for alt.chi than for other parts of the CHI conference. The qualitative data specifically highlight the benefits of open and transparent academic discussions. The data and scripts are available on https://osf.io/vuw7h/, and the figures and follow-up work on http://tiny.cc/OpenReviews.</p><p><strong>Conclusion: </strong>While the benefits are quite clear and the system is generally well-liked by alt.chi participants, they remain reluctant to see it used in other venues. This concurs with a number of recent studies that suggest a divergence between support for a more open review process and its practical implementation.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"8"},"PeriodicalIF":7.2,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7318523/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38109832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Grant reviewer perceptions of the quality, effectiveness, and influence of panel discussion (Research Integrity and Peer Review 5:7)
Pub Date: 2020-05-15; eCollection Date: 2020-01-01; DOI: 10.1186/s41073-020-00093-0
Stephen A Gallo, Karen B Schmaling, Lisa A Thompson, Scott R Glisson
Background: Funding agencies have long used panel discussion in the peer review of research grant proposals as a way to draw on a range of expertise and perspectives in making funding decisions. Little research has examined the quality of panel discussions and how effectively they are facilitated.
Methods: Here, we present a mixed-method analysis of data from a survey of reviewers focused on their perceptions of the quality, effectiveness, and influence of panel discussion from their last peer review experience.
Results: Reviewers indicated that panel discussions were viewed favorably in terms of participation, clarifying differing opinions, informing unassigned reviewers, and chair facilitation. However, some reviewers mentioned issues with panel discussions, including an uneven focus, limited participation from unassigned reviewers, and short discussion times. Most reviewers felt the discussions affected the review outcome, helped in choosing the best science, and were generally fair and balanced. However, those who felt the discussion did not affect the outcome were also more likely to evaluate panel communication negatively, and several reviewers mentioned potential sources of bias related to the discussion. Respondents strongly acknowledged the importance of the chair in facilitating the discussion so that it appropriately informs scoring and in limiting the influence of potential sources of bias on scoring; nevertheless, nearly a third did not find that the chair of their most recent panel had performed these roles effectively.
Conclusions: It is likely that improving chair training in the management of discussion as well as creating review procedures that are informed by the science of leadership and team communication would improve review processes and proposal review reliability.
{"title":"Grant reviewer perceptions of the quality, effectiveness, and influence of panel discussion.","authors":"Stephen A Gallo, Karen B Schmaling, Lisa A Thompson, Scott R Glisson","doi":"10.1186/s41073-020-00093-0","DOIUrl":"https://doi.org/10.1186/s41073-020-00093-0","url":null,"abstract":"<p><strong>Background: </strong>Funding agencies have long used panel discussion in the peer review of research grant proposals as a way to utilize a set of expertise and perspectives in making funding decisions. Little research has examined the quality of panel discussions and how effectively they are facilitated.</p><p><strong>Methods: </strong>Here, we present a mixed-method analysis of data from a survey of reviewers focused on their perceptions of the quality, effectiveness, and influence of panel discussion from their last peer review experience.</p><p><strong>Results: </strong>Reviewers indicated that panel discussions were viewed favorably in terms of participation, clarifying differing opinions, informing unassigned reviewers, and chair facilitation. However, some reviewers mentioned issues with panel discussions, including an uneven focus, limited participation from unassigned reviewers, and short discussion times. Most reviewers felt the discussions affected the review outcome, helped in choosing the best science, and were generally fair and balanced. However, those who felt the discussion did not affect the outcome were also more likely to evaluate panel communication negatively, and several reviewers mentioned potential sources of bias related to the discussion. While respondents strongly acknowledged the importance of the chair in ensuring appropriate facilitation of the discussion to influence scoring and to limit the influence of potential sources of bias from the discussion on scoring, nearly a third of respondents did not find the chair of their most recent panel to have performed these roles effectively.</p><p><strong>Conclusions: </strong>It is likely that improving chair training in the management of discussion as well as creating review procedures that are informed by the science of leadership and team communication would improve review processes and proposal review reliability.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"7"},"PeriodicalIF":0.0,"publicationDate":"2020-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-020-00093-0","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37986771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The limitations to our understanding of peer review (Research Integrity and Peer Review 5:6)
Pub Date: 2020-04-30; eCollection Date: 2020-01-01; DOI: 10.1186/s41073-020-00092-1
Jonathan P Tennant, Tony Ross-Hellauer
Peer review is embedded in the core of our knowledge generation systems, perceived as a method for establishing the quality or scholarly legitimacy of research, while also often conferring academic prestige and standing on individuals. Despite its critical importance, it curiously remains poorly understood in a number of dimensions. In order to address this, we have analysed peer review to assess where the major gaps in our theoretical and empirical understanding of it lie. We identify core themes, including editorial responsibility, the subjectivity and bias of reviewers, the function and quality of peer review, and the social and epistemic implications of peer review. The high-priority gaps centre on increased accountability and justification in decision-making processes for editors, and on developing a deeper, empirical understanding of the social impact of peer review. Addressing these gaps will, at a bare minimum, require building consensus on a minimal set of standards for what constitutes peer review, and developing a shared data infrastructure to support this. Such a field requires sustained funding and commitment from publishers and research funders, who both have a commitment to uphold the integrity of the published scholarly record. We use this analysis to present a guide for the future of peer review and for the development of a new research discipline based on the study of peer review.
{"title":"The limitations to our understanding of peer review.","authors":"Jonathan P Tennant, Tony Ross-Hellauer","doi":"10.1186/s41073-020-00092-1","DOIUrl":"10.1186/s41073-020-00092-1","url":null,"abstract":"<p><p>Peer review is embedded in the core of our knowledge generation systems, perceived as a method for establishing quality or scholarly legitimacy for research, while also often distributing academic prestige and standing on individuals. Despite its critical importance, it curiously remains poorly understood in a number of dimensions. In order to address this, we have analysed peer review to assess where the major gaps in our theoretical and empirical understanding of it lie. We identify core themes including editorial responsibility, the subjectivity and bias of reviewers, the function and quality of peer review, and the social and epistemic implications of peer review. The high-priority gaps are focused around increased accountability and justification in decision-making processes for editors and developing a deeper, empirical understanding of the social impact of peer review. Addressing this at the bare minimum will require the design of a consensus for a minimal set of standards for what constitutes peer review, and the development of a shared data infrastructure to support this. Such a field requires sustained funding and commitment from publishers and research funders, who both have a commitment to uphold the integrity of the published scholarly record. We use this to present a guide for the future of peer review, and the development of a new research discipline based on the study of peer review.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"6"},"PeriodicalIF":0.0,"publicationDate":"2020-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7191707/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37901685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proceedings from the V Brazilian Meeting on Research Integrity, Science and Publication Ethics (V BRISPE) (Research Integrity and Peer Review 5)
Pub Date: 2020-03-01; DOI: 10.1186/s41073-020-0090-6
Adriane Gomes, D. Custódio, Lara Coelho, Marina Marques, R. Sanda, Tânia Araújo, M. Gallas, E. F. Silveira
{"title":"Proceedings from the V Brazilian Meeting on Research Integrity, Science and Publication Ethics (V BRISPE)","authors":"Adriane Gomes, D. Custódio, Lara Coelho, Marina Marques, R. Sanda, Tânia Araújo, M. Gallas, E. F. Silveira","doi":"10.1186/s41073-020-0090-6","DOIUrl":"https://doi.org/10.1186/s41073-020-0090-6","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-020-0090-6","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47693162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reproducible and transparent research practices in published neurology research (Research Integrity and Peer Review 5:5)
Pub Date: 2020-02-28; eCollection Date: 2020-01-01; DOI: 10.1186/s41073-020-0091-5
Shelby Rauh, Trevor Torgerson, Austin L Johnson, Jonathan Pollard, Daniel Tritz, Matt Vassar
Background: The objective of this study was to evaluate the nature and extent of reproducible and transparent research practices in neurology publications.
Methods: The NLM catalog was used to identify MEDLINE-indexed neurology journals. A PubMed search of these journals was conducted to retrieve publications over a 5-year period from 2014 to 2018. A random sample of publications was extracted. Two authors conducted data extraction in a blinded, duplicate fashion using a pilot-tested Google form. This form prompted data extractors to determine whether publications provided access to items such as study materials, raw data, analysis scripts, and protocols. In addition, we determined if the publication was included in a replication study or systematic review, was preregistered, had a conflict of interest declaration, specified funding sources, and was open access.
Results: Our search identified 223,932 publications meeting the inclusion criteria, from which 400 were randomly sampled. Only 389 articles were accessible, yielding 271 publications with empirical data for analysis. Our results indicate that 9.4% provided access to materials, 9.2% provided access to raw data, 0.7% provided access to the analysis scripts, 0.7% linked the protocol, and 3.7% were preregistered. A third of sampled publications lacked funding or conflict of interest statements. No publications from our sample were included in replication studies, but a fifth were cited in a systematic review or meta-analysis.
Conclusions: Currently, published neurology research does not consistently provide the information needed for reproducibility. Poor research reporting can both affect patient care and increase research waste. Collaborative intervention by authors, peer reviewers, journals, and funding sources is needed to mitigate this problem.
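For a sense of scale, the percentages reported in the Results above translate back into small absolute counts given the 271 analysable publications. This is a rough back-calculation from the abstract's rounded percentages (the per-item denominators in the full paper may differ slightly):

```python
# Approximate counts implied by the reported percentages (denominator = 271 publications).
n = 271
indicators = [
    ("materials available", 9.4),
    ("raw data available", 9.2),
    ("analysis scripts available", 0.7),
    ("protocol linked", 0.7),
    ("preregistered", 3.7),
]
for label, pct in indicators:
    print(f"{label}: ~{round(n * pct / 100)} of {n} publications")
```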
{"title":"Reproducible and transparent research practices in published neurology research.","authors":"Shelby Rauh, Trevor Torgerson, Austin L Johnson, Jonathan Pollard, Daniel Tritz, Matt Vassar","doi":"10.1186/s41073-020-0091-5","DOIUrl":"10.1186/s41073-020-0091-5","url":null,"abstract":"<p><strong>Background: </strong>The objective of this study was to evaluate the nature and extent of reproducible and transparent research practices in neurology publications.</p><p><strong>Methods: </strong>The NLM catalog was used to identify MEDLINE-indexed neurology journals. A PubMed search of these journals was conducted to retrieve publications over a 5-year period from 2014 to 2018. A random sample of publications was extracted. Two authors conducted data extraction in a blinded, duplicate fashion using a pilot-tested Google form. This form prompted data extractors to determine whether publications provided access to items such as study materials, raw data, analysis scripts, and protocols. In addition, we determined if the publication was included in a replication study or systematic review, was preregistered, had a conflict of interest declaration, specified funding sources, and was open access.</p><p><strong>Results: </strong>Our search identified 223,932 publications meeting the inclusion criteria, from which 400 were randomly sampled. Only 389 articles were accessible, yielding 271 publications with empirical data for analysis. Our results indicate that 9.4% provided access to materials, 9.2% provided access to raw data, 0.7% provided access to the analysis scripts, 0.7% linked the protocol, and 3.7% were preregistered. A third of sampled publications lacked funding or conflict of interest statements. No publications from our sample were included in replication studies, but a fifth were cited in a systematic review or meta-analysis.</p><p><strong>Conclusions: </strong>Currently, published neurology research does not consistently provide information needed for reproducibility. The implications of poor research reporting can both affect patient care and increase research waste. Collaborative intervention by authors, peer reviewers, journals, and funding sources is needed to mitigate this problem.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"5"},"PeriodicalIF":0.0,"publicationDate":"2020-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7049215/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37729304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}