Pub Date: 2021-04-15 | DOI: 10.1186/s41073-021-00109-3
Elizabeth Wager, Sabine Kleinert
Background: Inaccurate, false or incomplete research publications may mislead readers including researchers and decision-makers. It is therefore important that such problems are identified and rectified promptly. This usually involves collaboration between the research institutions and academic journals involved, but these interactions can be problematic.
Methods: These recommendations were developed following discussions at World Conferences on Research Integrity in 2013 and 2017, and at a specially convened 3-day workshop in 2016 involving participants from 7 countries with expertise in publication ethics and research integrity. The recommendations aim to address issues surrounding cooperation and liaison between institutions (e.g. universities) and journals about possible and actual problems with the integrity of reported research arising before and after publication.
Results: The main recommendations are that research institutions should: 1) develop mechanisms for assessing the integrity of reported research (if concerns are raised) that are distinct from processes to determine whether individual researchers have committed misconduct; 2) release relevant sections of reports of research integrity or misconduct investigations to all journals that have published research that was investigated; 3) take responsibility for research performed under their auspices regardless of whether the researcher still works at that institution or how long ago the work was done; 4) work with funders to ensure essential research data is retained for at least 10 years. Journals should: 1) respond to institutions about research integrity cases in a timely manner; 2) have criteria for determining whether, and what type of, information and evidence relating to the integrity of research reports should be passed on to institutions; 3) pass on research integrity concerns to institutions, regardless of whether they intend to accept the work for publication; 4) retain peer review records for at least 10 years to enable the investigation of peer review manipulation or other inappropriate behaviour by authors or reviewers.
Conclusions: Various difficulties can prevent effective cooperation between academic journals and research institutions about research integrity concerns and hinder the correction of the research record if problems are discovered. While the issues and their solutions may vary across different settings, we encourage research institutions, journals and funders to consider how they might improve future collaboration and cooperation on research integrity cases.
{"title":"Cooperation & Liaison between Universities & Editors (CLUE): recommendations on best practice.","authors":"Elizabeth Wager, Sabine Kleinert","doi":"10.1186/s41073-021-00109-3","DOIUrl":"10.1186/s41073-021-00109-3","url":null,"abstract":"<p><strong>Background: </strong>Inaccurate, false or incomplete research publications may mislead readers including researchers and decision-makers. It is therefore important that such problems are identified and rectified promptly. This usually involves collaboration between the research institutions and academic journals involved, but these interactions can be problematic.</p><p><strong>Methods: </strong>These recommendations were developed following discussions at World Conferences on Research Integrity in 2013 and 2017, and at a specially convened 3-day workshop in 2016 involving participants from 7 countries with expertise in publication ethics and research integrity. The recommendations aim to address issues surrounding cooperation and liaison between institutions (e.g. universities) and journals about possible and actual problems with the integrity of reported research arising before and after publication.</p><p><strong>Results: </strong>The main recommendations are that research institutions should: 1) develop mechanisms for assessing the integrity of reported research (if concerns are raised) that are distinct from processes to determine whether individual researchers have committed misconduct; 2) release relevant sections of reports of research integrity or misconduct investigations to all journals that have published research that was investigated; 3) take responsibility for research performed under their auspices regardless of whether the researcher still works at that institution or how long ago the work was done; 4) work with funders to ensure essential research data is retained for at least 10 years. 
Journals should: 1) respond to institutions about research integrity cases in a timely manner; 2) have criteria for determining whether, and what type of, information and evidence relating to the integrity of research reports should be passed on to institutions; 3) pass on research integrity concerns to institutions, regardless of whether they intend to accept the work for publication; 4) retain peer review records for at least 10 years to enable the investigation of peer review manipulation or other inappropriate behaviour by authors or reviewers.</p><p><strong>Conclusions: </strong>Various difficulties can prevent effective cooperation between academic journals and research institutions about research integrity concerns and hinder the correction of the research record if problems are discovered. While the issues and their solutions may vary across different settings, we encourage research institutions, journals and funders to consider how they might improve future collaboration and cooperation on research integrity cases.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"6"},"PeriodicalIF":7.2,"publicationDate":"2021-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8048029/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25590216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-03-01 | DOI: 10.1186/s41073-021-00108-4
Klaas Sijtsma, Wilco H M Emons, Nicholas H Steneck, Lex M Bouter
Background: A proposal to encourage the preregistration of research on research integrity was developed and adopted as the Amsterdam Agenda at the 5th World Conference on Research Integrity (Amsterdam, 2017). This paper reports on the degree to which abstracts of the 6th World Conference on Research Integrity (Hong Kong, 2019) reported on preregistered research.
Methods: Conference registration data on participants presenting a paper or a poster at the 6th WCRI were made available to the research team. Because the data set was too small for inferential statistics, this report is limited to a basic description of results and some recommendations that should be considered when taking further steps to improve preregistration.
Results: 19% of the 308 presenters preregistered their research. Of the 56 usable cases, fewer than half provided information on the six key elements of the Amsterdam Agenda. Others provided information that invalidated their data, such as an uninformative URL. There was no discernible difference between qualitative and quantitative research.
Conclusions: Some presenters at the WCRI have preregistered their research on research integrity, but further steps are needed to increase the frequency and completeness of preregistration. One approach would be to make preregistration a requirement for research presented at the World Conferences on Research Integrity.
{"title":"Steps toward preregistration of research on research integrity.","authors":"Klaas Sijtsma, Wilco H M Emons, Nicholas H Steneck, Lex M Bouter","doi":"10.1186/s41073-021-00108-4","DOIUrl":"10.1186/s41073-021-00108-4","url":null,"abstract":"<p><strong>Background: </strong>A proposal to encourage the preregistration of research on research integrity was developed and adopted as the Amsterdam Agenda at the 5th World Conference on Research Integrity (Amsterdam, 2017). This paper reports on the degree to which abstracts of the 6th World Conference in Research Integrity (Hong Kong, 2019) reported on preregistered research.</p><p><strong>Methods: </strong>Conference registration data on participants presenting a paper or a poster at 6th WCRI were made available to the research team. Because the data set was too small for inferential statistics this report is limited to a basic description of results and some recommendations that should be considered when taking further steps to improve preregistration.</p><p><strong>Results: </strong>19% of the 308 presenters preregistered their research. Of the 56 usable cases, less than half provided information on the six key elements of the Amsterdam Agenda. Others provided information that invalidated their data, such as an uninformative URL. There was no discernable difference between qualitative and quantitative research.</p><p><strong>Conclusions: </strong>Some presenters at the WCRI have preregistered their research on research integrity, but further steps are needed to increase frequency and completeness of preregistration. 
One approach to increase preregistration would be to make it a requirement for research presented at the World Conferences on Research Integrity.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"5"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7923522/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25425863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-02-16 | DOI: 10.1186/s41073-020-00107-x
Travis G Gerwing, Alyssa M Allen Gerwing, Chi-Yeung Choi, Stephanie Avery-Gomm, Jeff C Clements, Joshua A Rash
Our recent paper ( https://doi.org/10.1186/s41073-020-00096-x ) reported that 43% of reviewer comment sets (n=1491) shared with authors contained at least one unprofessional comment or an incomplete, inaccurate, or unsubstantiated critique (IIUC). Publication of this work sparked an online (i.e., Twitter, Instagram, Facebook, and Reddit) conversation about professionalism in peer review. We collected and analyzed these social media comments (96 comments from July 24th to September 3rd, 2020) because they offered real-time responses to our work and provided insight into views held by commenters and potential peer reviewers that would be difficult to quantify using existing empirical tools. Overall, 75% of comments were positive: 59% were supportive and 16% shared similar personal experiences. However, a subset of negative comments emerged (22% of comments were negative and 6% were unsubstantiated critiques of the methodology) that offered potential insight into why unprofessional comments are made during the peer-review process. These comments were classified into three main themes: (1) forced niceness will adversely impact the peer-review process and allow the publication of poor-quality science (5% of online comments); (2) comments are not offensive to another person simply because the reader did not find them personally offensive (6%); and (3) authors bring unprofessional comments upon themselves by submitting substandard work (5%). Here, we argue against these themes as justifications for directing unprofessional comments at authors during the peer-review process. We argue that it is possible to be both critical and professional, and that no author deserves demeaning ad hominem attacks, regardless of supposed provocation. Suggesting otherwise only serves to propagate a toxic culture within peer review.
While we previously postulated that establishing a peer-reviewer code of conduct could help improve the peer-review system, we now posit that priority should be given to repairing the negative cultural zeitgeist that exists in peer review.
{"title":"Re-evaluation of solutions to the problem of unprofessionalism in peer review.","authors":"Travis G Gerwing, Alyssa M Allen Gerwing, Chi-Yeung Choi, Stephanie Avery-Gomm, Jeff C Clements, Joshua A Rash","doi":"10.1186/s41073-020-00107-x","DOIUrl":"https://doi.org/10.1186/s41073-020-00107-x","url":null,"abstract":"<p><p>Our recent paper ( https://doi.org/10.1186/s41073-020-00096-x ) reported that 43% of reviewer comment sets (n=1491) shared with authors contained at least one unprofessional comment or an incomplete, inaccurate of unsubstantiated critique (IIUC). Publication of this work sparked an online (i.e., Twitter, Instagram, Facebook, and Reddit) conversation surrounding professionalism in peer review. We collected and analyzed these social media comments as they offered real-time responses to our work and provided insight into the views held by commenters and potential peer-reviewers that would be difficult to quantify using existing empirical tools (96 comments from July 24th to September 3rd, 2020). Overall, 75% of comments were positive, of which 59% were supportive and 16% shared similar personal experiences. However, a subset of negative comments emerged (22% of comments were negative and 6% were an unsubstantiated critique of the methodology), that provided potential insight into the reasons underlying unprofessional comments were made during the peer-review process. These comments were classified into three main themes: (1) forced niceness will adversely impact the peer-review process and allow for publication of poor-quality science (5% of online comments); (2) dismissing comments as not offensive to another person because they were not deemed personally offensive to the reader (6%); and (3) authors brought unprofessional comments upon themselves as they submitted substandard work (5%). Here, we argue against these themes as justifications for directing unprofessional comments towards authors during the peer review process. 
We argue that it is possible to be both critical and professional, and that no author deserves to be the recipient of demeaning ad hominem attacks regardless of supposed provocation. Suggesting otherwise only serves to propagate a toxic culture within peer review. While we previously postulated that establishing a peer-reviewer code of conduct could help improve the peer-review system, we now posit that priority should be given to repairing the negative cultural zeitgeist that exists in peer-review.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"4"},"PeriodicalIF":0.0,"publicationDate":"2021-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-020-00107-x","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25375244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-02-01 | DOI: 10.1186/s41073-020-00106-y
Nick Kinney, Araba Wubah, Miguel Roig, Harold R Garner
Background: Scientists communicate progress and exchange information via publication and presentation at scientific meetings. We previously showed that text similarity analysis applied to Medline can identify and quantify plagiarism and duplicate publications in peer-reviewed biomedical journals. In the present study, we applied the same analysis to a large sample of conference abstracts.
Methods: We downloaded 144,149 abstracts from 207 national and international meetings of 63 biomedical conferences. Pairwise comparisons were made using eTBLAST, a text-similarity engine. A domain expert then reviewed random samples of highly similar abstracts (1500 in total) to estimate the extent of text overlap and possible plagiarism.
Results: Our main findings indicate that the vast majority of textual overlap occurred within the same meeting (2%) and between meetings of the same conference (3%), both of which were significantly higher than the rate of plagiarism, which occurred in fewer than 0.5% of abstracts.
Conclusions: This analysis indicates that textual overlap in abstracts of papers presented at scientific meetings is one-tenth that of peer-reviewed publications, yet the plagiarism rate is approximately the same as previously measured in peer-reviewed publications. This latter finding underscores a need for monitoring scientific meeting submissions - as is now done when submitting manuscripts to peer-reviewed journals - to improve the integrity of scientific communications.
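The screening step described above pairs every abstract against every other and flags pairs whose similarity exceeds a threshold for expert review. eTBLAST's actual scoring model is not reproduced here; the following is a minimal sketch of that pairwise-screening idea using token-set Jaccard similarity, with illustrative function names, threshold, and toy corpus (all assumptions, not the authors' pipeline).

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two abstracts (0.0 to 1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def flag_similar(abstracts: dict[str, str], threshold: float = 0.6):
    """Compare every pair of abstracts; return IDs and scores above the threshold."""
    return [
        (i, j, round(jaccard(abstracts[i], abstracts[j]), 2))
        for i, j in combinations(abstracts, 2)
        if jaccard(abstracts[i], abstracts[j]) >= threshold
    ]

# Toy corpus: two near-duplicate abstracts and one unrelated abstract.
corpus = {
    "A1": "text similarity analysis identifies duplicate publications in medline",
    "A2": "text similarity analysis identifies duplicate publications in conference abstracts",
    "A3": "qualitative interviews with policy makers about success in science",
}
print(flag_similar(corpus))  # → [('A1', 'A2', 0.7)]
```

In practice a production screen would use a more robust similarity measure (e.g., shingled n-grams or TF-IDF cosine) and, as in the study, route high-scoring pairs to a domain expert rather than labelling them plagiarism automatically.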
{"title":"Estimating the prevalence of text overlap in biomedical conference abstracts.","authors":"Nick Kinney, Araba Wubah, Miguel Roig, Harold R Garner","doi":"10.1186/s41073-020-00106-y","DOIUrl":"https://doi.org/10.1186/s41073-020-00106-y","url":null,"abstract":"<p><strong>Background: </strong>Scientists communicate progress and exchange information via publication and presentation at scientific meetings. We previously showed that text similarity analysis applied to Medline can identify and quantify plagiarism and duplicate publications in peer-reviewed biomedical journals. In the present study, we applied the same analysis to a large sample of conference abstracts.</p><p><strong>Methods: </strong>We downloaded 144,149 abstracts from 207 national and international meetings of 63 biomedical conferences. Pairwise comparisons were made using eTBLAST: a text similarity engine. A domain expert then reviewed random samples of highly similar abstracts (1500 total) to estimate the extent of text overlap and possible plagiarism.</p><p><strong>Results: </strong>Our main findings indicate that the vast majority of textual overlap occurred within the same meeting (2%) and between meetings of the same conference (3%), both of which were significantly higher than instances of plagiarism, which occurred in less than .5% of abstracts.</p><p><strong>Conclusions: </strong>This analysis indicates that textual overlap in abstracts of papers presented at scientific meetings is one-tenth that of peer-reviewed publications, yet the plagiarism rate is approximately the same as previously measured in peer-reviewed publications. 
This latter finding underscores a need for monitoring scientific meeting submissions - as is now done when submitting manuscripts to peer-reviewed journals - to improve the integrity of scientific communications.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"2"},"PeriodicalIF":0.0,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-020-00106-y","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25313727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-14 | DOI: 10.1186/s41073-020-00105-z
Noémie Aubert Bonn, Wim Pinxten
Background: Research misconduct and questionable research practices have received increasing attention in the past few years. Yet despite the rich body of research available, few empirical studies include the perspectives of non-researcher stakeholders.
Methods: We conducted semi-structured interviews and focus groups with policy makers, funders, institution leaders, editors or publishers, research integrity office members, research integrity community members, laboratory technicians, researchers, research students, and former researchers who had changed careers, to inquire into success, integrity, and responsibilities in science. We used the Flemish biomedical landscape as a baseline to grasp the views of interacting and complementary actors in a system setting.
Results: Given the breadth of our results, we divided our findings into a two-paper series, with the current paper focusing on the problems that affect the integrity and culture of research. We first found that different actors have different perspectives on these problems. Problems were linked either to personalities and attitudes or to the climates in which researchers operate. Elements described as essential for success (in the associated paper) were often thought to accentuate the problems of research climates by disrupting research culture and research integrity. Even though all participants agreed that current research climates need to be addressed, participants generally felt neither responsible for nor capable of initiating change. Instead, respondents revealed a circle of blame and mistrust between actor groups.
Conclusions: Our findings resonate with recent debates and suggest a few action points that might help advance the discussion. First, the research integrity debate must revisit and tackle the way in which researchers are assessed. Second, approaches to promote better science need to address the impact that research climates have on research integrity and research culture rather than capitalize on individual researchers' compliance. Finally, inter-actor dialogue and shared decision-making must be given priority to ensure that the perspectives of the full research system are captured. Understanding the relations and interdependencies between these perspectives is key to addressing the problems of science.
{"title":"Rethinking success, integrity, and culture in research (part 2) - a multi-actor qualitative study on problems of science.","authors":"Noémie Aubert Bonn, Wim Pinxten","doi":"10.1186/s41073-020-00105-z","DOIUrl":"10.1186/s41073-020-00105-z","url":null,"abstract":"<p><strong>Background: </strong>Research misconduct and questionable research practices have been the subject of increasing attention in the past few years. But despite the rich body of research available, few empirical works also include the perspectives of non-researcher stakeholders.</p><p><strong>Methods: </strong>We conducted semi-structured interviews and focus groups with policy makers, funders, institution leaders, editors or publishers, research integrity office members, research integrity community members, laboratory technicians, researchers, research students, and former-researchers who changed career to inquire on the topics of success, integrity, and responsibilities in science. We used the Flemish biomedical landscape as a baseline to be able to grasp the views of interacting and complementary actors in a system setting.</p><p><strong>Results: </strong>Given the breadth of our results, we divided our findings in a two-paper series with the current paper focusing on the problems that affect the integrity and research culture. We first found that different actors have different perspectives on the problems that affect the integrity and culture of research. Problems were either linked to personalities and attitudes, or to the climates in which researchers operate. Elements that were described as essential for success (in the associate paper) were often thought to accentuate the problems of research climates by disrupting research culture and research integrity. Even though all participants agreed that current research climates need to be addressed, participants generally did not feel responsible nor capable of initiating change. 
Instead, respondents revealed a circle of blame and mistrust between actor groups.</p><p><strong>Conclusions: </strong>Our findings resonate with recent debates, and extrapolate a few action points which might help advance the discussion. First, the research integrity debate must revisit and tackle the way in which researchers are assessed. Second, approaches to promote better science need to address the impact that research climates have on research integrity and research culture rather than to capitalize on individual researchers' compliance. Finally, inter-actor dialogues and shared decision making must be given priority to ensure that the perspectives of the full research system are captured. Understanding the relations and interdependency between these perspectives is key to be able to address the problems of science.</p><p><strong>Study registration: </strong>https://osf.io/33v3m.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"3"},"PeriodicalIF":0.0,"publicationDate":"2021-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7807493/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39152990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-14 | DOI: 10.1186/s41073-020-00104-0
Noémie Aubert Bonn, Wim Pinxten
Background: Success shapes the lives and careers of scientists. But success in science is difficult to define, let alone to translate into indicators that can be used for assessment. In the past few years, several groups have expressed dissatisfaction with the indicators currently used for assessing researchers. But given the lack of agreement on what should constitute success in science, most proposals remain unresolved. This paper aims to complement our understanding of success in science and to document areas of tension and conflict in research assessments.
Methods: We conducted semi-structured interviews and focus groups with policy makers, funders, institution leaders, editors or publishers, research integrity office members, research integrity community members, laboratory technicians, researchers, research students, and former researchers who had changed careers, to inquire into success, integrity, and responsibilities in science. We used the Flemish biomedical landscape as a baseline to grasp the views of interacting and complementary actors in a system setting.
Results: Given the breadth of our results, we divided our findings into a two-paper series, with the current paper focusing on what defines and determines success in science. Respondents depicted success as a multi-factorial, context-dependent, and mutable construct. Success appeared to be an interaction between characteristics of the researcher (Who), research outputs (What), processes (How), and luck. Interviewees noted that current research assessments overvalue outputs but largely ignore the processes deemed essential for research quality and integrity. Interviewees suggested that science needs a diversity of indicators that are transparent, robust, and valid, and that allow a balanced and diverse view of success; that the assessment of scientists should not depend blindly on metrics but should also value human input; and that quality should be valued over quantity.
Conclusions: The objective of research assessments may be to encourage good researchers, to benefit society, or simply to advance science. Yet we show that current assessments fall short on each of these objectives. Open and transparent inter-actor dialogue is needed to understand what research assessments aim for and how they can best achieve their objective.
{"title":"Rethinking success, integrity, and culture in research (part 1) - a multi-actor qualitative study on success in science.","authors":"Noémie Aubert Bonn, Wim Pinxten","doi":"10.1186/s41073-020-00104-0","DOIUrl":"10.1186/s41073-020-00104-0","url":null,"abstract":"<p><strong>Background: </strong>Success shapes the lives and careers of scientists. But success in science is difficult to define, let alone to translate in indicators that can be used for assessment. In the past few years, several groups expressed their dissatisfaction with the indicators currently used for assessing researchers. But given the lack of agreement on what should constitute success in science, most propositions remain unanswered. This paper aims to complement our understanding of success in science and to document areas of tension and conflict in research assessments.</p><p><strong>Methods: </strong>We conducted semi-structured interviews and focus groups with policy makers, funders, institution leaders, editors or publishers, research integrity office members, research integrity community members, laboratory technicians, researchers, research students, and former-researchers who changed career to inquire on the topics of success, integrity, and responsibilities in science. We used the Flemish biomedical landscape as a baseline to be able to grasp the views of interacting and complementary actors in a system setting.</p><p><strong>Results: </strong>Given the breadth of our results, we divided our findings in a two-paper series, with the current paper focusing on what defines and determines success in science. Respondents depicted success as a multi-factorial, context-dependent, and mutable construct. Success appeared to be an interaction between characteristics from the researcher (Who), research outputs (What), processes (How), and luck. 
Interviewees noted that current research assessments overvalued outputs but largely ignored the processes deemed essential for research quality and integrity. Interviewees suggested that science needs a diversity of indicators that are transparent, robust, and valid, and that also allow a balanced and diverse view of success; that assessment of scientists should not blindly depend on metrics but also value human input; and that quality should be valued over quantity.</p><p><strong>Conclusions: </strong>The objective of research assessments may be to encourage good researchers, to benefit society, or simply to advance science. Yet we show that current assessments fall short on each of these objectives. Open and transparent inter-actor dialogue is needed to understand what research assessments aim for and how they can best achieve their objective.</p><p><strong>Study registration: </strong>osf.io/33v3m.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"1"},"PeriodicalIF":0.0,"publicationDate":"2021-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7807516/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38816118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-11 | DOI: 10.1186/s41073-020-00103-1
Michael Kalichman
Background: Research on research integrity has tended to focus on frequency of research misconduct and factors that might induce someone to commit research misconduct. A definitive answer to the first question has been elusive, but it remains clear that any research misconduct is too much. Answers to the second question are so diverse, it might be productive to ask a different question: What about how research is done allows research misconduct to occur?
Methods: With that question in mind, research integrity officers (RIOs) of the 62 members of the American Association of Universities were invited to complete a brief survey about their most recent instance of a finding of research misconduct. Respondents were asked whether one or more good practices of research (e.g., openness and transparency, keeping good research records) were present in their case of research misconduct.
Results: Twenty-four (24) of the respondents (39% response rate) indicated they had dealt with at least one finding of research misconduct and answered the survey questions. Over half of these RIOs reported that their case of research misconduct had occurred in an environment in which at least nine of the ten listed good practices of research were deficient.
Conclusions: These results are not evidence for a causal effect of poor practices, but it is arguable that committing research misconduct would be more difficult if not impossible in research environments adhering to good practices of research.
{"title":"Survey study of research integrity officers' perceptions of research practices associated with instances of research misconduct.","authors":"Michael Kalichman","doi":"10.1186/s41073-020-00103-1","DOIUrl":"https://doi.org/10.1186/s41073-020-00103-1","url":null,"abstract":"<p><strong>Background: </strong>Research on research integrity has tended to focus on frequency of research misconduct and factors that might induce someone to commit research misconduct. A definitive answer to the first question has been elusive, but it remains clear that any research misconduct is too much. Answers to the second question are so diverse, it might be productive to ask a different question: What about how research is done allows research misconduct to occur?</p><p><strong>Methods: </strong>With that question in mind, research integrity officers (RIOs) of the 62 members of the American Association of Universities were invited to complete a brief survey about their most recent instance of a finding of research misconduct. Respondents were asked whether one or more good practices of research (e.g., openness and transparency, keeping good research records) were present in their case of research misconduct.</p><p><strong>Results: </strong>Twenty-four (24) of the respondents (39% response rate) indicated they had dealt with at least one finding of research misconduct and answered the survey questions. 
Over half of these RIOs reported that their case of research misconduct had occurred in an environment in which at least nine of the ten listed good practices of research were deficient.</p><p><strong>Conclusions: </strong>These results are not evidence for a causal effect of poor practices, but it is arguable that committing research misconduct would be more difficult if not impossible in research environments adhering to good practices of research.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 1","pages":"17"},"PeriodicalIF":0.0,"publicationDate":"2020-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-020-00103-1","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38696768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
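The survey tabulation described above can be sketched in a few lines. The invitation count of 62 comes from the abstract; the practice names and the two example cases are stand-ins, since the study's actual ten-item list is not reproduced here:

```python
# Hypothetical practice names; the survey's actual ten-item list is not
# reproduced in this listing.
GOOD_PRACTICES = [
    "openness_and_transparency", "good_record_keeping", "mentoring",
    "supervision", "data_sharing", "authorship_policies", "peer_feedback",
    "integrity_training", "protocol_review", "conflict_disclosure",
]

def summarize(cases, n_invited=62):
    """Response rate and share of cases with >= 9 of 10 practices deficient."""
    deficient = [
        sum(1 for p in GOOD_PRACTICES if not case.get(p, False))
        for case in cases
    ]
    response_rate = len(cases) / n_invited
    mostly_deficient = sum(1 for d in deficient if d >= 9) / len(cases)
    return response_rate, mostly_deficient

# Two synthetic cases: one environment lacking nine practices, one lacking two.
cases = [
    {p: False for p in GOOD_PRACTICES} | {"mentoring": True},
    {p: True for p in GOOD_PRACTICES} | {"supervision": False, "mentoring": False},
]
rate, share = summarize(cases)
print(f"response rate: {rate:.0%}; cases with >=9 deficient practices: {share:.0%}")
```

With the real data, `cases` would hold one record per RIO-reported misconduct finding, marking which good practices were present in that research environment.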
Pub Date: 2020-12-01; DOI: 10.1186/s41073-020-00101-3
Clarissa F D Carneiro, Victor G S Queiroz, Thiago C Moulin, Carlos A M Carvalho, Clarissa B Haas, Danielle Rayêe, David E Henshall, Evandro A De-Souza, Felippe E Amorim, Flávia Z Boos, Gerson D Guercio, Igor R Costa, Karina L Hajdu, Lieve van Egmond, Martin Modrák, Pedro B Tan, Richard J Abdill, Steven J Burgess, Sylvia F S Guerra, Vanessa T Bortoluzzi, Olavo B Amaral
Background: Preprint usage is growing rapidly in the life sciences; however, questions remain on the relative quality of preprints when compared to published articles. An objective dimension of quality that is readily measurable is completeness of reporting, as transparency can improve the reader's ability to independently interpret data and reproduce findings.
Methods: In this observational study, we initially compared independent samples of articles published on bioRxiv and in PubMed-indexed journals in 2016 using a quality-of-reporting questionnaire. We then performed paired comparisons between bioRxiv preprints and their own peer-reviewed versions in journals.
Results: Peer-reviewed articles had, on average, higher quality of reporting than preprints, although the difference was small, with absolute differences of 5.0% [95% CI 1.4, 8.6] and 4.7% [95% CI 2.4, 7.0] of reported items in the independent samples and paired sample comparison, respectively. There were larger differences favoring peer-reviewed articles in subjective ratings of how clearly titles and abstracts presented the main findings and how easy it was to locate relevant reporting information. Changes in reporting from preprints to peer-reviewed versions did not correlate with the impact factor of the publication venue or with the time lag from bioRxiv to journal publication.
Conclusions: Our results suggest that, on average, publication in a peer-reviewed journal is associated with improvement in quality of reporting. They also show that quality of reporting in preprints in the life sciences is within a similar range as that of peer-reviewed articles, albeit slightly lower on average, supporting the idea that preprints should be considered valid scientific contributions.
{"title":"Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature.","authors":"Clarissa F D Carneiro, Victor G S Queiroz, Thiago C Moulin, Carlos A M Carvalho, Clarissa B Haas, Danielle Rayêe, David E Henshall, Evandro A De-Souza, Felippe E Amorim, Flávia Z Boos, Gerson D Guercio, Igor R Costa, Karina L Hajdu, Lieve van Egmond, Martin Modrák, Pedro B Tan, Richard J Abdill, Steven J Burgess, Sylvia F S Guerra, Vanessa T Bortoluzzi, Olavo B Amaral","doi":"10.1186/s41073-020-00101-3","DOIUrl":"10.1186/s41073-020-00101-3","url":null,"abstract":"<p><strong>Background: </strong>Preprint usage is growing rapidly in the life sciences; however, questions remain on the relative quality of preprints when compared to published articles. An objective dimension of quality that is readily measurable is completeness of reporting, as transparency can improve the reader's ability to independently interpret data and reproduce findings.</p><p><strong>Methods: </strong>In this observational study, we initially compared independent samples of articles published in bioRxiv and in PubMed-indexed journals in 2016 using a quality of reporting questionnaire. After that, we performed paired comparisons between preprints from bioRxiv to their own peer-reviewed versions in journals.</p><p><strong>Results: </strong>Peer-reviewed articles had, on average, higher quality of reporting than preprints, although the difference was small, with absolute differences of 5.0% [95% CI 1.4, 8.6] and 4.7% [95% CI 2.4, 7.0] of reported items in the independent samples and paired sample comparison, respectively. There were larger differences favoring peer-reviewed articles in subjective ratings of how clearly titles and abstracts presented the main findings and how easy it was to locate relevant reporting information. 
Changes in reporting from preprints to peer-reviewed versions did not correlate with the impact factor of the publication venue or with the time lag from bioRxiv to journal publication.</p><p><strong>Conclusions: </strong>Our results suggest that, on average, publication in a peer-reviewed journal is associated with improvement in quality of reporting. They also show that quality of reporting in preprints in the life sciences is within a similar range as that of peer-reviewed articles, albeit slightly lower on average, supporting the idea that preprints should be considered valid scientific contributions.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 1","pages":"16"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7706207/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38699770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
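The paired comparison reported above can be illustrated with a short sketch: score each article as the fraction of checklist items reported, then estimate the mean preprint-to-published difference with an interval. The scores below are synthetic, and the normal-approximation CI is a simplification standing in for whatever interval estimate the study actually used:

```python
import math

def paired_difference_ci(preprint_scores, published_scores, z=1.96):
    """Mean paired difference (published - preprint) with a ~95% normal CI."""
    diffs = [b - a for a, b in zip(preprint_scores, published_scores)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    half_width = z * math.sqrt(var / n)
    return mean, (mean - half_width, mean + half_width)

# Synthetic fractions of reported checklist items for five hypothetical pairs.
preprints = [0.70, 0.65, 0.80, 0.75, 0.60]
published = [0.75, 0.70, 0.82, 0.80, 0.68]
mean, (lo, hi) = paired_difference_ci(preprints, published)
print(f"mean difference: {mean:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")
```

A difference expressed this way is directly comparable to the 4.7% [95% CI 2.4, 7.0] paired-sample figure quoted in the abstract.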
Pub Date: 2020-11-13; DOI: 10.1186/s41073-020-00102-2
Robin Mason
Background: Integrating a sex and gender lens is increasingly recognized as important in health research. Past failures to adequately consider sex in drug development, for example, led to medications that were metabolized differently by females and proved harmful or ineffective for them. Including both males and females in study populations is important but not sufficient; health, access to healthcare, and the treatment provided are also influenced by gender: the socially mediated roles, responsibilities, and behaviors of boys, girls, women, and men. Despite understanding the relevance of sex and gender to health research, integrating this lens into study designs can still be challenging. Identified here are nine opportunities to address sex and gender and thereby strengthen research proposals.
Methods: Ontario investigators were invited to submit a draft of their health research proposal to the Sex and Gender Research Support Service (SGRSS) at Women's College Hospital in Toronto, Ontario. The service works to build capacity for the integration of sex, gender, and other identity factors in health research. Using the SAGER Guidelines and the METRICS for the Study of Sex and Gender in Human Participants as guides, proposals were reviewed to enhance their sex and gender considerations. A content analysis of the feedback provided to these investigators was subsequently completed.
Results: Nearly 100 study proposals were reviewed, and investigators were provided with suggestions on how to enhance their proposals. Analyzing the feedback across the reviewed studies revealed commonly overlooked opportunities to elevate consideration of sex and gender. These were organized into nine suggestions that mirror the sections of a research proposal.
Conclusion: Health researchers are often challenged on how to integrate a sex and gender lens into their work. Reviews completed across a range of health research studies show there are several commonly overlooked opportunities to do better in this regard. Nine ways to improve the integration of a sex and gender lens in health research proposals have been identified.
{"title":"Doing better: eleven ways to improve the integration of sex and gender in health research proposals.","authors":"Robin Mason","doi":"10.1186/s41073-020-00102-2","DOIUrl":"https://doi.org/10.1186/s41073-020-00102-2","url":null,"abstract":"<p><strong>Background: </strong>Integrating a sex and gender lens is increasingly recognized as important in health research studies. Past failures to adequately consider sex in drug development, for example, led to medications that were metabolized differently, proved harmful, or ineffective, for females. Including both males and females in study populations is important but not sufficient; health, access to healthcare, and treatment provided are also influenced by gender, the socially mediated roles, responsibilities, and behaviors of boys, girls, women and men. Despite understanding the relevance of sex and gender to health research, integrating this lens into study designs can still be challenging. Identified here, are nine opportunities to address sex and gender and thereby strengthen research proposals.</p><p><strong>Methods: </strong>Ontario investigators were invited to submit a draft of their health research proposal to the Sex and Gender Research Support Service (SGRSS) at Women's College Hospital in Toronto, Ontario. The service works to build capacity on the integration of sex, gender, and other identity factors, in health research. Using the SAGER Guidelines and the METRICS for the Study of Sex and Gender in Human Participants as guides, proposals were reviewed to enhance their sex and gender considerations. Content analysis of the feedback provided these investigators was subsequently completed.</p><p><strong>Results: </strong>Nearly 100 hundred study proposals were reviewed and investigators provided with suggestions on how to enhance their proposal. Analyzing the feedback provided across the reviewed studies revealed commonly overlooked opportunities to elevate consideration of sex and gender. 
These were organized into nine suggestions to mirror the sections of a research proposal.</p><p><strong>Conclusion: </strong>Health researchers are often challenged on how to integrate a sex and gender lens into their work. Reviews completed across a range of health research studies show there are several commonly overlooked opportunities to do better in this regard. Nine ways to improve the integration of a sex and gender lens in health research proposals have been identified.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 1","pages":"15"},"PeriodicalIF":0.0,"publicationDate":"2020-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-020-00102-2","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38351104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-22; eCollection Date: 2020-01-01; DOI: 10.1186/s41073-020-00100-4
Courtenay Cavanaugh, Yara Abu Hussein
Background: Sex and gender influence individuals' psychology, but are often overlooked in psychological science. The sex and gender equity in research (SAGER) guidelines provide instruction for addressing sex and gender within five sections of a manuscript (i.e., title/abstract, introduction, methods, results, and discussion) (Heidari et al., Res Integr Peer Rev 1:1-9, 2016).
Methods: We examined whether the 89 journals published by the American Psychological Association provide explicit instruction for authors to address sex and gender within these five sections. Both authors reviewed each journal's instructions to authors for the words "sex" and "gender" and noted explicit instruction pertaining to these five sections.
Results: Only 8 journals (9.0%) instructed authors to address sex/gender within the abstract, introduction, and/or methods sections. No journals instructed authors to address sex and gender in the results or discussion sections.
Conclusion: These journals could increase sex/gender equity and improve the reproducibility of psychological science by instructing authors to follow the SAGER guidelines.
{"title":"Do journals instruct authors to address sex and gender in psychological science?","authors":"Courtenay Cavanaugh, Yara Abu Hussein","doi":"10.1186/s41073-020-00100-4","DOIUrl":"10.1186/s41073-020-00100-4","url":null,"abstract":"<p><strong>Background: </strong>Sex and gender influence individuals' psychology, but are often overlooked in psychological science. The sex and gender equity in research (SAGER) guidelines provide instruction for addressing sex and gender within five sections of a manuscript (i.e., title/abstract, introduction, methods, results, and discussion) (Heidari et al., Res Integr Peer Rev 1:1-9, 2016).</p><p><strong>Methods: </strong>We examined whether the 89 journals published by the American Psychological Association provide explicit instruction for authors to address sex and gender within these five sections. Both authors reviewed the journal instructions to authors for the words \"sex,\" and \"gender,\" and noted explicit instruction pertaining to these five sections.</p><p><strong>Results: </strong>Only 8 journals (9.0%) instructed authors to address sex/gender within the abstract, introduction, and/or methods sections. 
No journals instructed authors to address sex and gender in the results or discussion sections.</p><p><strong>Conclusion: </strong>These journals could increase sex/gender equity and improve the reproducibility of psychological science by instructing authors to follow the SAGER guidelines.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"5 ","pages":"14"},"PeriodicalIF":0.0,"publicationDate":"2020-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-020-00100-4","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38534138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
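A first-pass version of the review procedure above can be sketched as a keyword scan over a journal's author instructions, flagging which manuscript sections are mentioned in the same sentence as "sex" or "gender". The real review was done manually by both authors; this simplified pass, with a hypothetical snippet of instruction text, only illustrates the idea:

```python
import re

# SAGER-relevant manuscript sections, per the abstract above.
SECTIONS = ["title", "abstract", "introduction", "methods", "results", "discussion"]

def sections_with_sex_gender_guidance(instructions: str) -> set[str]:
    """Return manuscript sections mentioned in a sentence that also mentions sex/gender."""
    found = set()
    for sentence in re.split(r"(?<=[.!?])\s+", instructions.lower()):
        if re.search(r"\b(sex|gender)\b", sentence):
            found.update(s for s in SECTIONS if s in sentence)
    return found

# Hypothetical instructions-to-authors excerpt.
example = ("Report the sex and gender of participants in the methods section. "
           "Abstracts must be under 250 words.")
print(sections_with_sex_gender_guidance(example))  # prints {'methods'}
```

A keyword match is only a screening step: as in the study, a human reader would still need to judge whether the instruction is genuinely explicit rather than incidental.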