Pub Date : 2025-04-07 DOI: 10.1186/s41073-025-00161-3
Nicholas Lo Vecchio
Background: While some recent studies have looked at large language model (LLM) use in peer review at the corpus level, to date there have been few examinations of instances of AI-generated reviews in their social context. The goal of this first-person account is to present my experience of receiving two anonymous peer review reports that I believe were produced using generative AI, as well as lessons learned from that experience.
Methods: This is a case report on the timeline of the incident, and my and the journal's actions following it. Supporting evidence includes text patterns in the reports, online AI detection tools and ChatGPT simulations; recommendations are offered for others who may find themselves in a similar situation. The primary research limitation of this article is that it is based on one individual's personal experience.
Results: After I alleged the use of generative AI in December 2023, two months of back-and-forth ensued between me and the journal, ending in my withdrawal of the submission. The journal denied any ethical breach, without taking an explicit position on the allegations of LLM use. Based on this experience, I recommend that authors engage in dialogue with journals on AI use in peer review prior to article submission; where undisclosed AI use is suspected, authors should proactively amass evidence, request an investigation protocol, escalate the matter as needed, involve independent bodies where possible, and share their experience with fellow researchers.
Conclusions: Journals need to promptly adopt transparent policies on LLM use in peer review, in particular requiring disclosure. Open peer review, in which the identities of all stakeholders are declared, might safeguard against LLM misuse, but accountability in the AI era is needed from all parties.
{"title":"Personal experience with AI-generated peer reviews: a case study.","authors":"Nicholas Lo Vecchio","doi":"10.1186/s41073-025-00161-3","DOIUrl":"10.1186/s41073-025-00161-3","url":null,"abstract":"<p><strong>Background: </strong>While some recent studies have looked at large language model (LLM) use in peer review at the corpus level, to date there have been few examinations of instances of AI-generated reviews in their social context. The goal of this first-person account is to present my experience of receiving two anonymous peer review reports that I believe were produced using generative AI, as well as lessons learned from that experience.</p><p><strong>Methods: </strong>This is a case report on the timeline of the incident, and my and the journal's actions following it. Supporting evidence includes text patterns in the reports, online AI detection tools and ChatGPT simulations; recommendations are offered for others who may find themselves in a similar situation. The primary research limitation of this article is that it is based on one individual's personal experience.</p><p><strong>Results: </strong>After alleging the use of generative AI in December 2023, two months of back-and-forth ensued between myself and the journal, leading to my withdrawal of the submission. The journal denied any ethical breach, without taking an explicit position on the allegations of LLM use. Based on this experience, I recommend that authors engage in dialogue with journals on AI use in peer review prior to article submission; where undisclosed AI use is suspected, authors should proactively amass evidence, request an investigation protocol, escalate the matter as needed, involve independent bodies where possible, and share their experience with fellow researchers.</p><p><strong>Conclusions: </strong>Journals need to promptly adopt transparent policies on LLM use in peer review, in particular requiring disclosure. Open peer review where identities of all stakeholders are declared might safeguard against LLM misuse, but accountability in the AI era is needed from all parties.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"4"},"PeriodicalIF":7.2,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11974187/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143796279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-31 DOI: 10.1186/s41073-025-00160-4
Johanna Goldberg, Heather Snijdewind, Céline Soudant, Kendra Godwin, Robin O'Hanlon
Background: Journals and publishers vary in the methods they use to detect plagiarism, when they implement these methods, and how they respond when plagiarism is suspected both before and after publication. This study aims to determine the policies and procedures of oncology journals for detecting and responding to suspected plagiarism in unpublished and published manuscripts.
Methods: We reviewed the website of each journal in the Oncology category of Journal Citation Reports' Science Citation Index Expanded (SCIE) to determine how they detect and respond to suspected plagiarism. We collected data from each journal's website, or from publisher webpages directly linked from journal websites, to ascertain what information about plagiarism policies and procedures is publicly available.
Results: There are 241 extant oncology journals included in SCIE, of which 224 (92.95%) have a plagiarism policy or mention plagiarism. Text similarity software or other plagiarism checking methods are mentioned by 207 of these (92.41%, and 85.89% of the 241 total journals examined). These text similarity checks occur most frequently at manuscript submission or initial editorial review. Journal or journal-linked publisher webpages frequently report following guidelines from the Committee on Publication Ethics (COPE) (135, 56.01%).
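The two denominators behind these percentages are easy to misread; as an illustrative check (not the authors' code, with invented variable names), the reported figures can be reproduced as follows:

```python
# Reported counts from the abstract (variable names are illustrative).
total_journals = 241       # oncology journals in SCIE
with_policy = 224          # journals with a plagiarism policy or mention of plagiarism
mention_checks = 207       # journals mentioning text-similarity software or other checks

print(round(with_policy / total_journals * 100, 2))    # 92.95 -> share of all 241 journals
print(round(mention_checks / with_policy * 100, 2))    # 92.41 -> share of the 224 with a policy
print(round(mention_checks / total_journals * 100, 2)) # 85.89 -> share of all 241 journals
```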
Conclusions: Oncology journals report similar methods for identifying and responding to plagiarism, with some variation based on the breadth, location, and timing of plagiarism detection. Journal policies and procedures are often informed by guidance from professional organizations, like COPE.
{"title":"How do oncology journals approach plagiarism? A website review.","authors":"Johanna Goldberg, Heather Snijdewind, Céline Soudant, Kendra Godwin, Robin O'Hanlon","doi":"10.1186/s41073-025-00160-4","DOIUrl":"10.1186/s41073-025-00160-4","url":null,"abstract":"<p><strong>Background: </strong>Journals and publishers vary in the methods they use to detect plagiarism, when they implement these methods, and how they respond when plagiarism is suspected both before and after publication. This study aims to determine the policies and procedures of oncology journals for detecting and responding to suspected plagiarism in unpublished and published manuscripts.</p><p><strong>Methods: </strong>We reviewed the websites of each journal in the Oncology category of Journal Citation Reports' Science Citation Index Expanded (SCIE) to determine how they detect and respond to suspected plagiarism. We collected data from each journal's website, or publisher webpages directly linked from journal websites, to ascertain what information about plagiarism policies and procedures is publicly available.</p><p><strong>Results: </strong>There are 241 extant oncology journals included in SCIE, of which 224 (92.95%) have a plagiarism policy or mention plagiarism. Text similarity software or other plagiarism checking methods are mentioned by 207 of these (92.41%, and 85.89% of the 241 total journals examined). These text similarity checks occur most frequently at manuscript submission or initial editorial review. Journal or journal-linked publisher webpages frequently report following guidelines from the Committee on Publication Ethics (COPE) (135, 56.01%).</p><p><strong>Conclusions: </strong>Oncology journals report similar methods for identifying and responding to plagiarism, with some variation based on the breadth, location, and timing of plagiarism detection. Journal policies and procedures are often informed by guidance from professional organizations, like COPE.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"3"},"PeriodicalIF":7.2,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11956406/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143756243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-05 DOI: 10.1186/s41073-025-00159-x
Paula Starke, Zhentian Zhang, Hannah Papmeier, Dawid Pieper, Tim Mathes
Background: We assess whether there are indications that results of registry-based studies comparing the effectiveness of interventions might be selectively missing depending on statistical significance (p < 0.05).
Methods: Eligibility criteria: Sample of cohort-type studies that used data from a patient registry, compared two study arms for assessing a medical intervention, and reported an effect for a binary outcome. Information sources: We searched PubMed to identify registries in seven different medical specialties in 2022/23. Subsequently, we included all studies that satisfied the eligibility criteria for each of the identified registries and collected p-values from these studies. Synthesis of results: We plotted the cumulative distribution of p-values and a histogram of absolute z-scores for visual inspection of selectively missing results because of p-hacking, selective reporting, or publication bias. In addition, we tested for publication bias by applying a caliper test.
Results: Included studies: Sample of 150 registry-based cohort-type studies. Synthesis of results: The cumulative distribution of p-values displays an abrupt, heavy increase just below the significance threshold of 0.05, while the distribution above the threshold shows a slow, gradual increase. The p-value of the caliper test with a 10% caliper was 0.011 (k = 2, N = 13).
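For readers unfamiliar with the caliper test, the reported value is consistent with a simple one-sided binomial calculation. The sketch below assumes, as is standard for caliper tests, that k = 2 of the N = 13 results falling within the 10% caliper landed on the non-significant side of the threshold, and that under no selective reporting each result is equally likely to fall on either side; it is an illustration, not the authors' code:

```python
from scipy.stats import binom

# Caliper test: of N results just around the significance threshold, k landed on the
# non-significant side. Under the null of no selective reporting, X ~ Binomial(N, 0.5).
k, N = 2, 13
p_one_sided = binom.cdf(k, N, 0.5)  # P(X <= 2) = (1 + 13 + 78) / 2**13
print(round(p_one_sided, 3))        # 0.011, matching the reported caliper-test p-value
```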
Conclusions: We found that the results of registry-based studies might be selectively missing. Results from registry-based studies comparing medical interventions should be interpreted very cautiously, as positive findings could result from p-hacking, publication bias, or selective reporting. Prospective registration of such studies is necessary and should be made mandatory both in regulatory contexts and for publication in journals. Further research is needed to determine the main reasons for selectively missing results, to support the development and implementation of more specific methods for preventing them.
{"title":"Analysis of indications for selectively missing results in comparative registry-based studies in medicine: a meta-research study.","authors":"Paula Starke, Zhentian Zhang, Hannah Papmeier, Dawid Pieper, Tim Mathes","doi":"10.1186/s41073-025-00159-x","DOIUrl":"10.1186/s41073-025-00159-x","url":null,"abstract":"<p><strong>Background: </strong>We assess if there are indications that results of registry-based studies comparing the effectiveness of interventions might be selectively missing depending on the statistical significance (p < 0.05).</p><p><strong>Methods: </strong>Eligibility criteria Sample of cohort type studies that used data from a patient registry, compared two study arms for assessing a medical intervention, and reported an effect for a binary outcome. Information sources We searched PubMed to identify registries in seven different medical specialties in 2022/23. Subsequently, we included all studies that satisfied the eligibility criteria for each of the identified registries and collected p-values from these studies. Synthesis of results We plotted the cumulative distribution of p-values and a histogram of absolute z-scores for visual inspection of selectively missing results because of p-hacking, selective reporting, or publication bias. In addition, we tested for publication bias by applying a caliper test.</p><p><strong>Results: </strong>Included studies Sample of 150 registry-based cohort type studies. Synthesis of results The cumulative distribution of p-values displays an abrupt, heavy increase just below the significance threshold of 0.05 while the distribution above the threshold shows a slow, gradual increase. The p-value of the caliper test with a 10% caliper was 0.011 (k = 2, N = 13).</p><p><strong>Conclusions: </strong>We found that the results of registry-based studies might be selectively missing. Results from registry-based studies comparing medical interventions should be interpreted very cautiously, as positive findings could be a result from p-hacking, publication bias, or selective reporting. Prospective registration of such studies is necessary and should be made mandatory both in regulatory contexts and for publication in journals. Further research is needed to determine the main reasons for selectively missing results to support the development and implementation of more specific methods for preventing selectively missing results.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"2"},"PeriodicalIF":7.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11881244/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143560279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-02-28 DOI: 10.1186/s41073-025-00158-y
Daivat Bhavsar, Laura Duffy, Hamin Jo, Cynthia Lokker, R Brian Haynes, Alfonso Iorio, Ana Marusic, Jeremy Y Ng
Background: Artificial intelligence (AI) chatbots are novel computer programs that can generate text or content in a natural language format. Academic publishers are adapting to the transformative role of AI chatbots in producing or facilitating scientific research. This study aimed to examine the policies established by scientific, technical, and medical academic publishers for defining and regulating the authors' responsible use of AI chatbots.
Methods: This study performed a cross-sectional audit on the publicly available policies of 162 academic publishers, indexed as members of the International Association of the Scientific, Technical, and Medical Publishers (STM). Data extraction of publicly available policies on the webpages of all STM academic publishers was performed independently, in duplicate, with content analysis reviewed by a third contributor (September 2023-December 2023). Data was categorized into policy elements, such as 'proofreading' and 'image generation'. Counts and percentages of 'yes' (i.e., permitted), 'no', and 'no available information' (NAI) were established for each policy element.
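As an illustration of the bookkeeping behind such counts and percentages, here is a minimal sketch with invented policy elements and ratings (placeholders only, not the study's data):

```python
from collections import Counter

# One rating per publisher per policy element: 'yes' (permitted), 'no', or 'NAI'.
# The ratings below are hypothetical placeholders to show the tallying only.
ratings = {
    "proofreading":     ["yes", "yes", "NAI", "no"],
    "image generation": ["no", "NAI", "NAI", "no"],
}

for element, values in ratings.items():
    counts = Counter(values)
    n = len(values)
    summary = ", ".join(
        f"{cat}: {counts.get(cat, 0)} ({counts.get(cat, 0) / n:.0%})"
        for cat in ("yes", "no", "NAI")
    )
    print(f"{element} -> {summary}")
```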
Results: A total of 56/162 (34.6%) STM academic publishers had a publicly available policy guiding the authors' use of AI chatbots. No policy allowed authorship for AI chatbots (or any other AI tool). Most (49/56, or 87.5%) required specific disclosure of AI chatbot use. Four policies/publishers placed a complete ban on the use of AI chatbots by authors.
Conclusions: Only a third of STM academic publishers had publicly available policies as of December 2023. A re-examination of all STM members in 12-18 months may reveal evolving approaches to AI chatbot use, with more academic publishers having adopted a policy.
{"title":"Policies on artificial intelligence chatbots among academic publishers: a cross-sectional audit.","authors":"Daivat Bhavsar, Laura Duffy, Hamin Jo, Cynthia Lokker, R Brian Haynes, Alfonso Iorio, Ana Marusic, Jeremy Y Ng","doi":"10.1186/s41073-025-00158-y","DOIUrl":"10.1186/s41073-025-00158-y","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) chatbots are novel computer programs that can generate text or content in a natural language format. Academic publishers are adapting to the transformative role of AI chatbots in producing or facilitating scientific research. This study aimed to examine the policies established by scientific, technical, and medical academic publishers for defining and regulating the authors' responsible use of AI chatbots.</p><p><strong>Methods: </strong>This study performed a cross-sectional audit on the publicly available policies of 162 academic publishers, indexed as members of the International Association of the Scientific, Technical, and Medical Publishers (STM). Data extraction of publicly available policies on the webpages of all STM academic publishers was performed independently, in duplicate, with content analysis reviewed by a third contributor (September 2023-December 2023). Data was categorized into policy elements, such as 'proofreading' and 'image generation'. Counts and percentages of 'yes' (i.e., permitted), 'no', and 'no available information' (NAI) were established for each policy element.</p><p><strong>Results: </strong>A total of 56/162 (34.6%) STM academic publishers had a publicly available policy guiding the authors' use of AI chatbots. No policy allowed authorship for AI chatbots (or other AI tool). Most (49/56 or 87.5%) required specific disclosure of AI chatbot use. Four policies/publishers placed a complete ban on the use of AI chatbots by authors.</p><p><strong>Conclusions: </strong>Only a third of STM academic publishers had publicly available policies as of December 2023. A re-examination of all STM members in 12-18 months may uncover evolving approaches toward AI chatbot use with more academic publishers having a policy.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"1"},"PeriodicalIF":7.2,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11869395/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143532223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-23 DOI: 10.1186/s41073-024-00157-5
Samina Hamilton, Aaron B Bernstein, Graham Blakey, Vivien Fagan, Tracy Farrow, Debbie Jordan, Walther Seiler, Anna Shannon, Art Gertel
{"title":"Publisher Correction: Developing the Clarity and Openness in Reporting: E3-based (CORE) Reference user manual for creation of clinical study reports in the era of clinical trial transparency.","authors":"Samina Hamilton, Aaron B Bernstein, Graham Blakey, Vivien Fagan, Tracy Farrow, Debbie Jordan, Walther Seiler, Anna Shannon, Art Gertel","doi":"10.1186/s41073-024-00157-5","DOIUrl":"10.1186/s41073-024-00157-5","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"16"},"PeriodicalIF":7.2,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11668038/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142883969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-20 DOI: 10.1186/s41073-024-00154-8
Adam G Dunn, Enrico Coiera, Kenneth D Mandl, Florence T Bourgeois
{"title":"Publisher Correction: Conflict of interest disclosure in biomedical research: a review of current practices, biases, and the role of public registries in improving transparency.","authors":"Adam G Dunn, Enrico Coiera, Kenneth D Mandl, Florence T Bourgeois","doi":"10.1186/s41073-024-00154-8","DOIUrl":"10.1186/s41073-024-00154-8","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"13"},"PeriodicalIF":7.2,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11660574/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142873623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-20 DOI: 10.1186/s41073-024-00156-6
Paul E van der Vet, Harm Nijveen
{"title":"Publisher Correction: Propagation of errors in citation networks: a study involving the entire citation network of a widely cited paper published in, and later retracted from, the journal Nature.","authors":"Paul E van der Vet, Harm Nijveen","doi":"10.1186/s41073-024-00156-6","DOIUrl":"10.1186/s41073-024-00156-6","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"14"},"PeriodicalIF":7.2,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11660461/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142873625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-20 DOI: 10.1186/s41073-024-00155-7
Shirin Heidari, Thomas F Babor, Paola De Castro, Sera Tort, Mirjam Curno
{"title":"Publisher Correction: Sex and Gender Equity in Research: rationale for the SAGER guidelines and recommended use.","authors":"Shirin Heidari, Thomas F Babor, Paola De Castro, Sera Tort, Mirjam Curno","doi":"10.1186/s41073-024-00155-7","DOIUrl":"10.1186/s41073-024-00155-7","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"15"},"PeriodicalIF":7.2,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11660825/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142873627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-10-14 DOI: 10.1186/s41073-024-00151-x
Robin Brooker, Nick Allum
Background: This study investigates the determinants of engagement in questionable research practices (QRPs), focusing on both individual-level factors (such as scholarly field, commitment to scientific norms, gender, contract type, and career stage) and institution-level factors (including industry type, researchers' perceptions of their research culture, and awareness of institutional policies on research integrity).
Methods: Using a multi-level modelling approach, we analyse data from an international survey of researchers working across disciplinary fields to estimate the effect of these factors on QRP engagement.
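The abstract does not specify the exact model or software, but a multi-level specification of this kind is often fitted as a random-intercept model, for example with statsmodels; the variable names and file below are hypothetical placeholders, not the authors' data or code:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per respondent, with a QRP engagement score,
# individual-level predictors, and an institution identifier for the random intercept.
df = pd.read_csv("qrp_survey.csv")  # placeholder file name

model = smf.mixedlm(
    "qrp_score ~ career_stage + contract_type + gender + field + norm_commitment",
    data=df,
    groups=df["institution"],  # institution-level random intercept
)
result = model.fit()
print(result.summary())
```

A country-level grouping could be added on top of this (for instance via variance components) to apportion the between-institution and between-country variance referred to in the results.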
Results: Our findings indicate that contract type, career stage, academic field, adherence to scientific norms and gender significantly predict QRP engagement. At the institution level, factors such as working outside a collegial culture, experiencing harmful publication pressure, and the presence of safeguards against integrity breaches show small associations with QRP engagement. Only a minimal amount of variance in QRP engagement is attributable to differences between institutions and countries.
Conclusions: We discuss the implications of these findings for developing effective interventions to reduce QRPs, highlighting the importance of addressing both individual and institutional factors in efforts to foster research integrity.
{"title":"Investigating the links between questionable research practices, scientific norms and organisational culture.","authors":"Robin Brooker, Nick Allum","doi":"10.1186/s41073-024-00151-x","DOIUrl":"https://doi.org/10.1186/s41073-024-00151-x","url":null,"abstract":"<p><strong>Background: </strong>This study investigates the determinants of engagement in questionable research practices (QRPs), focusing on both individual-level factors (such as scholarly field, commitment to scientific norms, gender, contract type, and career stage) and institution-level factors (including industry type, researchers' perceptions of their research culture, and awareness of institutional policies on research integrity).</p><p><strong>Methods: </strong>Using a multi-level modelling approach, we analyse data from an international survey of researchers working across disciplinary fields to estimate the effect of these factors on QRP engagement.</p><p><strong>Results: </strong>Our findings indicate that contract type, career stage, academic field, adherence to scientific norms and gender significantly predict QRP engagement. At the institution level, factors such as being outside of a collegial culture and experiencing harmful publication pressure, and the presence of safeguards against integrity breaches have small associations. Only a minimal amount of variance in QRP engagement is attributable to differences between institutions and countries.</p><p><strong>Conclusions: </strong>We discuss the implications of these findings for developing effective interventions to reduce QRPs, highlighting the importance of addressing both individual and institutional factors in efforts to foster research integrity.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"12"},"PeriodicalIF":7.2,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11472529/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142482695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-10-07 DOI: 10.1186/s41073-024-00152-w
Melanie Sterian, Anmol Samra, Kusala Pussegoda, Tricia Corrin, Mavra Qamar, Austyn Baumeister, Izza Israr, Lisa Waddell
Background: Preprints are scientific articles that have not undergone the peer-review process. They allow the latest evidence to be shared rapidly; however, it is unclear whether they can be confidently used for decision-making during a public health emergency. This study aimed to compare the data and quality of preprints released during the first four months of the 2022 mpox outbreak to their published versions.
Methods: Eligible preprints (n = 76) posted between May and August 2022 were identified through an established mpox literature database and followed to July 2024 for changes in publication status. The quality of preprints and published studies was assessed by two independent reviewers, using validated tools available for the study design (n = 33), to evaluate changes in quality. Tools included the Newcastle-Ottawa Scale; Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2); and JBI Critical Appraisal Checklists. The questions in each tool led to an overall quality assessment of high quality (no concerns with study design, conduct, and/or analysis), moderate quality (minor concerns) or low quality (several concerns). Changes in data (e.g. methods, outcomes, results) for preprint-published pairs (n = 60) were assessed by one reviewer and verified by a second.
Results: Preprints and published versions that could be evaluated for quality (n = 25 pairs) were mostly assessed as low quality. Minimal to no change in quality from preprint to published version was identified: all observational studies (10/10), most case series (6/7) and all surveillance data analyses (3/3) had no change in overall quality, while some diagnostic test accuracy studies (3/5) improved or worsened their quality assessment scores. Among all pairs (n = 60), outcomes were often added in the published version (58%) and less commonly removed (18%). Numerical results changed from preprint to published version in 53% of studies; however, most of these studies (22/32) had only minor changes that did not affect the main conclusions.
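To make the pair-level bookkeeping concrete, a small illustrative tally over invented preprint-published pairs (not the study's data) might look like this:

```python
# Each record describes one preprint-published pair (hypothetical examples only).
pairs = [
    {"outcomes_added": True,  "outcomes_removed": False, "results_changed": True,  "change_minor": True},
    {"outcomes_added": True,  "outcomes_removed": False, "results_changed": False, "change_minor": None},
    {"outcomes_added": False, "outcomes_removed": True,  "results_changed": True,  "change_minor": False},
]

n = len(pairs)
added = sum(p["outcomes_added"] for p in pairs)
removed = sum(p["outcomes_removed"] for p in pairs)
changed = [p for p in pairs if p["results_changed"]]
minor = sum(p["change_minor"] for p in changed)

print(f"outcomes added: {added}/{n}, removed: {removed}/{n}")
print(f"numerical results changed: {len(changed)}/{n}, of which minor: {minor}/{len(changed)}")
```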
Conclusions: This study suggests that the minimal changes in quality, results and main conclusions from preprint to published versions support the use of preprints, and the application of the same critical evaluation tools to preprints as to published studies, in decision-making during a public health emergency.
{"title":"An evaluation of the preprints produced at the beginning of the 2022 mpox public health emergency.","authors":"Melanie Sterian, Anmol Samra, Kusala Pussegoda, Tricia Corrin, Mavra Qamar, Austyn Baumeister, Izza Israr, Lisa Waddell","doi":"10.1186/s41073-024-00152-w","DOIUrl":"10.1186/s41073-024-00152-w","url":null,"abstract":"<p><strong>Background: </strong>Preprints are scientific articles that have not undergone the peer-review process. They allow the latest evidence to be rapidly shared, however it is unclear whether they can be confidently used for decision-making during a public health emergency. This study aimed to compare the data and quality of preprints released during the first four months of the 2022 mpox outbreak to their published versions.</p><p><strong>Methods: </strong>Eligible preprints (n = 76) posted between May to August 2022 were identified through an established mpox literature database and followed to July 2024 for changes in publication status. Quality of preprints and published studies was assessed by two independent reviewers to evaluate changes in quality, using validated tools that were available for the study design (n = 33). Tools included the Newcastle-Ottawa Scale; Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2); and JBI Critical Appraisal Checklists. The questions in each tool led to an overall quality assessment of high quality (no concerns with study design, conduct, and/or analysis), moderate quality (minor concerns) or low quality (several concerns). Changes in data (e.g. methods, outcomes, results) for preprint-published pairs (n = 60) were assessed by one reviewer and verified by a second.</p><p><strong>Results: </strong>Preprints and published versions that could be evaluated for quality (n = 25 pairs) were mostly assessed as low quality. Minimal to no change in quality from preprint to published was identified: all observational studies (10/10), most case series (6/7) and all surveillance data analyses (3/3) had no change in overall quality, while some diagnostic test accuracy studies (3/5) improved or worsened their quality assessment scores. Among all pairs (n = 60), outcomes were often added in the published version (58%) and less commonly removed (18%). Numerical results changed from preprint to published in 53% of studies, however most of these studies (22/32) had changes that were minor and did not impact main conclusions of the study.</p><p><strong>Conclusions: </strong>This study suggests the minimal changes in quality, results and main conclusions from preprint to published versions supports the use of preprints, and the use of the same critical evaluation tools on preprints as applied to published studies, in decision-making during a public health emergency.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"11"},"PeriodicalIF":7.2,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11457328/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142382703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}