
Research integrity and peer review: latest publications

Predicting retracted research: a dataset and machine learning approaches.
IF 7.2 Q1 ETHICS Pub Date : 2025-06-11 DOI: 10.1186/s41073-025-00168-w
Aaron H A Fletcher, Mark Stevenson

Background: Retractions undermine the scientific record's reliability and can lead to the continued propagation of flawed research. This study aimed to (1) create a dataset aggregating retraction information with bibliographic metadata, (2) train and evaluate various machine learning approaches to predict article retractions, and (3) assess each feature's contribution to feature-based classifier performance using ablation studies.

Methods: An open-access dataset was developed by combining information from the Retraction Watch database and the OpenAlex API. Using a case-controlled design, retracted research articles were paired with non-retracted articles published in the same period. Traditional feature-based classifiers and models leveraging contextual language representations were then trained and evaluated. Model performance was assessed using accuracy, precision, recall, and the F1-score.
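
The abstract does not spell out the feature set or training code; purely as an illustration, a minimal sketch of such a pipeline might pull a few bibliographic features from the public OpenAlex API and fit a Random Forest with scikit-learn. The chosen features, the placeholder numbers, and the labels below are assumptions, not the authors' actual setup (their labels come from Retraction Watch).

```python
# Illustrative sketch only, not the authors' pipeline: derive simple bibliographic
# features for a DOI from the public OpenAlex API and fit a Random Forest classifier.
import requests
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

def openalex_features(doi: str) -> list[int]:
    """Fetch one work from OpenAlex and keep a few simple numeric features."""
    work = requests.get(f"https://api.openalex.org/works/doi:{doi}", timeout=30).json()
    return [
        work.get("publication_year", 0),
        work.get("cited_by_count", 0),
        len(work.get("authorships", [])),
        len(work.get("referenced_works", [])),
    ]

# In practice X would be built by calling openalex_features() for every article in the
# case-control sample, and y would hold Retraction Watch labels (1 = retracted).
# Placeholder numbers are used here so the sketch runs without network access.
X = [[2015, 12, 4, 30], [2016, 3, 2, 18], [2015, 45, 6, 52], [2017, 1, 1, 9],
     [2016, 20, 5, 41], [2018, 7, 3, 25], [2017, 2, 2, 11], [2018, 33, 4, 60]]
y = [1, 1, 0, 1, 0, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), zero_division=0))
```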

Results: The Llama 3.2 base model achieved the highest overall accuracy. The Random Forest classifier achieved a precision of 0.687 for identifying non-retracted articles, while the Llama 3.2 base model reached a precision of 0.683 for identifying retracted articles. Traditional feature-based classifiers generally outperformed most contextual language models, except for the Llama 3.2 base model, which showed competitive performance across several metrics.

Conclusions: Although no single model excelled across all metrics, our findings indicate that machine learning techniques can effectively support the identification of retracted research. These results provide a foundation for developing automated tools to assist publishers and reviewers in detecting potentially problematic publications. Further research should focus on refining these models and investigating additional features to improve predictive performance.

Trial registration: Not applicable.

{"title":"Predicting retracted research: a dataset and machine learning approaches.","authors":"Aaron H A Fletcher, Mark Stevenson","doi":"10.1186/s41073-025-00168-w","DOIUrl":"10.1186/s41073-025-00168-w","url":null,"abstract":"<p><strong>Background: </strong>Retractions undermine the scientific record's reliability and can lead to the continued propagation of flawed research. This study aimed to (1) create a dataset aggregating retraction information with bibliographic metadata, (2) train and evaluate various machine learning approaches to predict article retractions, and (3) assess each feature's contribution to feature-based classifier performance using ablation studies.</p><p><strong>Methods: </strong>An open-access dataset was developed by combining information from the Retraction Watch database and the OpenAlex API. Using a case-controlled design, retracted research articles were paired with non-retracted articles published in the same period. Traditional feature-based classifiers and models leveraging contextual language representations were then trained and evaluated. Model performance was assessed using accuracy, precision, recall, and the F1-score.</p><p><strong>Results: </strong>The Llama 3.2 base model achieved the highest overall accuracy. The Random Forest classifier achieved a precision of 0.687 for identifying non-retracted articles, while the Llama 3.2 base model reached a precision of 0.683 for identifying retracted articles. Traditional feature-based classifiers generally outperformed most contextual language models, except for the Llama 3.2 base model, which showed competitive performance across several metrics.</p><p><strong>Conclusions: </strong>Although no single model excelled across all metrics, our findings indicate that machine learning techniques can effectively support the identification of retracted research. These results provide a foundation for developing automated tools to assist publishers and reviewers in detecting potentially problematic publications. Further research should focus on refining these models and investigating additional features to improve predictive performance.</p><p><strong>Trial registration: </strong>Not applicable.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"9"},"PeriodicalIF":7.2,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12153192/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144268110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
False authorship: an explorative case study around an AI-generated article published under my name.
IF 7.2 Q1 ETHICS Pub Date : 2025-05-27 DOI: 10.1186/s41073-025-00165-z
Diomidis Spinellis

Background: The proliferation of generative artificial intelligence (AI) has facilitated the creation and publication of fraudulent scientific articles, often in predatory journals. This study investigates the extent of AI-generated content in the Global International Journal of Innovative Research (GIJIR), where a fabricated article was falsely attributed to me.

Methods: The entire GIJIR website was crawled to collect article PDFs and metadata. Automated scripts were used to extract the number of probable in-text citations, DOIs, affiliations, and contact emails. A heuristic based on the number of in-text citations was employed to identify the probability of AI-generated content. A subset of articles was manually reviewed for AI indicators such as formulaic writing and missing empirical data. Turnitin's AI detection tool was used as an additional indicator. The extracted data were compiled into a structured dataset, which was analyzed to examine human-authored and AI-generated articles.
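
The abstract does not report the citation patterns or cut-off behind this heuristic; the sketch below shows one plausible count-based screen, with assumed regular expressions and an assumed threshold.

```python
# Rough sketch of a count-based citation heuristic; the patterns and threshold are
# assumptions, not the study's actual implementation.
import re

CITATION_PATTERNS = [
    r"\[\d{1,3}(?:\s*[,-]\s*\d{1,3})*\]",           # numeric styles such as [3] or [3-5]
    r"\([A-Z][A-Za-z'-]+(?: et al\.)?,? \d{4}\)",   # author-year styles such as (Smith, 2020)
]

def count_probable_citations(text: str) -> int:
    """Count strings that look like in-text citations in extracted article text."""
    return sum(len(re.findall(pattern, text)) for pattern in CITATION_PATTERNS)

def flag_possible_ai_text(text: str, threshold: int = 3) -> bool:
    """Flag articles with very few apparent in-text citations for manual review."""
    return count_probable_citations(text) < threshold

sample = "Prior work has examined this [1], [2] and was extended by (Garcia et al., 2021)."
print(count_probable_citations(sample), flag_possible_ai_text(sample))
```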

Results: Of the 53 examined articles with the fewest in-text citations, at least 48 appeared to be AI-generated, while five showed signs of human involvement. Turnitin's AI detection scores confirmed high probabilities of AI-generated content in most cases, with scores reaching 100% for multiple papers. The analysis also revealed fraudulent authorship attribution, with AI-generated articles falsely assigned to researchers from prestigious institutions. The journal appears to use AI-generated content both to inflate its standing through misattributed papers and to attract authors aiming to inflate their publication record.

Conclusions: The findings highlight the risks posed by AI-generated and misattributed research articles, which threaten the credibility of academic publishing. Ways to mitigate these issues include strengthening identity verification mechanisms for DOIs and ORCIDs, enhancing AI detection methods, and reforming research assessment practices. Without effective countermeasures, the unchecked growth of AI-generated content in scientific literature could severely undermine trust in scholarly communication.

{"title":"False authorship: an explorative case study around an AI-generated article published under my name.","authors":"Diomidis Spinellis","doi":"10.1186/s41073-025-00165-z","DOIUrl":"10.1186/s41073-025-00165-z","url":null,"abstract":"<p><strong>Background: </strong>The proliferation of generative artificial intelligence (AI) has facilitated the creation and publication of fraudulent scientific articles, often in predatory journals. This study investigates the extent of AI-generated content in the Global International Journal of Innovative Research (GIJIR), where a fabricated article was falsely attributed to me.</p><p><strong>Methods: </strong>The entire GIJIR website was crawled to collect article PDFs and metadata. Automated scripts were used to extract the number of probable in-text citations, DOIs, affiliations, and contact emails. A heuristic based on the number of in-text citations was employed to identify the probability of AI-generated content. A subset of articles was manually reviewed for AI indicators such as formulaic writing and missing empirical data. Turnitin's AI detection tool was used as an additional indicator. The extracted data were compiled into a structured dataset, which was analyzed to examine human-authored and AI-generated articles.</p><p><strong>Results: </strong>Of the 53 examined articles with the fewest in-text citations, at least 48 appeared to be AI-generated, while five showed signs of human involvement. Turnitin's AI detection scores confirmed high probabilities of AI-generated content in most cases, with scores reaching 100% for multiple papers. The analysis also revealed fraudulent authorship attribution, with AI-generated articles falsely assigned to researchers from prestigious institutions. The journal appears to use AI-generated content both to inflate its standing through misattributed papers and to attract authors aiming to inflate their publication record.</p><p><strong>Conclusions: </strong>The findings highlight the risks posed by AI-generated and misattributed research articles, which threaten the credibility of academic publishing. Ways to mitigate these issues include strengthening identity verification mechanisms for DOIs and ORCIDs, enhancing AI detection methods, and reforming research assessment practices. Without effective countermeasures, the unchecked growth of AI-generated content in scientific literature could severely undermine trust in scholarly communication.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"8"},"PeriodicalIF":7.2,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12107892/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144153024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Research on policy mechanisms to address funding bias and conflicts of interest in biomedical research: a scoping review.
IF 7.2 Q1 ETHICS Pub Date : 2025-05-14 DOI: 10.1186/s41073-025-00164-0
S Scott Graham, Quinn Grundy, Nandini Sharma, Jade Shiva Edward, Joshua B Barbour, Justin F Rousseau, Zoltan P Majdik, Lisa Bero

Background: Industry funding and author conflicts of interest (COI) have been consistently shown to introduce bias into agenda-setting and results-reporting in biomedical research. Accordingly, maintaining public trust, diminishing patient harm, and securing the integrity of the biomedical research enterprise are critical policy priorities. In this context, a coordinated and methodical research effort is required to effectively identify which policy interventions are most likely to mitigate the risks of funding bias. This scoping review therefore aims to identify and synthesize the available research on policy mechanisms designed to address funding bias and COI in biomedical research.

Methods: We searched PubMed for peer-reviewed, empirical analyses of policy mechanisms designed to address industry sponsorship of research studies, author industry affiliation, and author COI at any stage of the biomedical research process and published between January 2009 and 28 August 2023. The review identified literature conducting five primary analysis types: (1) surveys of COI policies, (2) disclosure compliance analyses, (3) disclosure concordance analyses, (4) COI policy effects analyses, and (5) studies of policy perceptions and contexts.
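
The search step can be scripted against NCBI's public E-utilities; the sketch below is a generic date-bounded PubMed query in which the query string is a hypothetical placeholder, not the review's actual search strategy.

```python
# Minimal sketch of a date-bounded PubMed search via NCBI E-utilities; the query term
# below is a hypothetical placeholder, not the review's reported strategy.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search(term: str, mindate: str, maxdate: str, retmax: int = 100) -> list[str]:
    """Return PubMed IDs for records with publication dates between mindate and maxdate."""
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",   # filter on publication date
        "mindate": mindate,
        "maxdate": maxdate,
        "retmax": retmax,
        "retmode": "json",
    }
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

ids = pubmed_search('"conflict of interest"[Title/Abstract] AND policy', "2009/01/01", "2023/08/28")
print(len(ids))
```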

Results: A total of 6,385 articles were screened, and 81 studies were included. Studies were conducted in 11 geographic regions, with studies of international scope being the most common. Most available research is devoted to evaluating the prevalence, nature, and effects of author COI disclosure policies. This evidence demonstrates that while disclosure policies are pervasive, those policies are not consistently designed, implemented, or enforced. The available evidence also indicates that COI disclosure policies are not particularly effective in mitigating risk of bias or subsequent negative externalities.

Conclusions: The results of this review indicate that the COI policy landscape could benefit from a significant shift in the research agenda. The available literature predominantly focuses on a single policy intervention: author disclosure requirements. As a result, new lines of research are needed to establish a more robust evidence-based policy landscape. There is a particular need for implementation research, greater attention to the structural conditions that create COI, and evaluation of policy mechanisms other than disclosure.

{"title":"Research on policy mechanisms to address funding bias and conflicts of interest in biomedical research: a scoping review.","authors":"S Scott Graham, Quinn Grundy, Nandini Sharma, Jade Shiva Edward, Joshua B Barbour, Justin F Rousseau, Zoltan P Majdik, Lisa Bero","doi":"10.1186/s41073-025-00164-0","DOIUrl":"10.1186/s41073-025-00164-0","url":null,"abstract":"<p><strong>Background: </strong>Industry funding and author conflicts of interest (COI) have been consistently shown to introduce bias into agenda-setting and results-reporting in biomedical research. Accordingly, maintaining public trust, diminishing patient harm, and securing the integrity of the biomedical research enterprise are critical policy priorities. In this context, a coordinated and methodical research effort is required to effectively identify which policy interventions are most likely to mitigate against the risks of funding bias. Subsequently this scoping review aims to identify and synthesize the available research on policy mechanisms designed to address funding bias and COI in biomedical research.</p><p><strong>Methods: </strong>We searched PubMed for peer-reviewed, empirical analyses of policy mechanisms designed to address industry sponsorship of research studies, author industry affiliation, and author COI at any stage of the biomedical research process and published between January 2009 and 28 August 2023. The review identified literature conducting five primary analysis types: (1) surveys of COI policies, (2) disclosure compliance analyses, (3) disclosure concordance analyses, (4) COI policy effects analyses, and (5) studies of policy perceptions and contexts. Most available research is devoted to evaluating the prevalence, nature, and effects of author COI disclosure policies.</p><p><strong>Results: </strong>Six thousand three hundreds eighty five articles were screened, and 81 studies were included. Studies were conducted in 11 geographic regions, with studies of international scope being the most common. Most available research is devoted to evaluating the prevalence, nature, and effects of author COI disclosure policies. This evidence demonstrates that while disclosure policies are pervasive, those policies are not consistently designed, implemented, or enforced. The available evidence also indicates that COI disclosure policies are not particularly effective in mitigating risk of bias or subsequent negative externalities.</p><p><strong>Conclusions: </strong>The results of this review indicate that the COI policy landscape could benefit from a significant shift in the research agenda. The available literature predominantly focuses on a single policy intervention-author disclosure requirements. As a result, new lines of research are needed to establish a more robust evidence-based policy landscape. 
There is a particular need for implementation research, greater attention to the structural conditions that create COI, and evaluation of policy mechanisms other than disclosure.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"6"},"PeriodicalIF":7.2,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12076912/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144060408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Correction: Raising concerns on questionable ethics approvals - a case study of 456 trials from the Institut Hospitalo-Universitaire Méditerranée Infection.
IF 7.2 Q1 ETHICS Pub Date : 2025-05-09 DOI: 10.1186/s41073-025-00162-2
Fabrice Frank, Nans Florens, Gideon Meyerowitz-Katz, Jerome Barriere, Eric Billy, Veronique Saada, Alexander Samuel, Jacques Robert, Lonni Besancon
{"title":"Correction: Raising concerns on questionable ethics approvals - a case study of 456 trials from the Institut Hospitalo-Universitaire Méditerranée Infection.","authors":"Fabrice Frank, Nans Florens, Gideon Meyerowitz-Katz, Jerome Barriere, Eric Billy, Veronique Saada, Alexander Samuel, Jacques Robert, Lonni Besancon","doi":"10.1186/s41073-025-00162-2","DOIUrl":"https://doi.org/10.1186/s41073-025-00162-2","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"7"},"PeriodicalIF":7.2,"publicationDate":"2025-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12063339/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144045630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From 2015 to 2023, eight years of empirical research on research integrity: a scoping review.
IF 7.2 Q1 ETHICS Pub Date : 2025-04-30 DOI: 10.1186/s41073-025-00163-1
Baptiste Vendé, Anouk Barberousse, Stéphanie Ruphy

Background: Research on research integrity (RI) has grown exponentially over the past several decades. Although the earliest publications emerged in the 1980s, more than half of the existing literature has been produced within the last five years. Given that the most recent comprehensive literature review is now eight years old, the present study aims to extend and update previous findings.

Method: We conducted a systematic search of the Web of Science and Constellate databases for articles published between 2015 and 2023. To structure our overview and guide our inquiry, we addressed the following seven broad questions about the field: What topics does the empirical literature on RI explore? What are the primary objectives of the empirical literature on RI? What methodologies are prevalent in the empirical literature on RI? What populations or organizations are studied in the empirical literature on RI? Where are the empirical studies on RI conducted? Where is the empirical literature on RI published? To what degree is the general literature on RI grounded in empirical research? Additionally, we used the previous scoping review as a benchmark to identify emerging trends and shifts.

Results: Our search yielded a total of 3,282 studies, of which 660 articles met our inclusion criteria. All research questions were comprehensively addressed. Notably, we observed a significant shift in methodologies: the reliance on interviews and surveys decreased from 51% to 30%, whereas the application of meta-scientific methods increased from 17% to 31%. In terms of theoretical orientation, the previously dominant "Bad Apple" hypothesis declined from 54% to 30%, while the "Wicked System" hypothesis increased from 46% to 52%. Furthermore, there has been a pronounced trend toward testing solutions, rising from 31% to 56%, at the expense of merely describing the problem, which fell from 69% to 44%.

Conclusion: Three gaps highlighted eight years ago by the previous scoping review remain unresolved. Research on decision makers (e.g., scientists in positions of power, policymakers, accounting for 3%), the private research sector and patents (4.7%), and the peer review system (0.3%) continues to be underexplored. Even more concerning, if current trends persist, these gaps are likely to become increasingly problematic.

{"title":"From 2015 to 2023, eight years of empirical research on research integrity: a scoping review.","authors":"Baptiste Vendé, Anouk Barberousse, Stéphanie Ruphy","doi":"10.1186/s41073-025-00163-1","DOIUrl":"https://doi.org/10.1186/s41073-025-00163-1","url":null,"abstract":"<p><strong>Background: </strong>Research on research integrity (RI) has grown exponentially over the past several decades. Although the earliest publications emerged in the 1980 s, more than half of the existing literature has been produced within the last five years. Given that the most recent comprehensive literature review is now eight years old, the present study aims to extend and update previous findings.</p><p><strong>Method: </strong>We conducted a systematic search of the Web of Science and Constellate databases for articles published between 2015 and 2023. To structure our overview and guide our inquiry, we addressed the following seven broad questions about the field:-What topics does the empirical literature on RI explore? What are the primary objectives of the empirical literature on RI? What methodologies are prevalent in the empirical literature on RI? What populations or organizations are studied in the empirical literature on RI? Where are the empirical studies on RI conducted? Where is the empirical literature on RI published? To what degree is the general literature on RI grounded in empirical research? Additionally, we used the previous scoping review as a benchmark to identify emerging trends and shifts.</p><p><strong>Results: </strong>Our search yielded a total of 3,282 studies, of which 660 articles met our inclusion criteria. All research questions were comprehensively addressed. Notably, we observed a significant shift in methodologies: the reliance on interviews and surveys decreased from 51 to 30%, whereas the application of meta-scientific methods increased from 17 to 31%. In terms of theoretical orientation, the previously dominant \"Bad Apple\" hypothesis declined from 54 to 30%, while the \"Wicked System\" hypothesis increased from 46 to 52%. Furthermore, there has been a pronounced trend toward testing solutions, rising from 31 to 56% at the expense of merely describing the problem, which fell from 69 to 44%.</p><p><strong>Conclusion: </strong>Three gaps highlighted eight years ago by the previous scoping review remain unresolved. Research on decision makers (e.g., scientists in positions of power, policymakers, accounting for 3%), the private research sector and patents (4.7%), and the peer review system (0.3%) continues to be underexplored. Even more concerning, if current trends persist, these gaps are likely to become increasingly problematic.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"5"},"PeriodicalIF":7.2,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12042460/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144058381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Personal experience with AI-generated peer reviews: a case study.
IF 7.2 Q1 ETHICS Pub Date : 2025-04-07 DOI: 10.1186/s41073-025-00161-3
Nicholas Lo Vecchio

Background: While some recent studies have looked at large language model (LLM) use in peer review at the corpus level, to date there have been few examinations of instances of AI-generated reviews in their social context. The goal of this first-person account is to present my experience of receiving two anonymous peer review reports that I believe were produced using generative AI, as well as lessons learned from that experience.

Methods: This is a case report on the timeline of the incident, and my and the journal's actions following it. Supporting evidence includes text patterns in the reports, online AI detection tools and ChatGPT simulations; recommendations are offered for others who may find themselves in a similar situation. The primary research limitation of this article is that it is based on one individual's personal experience.

Results: After alleging the use of generative AI in December 2023, two months of back-and-forth ensued between myself and the journal, leading to my withdrawal of the submission. The journal denied any ethical breach, without taking an explicit position on the allegations of LLM use. Based on this experience, I recommend that authors engage in dialogue with journals on AI use in peer review prior to article submission; where undisclosed AI use is suspected, authors should proactively amass evidence, request an investigation protocol, escalate the matter as needed, involve independent bodies where possible, and share their experience with fellow researchers.

Conclusions: Journals need to promptly adopt transparent policies on LLM use in peer review, in particular requiring disclosure. Open peer review where identities of all stakeholders are declared might safeguard against LLM misuse, but accountability in the AI era is needed from all parties.

{"title":"Personal experience with AI-generated peer reviews: a case study.","authors":"Nicholas Lo Vecchio","doi":"10.1186/s41073-025-00161-3","DOIUrl":"10.1186/s41073-025-00161-3","url":null,"abstract":"<p><strong>Background: </strong>While some recent studies have looked at large language model (LLM) use in peer review at the corpus level, to date there have been few examinations of instances of AI-generated reviews in their social context. The goal of this first-person account is to present my experience of receiving two anonymous peer review reports that I believe were produced using generative AI, as well as lessons learned from that experience.</p><p><strong>Methods: </strong>This is a case report on the timeline of the incident, and my and the journal's actions following it. Supporting evidence includes text patterns in the reports, online AI detection tools and ChatGPT simulations; recommendations are offered for others who may find themselves in a similar situation. The primary research limitation of this article is that it is based on one individual's personal experience.</p><p><strong>Results: </strong>After alleging the use of generative AI in December 2023, two months of back-and-forth ensued between myself and the journal, leading to my withdrawal of the submission. The journal denied any ethical breach, without taking an explicit position on the allegations of LLM use. Based on this experience, I recommend that authors engage in dialogue with journals on AI use in peer review prior to article submission; where undisclosed AI use is suspected, authors should proactively amass evidence, request an investigation protocol, escalate the matter as needed, involve independent bodies where possible, and share their experience with fellow researchers.</p><p><strong>Conclusions: </strong>Journals need to promptly adopt transparent policies on LLM use in peer review, in particular requiring disclosure. Open peer review where identities of all stakeholders are declared might safeguard against LLM misuse, but accountability in the AI era is needed from all parties.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"4"},"PeriodicalIF":7.2,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11974187/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143796279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
How do oncology journals approach plagiarism? A website review.
IF 7.2 Q1 ETHICS Pub Date : 2025-03-31 DOI: 10.1186/s41073-025-00160-4
Johanna Goldberg, Heather Snijdewind, Céline Soudant, Kendra Godwin, Robin O'Hanlon

Background: Journals and publishers vary in the methods they use to detect plagiarism, when they implement these methods, and how they respond when plagiarism is suspected both before and after publication. This study aims to determine the policies and procedures of oncology journals for detecting and responding to suspected plagiarism in unpublished and published manuscripts.

Methods: We reviewed the websites of each journal in the Oncology category of Journal Citation Reports' Science Citation Index Expanded (SCIE) to determine how they detect and respond to suspected plagiarism. We collected data from each journal's website, or publisher webpages directly linked from journal websites, to ascertain what information about plagiarism policies and procedures is publicly available.

Results: There are 241 extant oncology journals included in SCIE, of which 224 (92.95%) have a plagiarism policy or mention plagiarism. Text similarity software or other plagiarism checking methods are mentioned by 207 of these (92.41%, and 85.89% of the 241 total journals examined). These text similarity checks occur most frequently at manuscript submission or initial editorial review. Journal or journal-linked publisher webpages frequently report following guidelines from the Committee on Publication Ethics (COPE) (135, 56.01%).

Conclusions: Oncology journals report similar methods for identifying and responding to plagiarism, with some variation based on the breadth, location, and timing of plagiarism detection. Journal policies and procedures are often informed by guidance from professional organizations, like COPE.

{"title":"How do oncology journals approach plagiarism? A website review.","authors":"Johanna Goldberg, Heather Snijdewind, Céline Soudant, Kendra Godwin, Robin O'Hanlon","doi":"10.1186/s41073-025-00160-4","DOIUrl":"10.1186/s41073-025-00160-4","url":null,"abstract":"<p><strong>Background: </strong>Journals and publishers vary in the methods they use to detect plagiarism, when they implement these methods, and how they respond when plagiarism is suspected both before and after publication. This study aims to determine the policies and procedures of oncology journals for detecting and responding to suspected plagiarism in unpublished and published manuscripts.</p><p><strong>Methods: </strong>We reviewed the websites of each journal in the Oncology category of Journal Citation Reports' Science Citation Index Expanded (SCIE) to determine how they detect and respond to suspected plagiarism. We collected data from each journal's website, or publisher webpages directly linked from journal websites, to ascertain what information about plagiarism policies and procedures is publicly available.</p><p><strong>Results: </strong>There are 241 extant oncology journals included in SCIE, of which 224 (92.95%) have a plagiarism policy or mention plagiarism. Text similarity software or other plagiarism checking methods are mentioned by 207 of these (92.41%, and 85.89% of the 241 total journals examined). These text similarity checks occur most frequently at manuscript submission or initial editorial review. Journal or journal-linked publisher webpages frequently report following guidelines from the Committee on Publication Ethics (COPE) (135, 56.01%).</p><p><strong>Conclusions: </strong>Oncology journals report similar methods for identifying and responding to plagiarism, with some variation based on the breadth, location, and timing of plagiarism detection. Journal policies and procedures are often informed by guidance from professional organizations, like COPE.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"3"},"PeriodicalIF":7.2,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11956406/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143756243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Analysis of indications for selectively missing results in comparative registry-based studies in medicine: a meta-research study.
IF 7.2 Q1 ETHICS Pub Date : 2025-03-05 DOI: 10.1186/s41073-025-00159-x
Paula Starke, Zhentian Zhang, Hannah Papmeier, Dawid Pieper, Tim Mathes

Background: We assess whether there are indications that the results of registry-based studies comparing the effectiveness of interventions might be selectively missing depending on statistical significance (p < 0.05).

Methods: Eligibility criteria: Sample of cohort-type studies that used data from a patient registry, compared two study arms for assessing a medical intervention, and reported an effect for a binary outcome. Information sources: We searched PubMed to identify registries in seven different medical specialties in 2022/23. Subsequently, we included all studies that satisfied the eligibility criteria for each of the identified registries and collected p-values from these studies. Synthesis of results: We plotted the cumulative distribution of p-values and a histogram of absolute z-scores for visual inspection of selectively missing results because of p-hacking, selective reporting, or publication bias. In addition, we tested for publication bias by applying a caliper test.
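
As a stylized illustration of these two checks, the sketch below applies a simple significance-share summary and one common form of the caliper test to fabricated p-values; the study's exact caliper specification is not reported in the abstract.

```python
# Stylized sketch with fabricated p-values, not data from the study.
import numpy as np
from scipy.stats import binomtest

p_values = np.array([0.001, 0.004, 0.012, 0.030, 0.041, 0.046, 0.048, 0.049,
                     0.060, 0.110, 0.230, 0.380, 0.550, 0.720])

# A sharp rise in the cumulative distribution just below 0.05, followed by a flat tail,
# would be consistent with selectively missing non-significant results.
print(f"share of results with p < 0.05: {(p_values < 0.05).mean():.2f}")

# Caliper test (one common form): compare how many results fall just below vs. just
# above the threshold inside a narrow window, here 10% of 0.05 on either side.
caliper = 0.10 * 0.05
just_below = int(((p_values >= 0.05 - caliper) & (p_values < 0.05)).sum())
just_above = int(((p_values >= 0.05) & (p_values < 0.05 + caliper)).sum())
test = binomtest(just_below, just_below + just_above, p=0.5, alternative="greater")
print(just_below, just_above, f"binomial p = {test.pvalue:.3f}")
```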

Results: Included studies: Sample of 150 registry-based cohort-type studies. Synthesis of results: The cumulative distribution of p-values displays an abrupt, heavy increase just below the significance threshold of 0.05, while the distribution above the threshold shows a slow, gradual increase. The p-value of the caliper test with a 10% caliper was 0.011 (k = 2, N = 13).

Conclusions: We found that the results of registry-based studies might be selectively missing. Results from registry-based studies comparing medical interventions should be interpreted very cautiously, as positive findings could result from p-hacking, publication bias, or selective reporting. Prospective registration of such studies is necessary and should be made mandatory both in regulatory contexts and for publication in journals. Further research is needed to determine the main reasons for selectively missing results to support the development and implementation of more specific methods for preventing selectively missing results.

{"title":"Analysis of indications for selectively missing results in comparative registry-based studies in medicine: a meta-research study.","authors":"Paula Starke, Zhentian Zhang, Hannah Papmeier, Dawid Pieper, Tim Mathes","doi":"10.1186/s41073-025-00159-x","DOIUrl":"10.1186/s41073-025-00159-x","url":null,"abstract":"<p><strong>Background: </strong>We assess if there are indications that results of registry-based studies comparing the effectiveness of interventions might be selectively missing depending on the statistical significance (p < 0.05).</p><p><strong>Methods: </strong>Eligibility criteria Sample of cohort type studies that used data from a patient registry, compared two study arms for assessing a medical intervention, and reported an effect for a binary outcome. Information sources We searched PubMed to identify registries in seven different medical specialties in 2022/23. Subsequently, we included all studies that satisfied the eligibility criteria for each of the identified registries and collected p-values from these studies. Synthesis of results We plotted the cumulative distribution of p-values and a histogram of absolute z-scores for visual inspection of selectively missing results because of p-hacking, selective reporting, or publication bias. In addition, we tested for publication bias by applying a caliper test.</p><p><strong>Results: </strong>Included studies Sample of 150 registry-based cohort type studies. Synthesis of results The cumulative distribution of p-values displays an abrupt, heavy increase just below the significance threshold of 0.05 while the distribution above the threshold shows a slow, gradual increase. The p-value of the caliper test with a 10% caliper was 0.011 (k = 2, N = 13).</p><p><strong>Conclusions: </strong>We found that the results of registry-based studies might be selectively missing. Results from registry-based studies comparing medical interventions should be interpreted very cautiously, as positive findings could be a result from p-hacking, publication bias, or selective reporting. Prospective registration of such studies is necessary and should be made mandatory both in regulatory contexts and for publication in journals. Further research is needed to determine the main reasons for selectively missing results to support the development and implementation of more specific methods for preventing selectively missing results.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"2"},"PeriodicalIF":7.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11881244/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143560279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Policies on artificial intelligence chatbots among academic publishers: a cross-sectional audit.
IF 7.2 Q1 ETHICS Pub Date : 2025-02-28 DOI: 10.1186/s41073-025-00158-y
Daivat Bhavsar, Laura Duffy, Hamin Jo, Cynthia Lokker, R Brian Haynes, Alfonso Iorio, Ana Marusic, Jeremy Y Ng

Background: Artificial intelligence (AI) chatbots are novel computer programs that can generate text or content in a natural language format. Academic publishers are adapting to the transformative role of AI chatbots in producing or facilitating scientific research. This study aimed to examine the policies established by scientific, technical, and medical academic publishers for defining and regulating the authors' responsible use of AI chatbots.

Methods: This study performed a cross-sectional audit on the publicly available policies of 162 academic publishers, indexed as members of the International Association of the Scientific, Technical, and Medical Publishers (STM). Data extraction of publicly available policies on the webpages of all STM academic publishers was performed independently, in duplicate, with content analysis reviewed by a third contributor (September 2023-December 2023). Data was categorized into policy elements, such as 'proofreading' and 'image generation'. Counts and percentages of 'yes' (i.e., permitted), 'no', and 'no available information' (NAI) were established for each policy element.
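
As a small hypothetical example of this tabulation step, counts and percentages of 'yes', 'no', and 'NAI' per policy element could be computed as follows; the publishers, elements, and coded values shown are invented, not the study's data.

```python
# Hypothetical example of tabulating policy-element codes; the rows are invented.
import pandas as pd

records = pd.DataFrame(
    {
        "publisher": ["A", "B", "C", "D"],
        "proofreading": ["yes", "no", "NAI", "yes"],
        "image generation": ["no", "no", "NAI", "yes"],
    }
)

elements = ["proofreading", "image generation"]
summary = pd.concat(
    {el: records[el].value_counts().reindex(["yes", "no", "NAI"], fill_value=0)
     for el in elements},
    axis=1,
)
print(summary)                                    # counts per policy element
print((summary / len(records) * 100).round(1))    # percentages per policy element
```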

Results: A total of 56/162 (34.6%) STM academic publishers had a publicly available policy guiding the authors' use of AI chatbots. No policy allowed authorship for AI chatbots (or other AI tool). Most (49/56 or 87.5%) required specific disclosure of AI chatbot use. Four policies/publishers placed a complete ban on the use of AI chatbots by authors.

Conclusions: Only a third of STM academic publishers had publicly available policies as of December 2023. A re-examination of all STM members in 12-18 months may uncover evolving approaches toward AI chatbot use with more academic publishers having a policy.

{"title":"Policies on artificial intelligence chatbots among academic publishers: a cross-sectional audit.","authors":"Daivat Bhavsar, Laura Duffy, Hamin Jo, Cynthia Lokker, R Brian Haynes, Alfonso Iorio, Ana Marusic, Jeremy Y Ng","doi":"10.1186/s41073-025-00158-y","DOIUrl":"10.1186/s41073-025-00158-y","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) chatbots are novel computer programs that can generate text or content in a natural language format. Academic publishers are adapting to the transformative role of AI chatbots in producing or facilitating scientific research. This study aimed to examine the policies established by scientific, technical, and medical academic publishers for defining and regulating the authors' responsible use of AI chatbots.</p><p><strong>Methods: </strong>This study performed a cross-sectional audit on the publicly available policies of 162 academic publishers, indexed as members of the International Association of the Scientific, Technical, and Medical Publishers (STM). Data extraction of publicly available policies on the webpages of all STM academic publishers was performed independently, in duplicate, with content analysis reviewed by a third contributor (September 2023-December 2023). Data was categorized into policy elements, such as 'proofreading' and 'image generation'. Counts and percentages of 'yes' (i.e., permitted), 'no', and 'no available information' (NAI) were established for each policy element.</p><p><strong>Results: </strong>A total of 56/162 (34.6%) STM academic publishers had a publicly available policy guiding the authors' use of AI chatbots. No policy allowed authorship for AI chatbots (or other AI tool). Most (49/56 or 87.5%) required specific disclosure of AI chatbot use. Four policies/publishers placed a complete ban on the use of AI chatbots by authors.</p><p><strong>Conclusions: </strong>Only a third of STM academic publishers had publicly available policies as of December 2023. A re-examination of all STM members in 12-18 months may uncover evolving approaches toward AI chatbot use with more academic publishers having a policy.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"1"},"PeriodicalIF":7.2,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11869395/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143532223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Publisher Correction: Developing the Clarity and Openness in Reporting: E3-based (CORE) Reference user manual for creation of clinical study reports in the era of clinical trial transparency.
IF 7.2 Q1 ETHICS Pub Date : 2024-12-23 DOI: 10.1186/s41073-024-00157-5
Samina Hamilton, Aaron B Bernstein, Graham Blakey, Vivien Fagan, Tracy Farrow, Debbie Jordan, Walther Seiler, Anna Shannon, Art Gertel
{"title":"Publisher Correction: Developing the Clarity and Openness in Reporting: E3-based (CORE) Reference user manual for creation of clinical study reports in the era of clinical trial transparency.","authors":"Samina Hamilton, Aaron B Bernstein, Graham Blakey, Vivien Fagan, Tracy Farrow, Debbie Jordan, Walther Seiler, Anna Shannon, Art Gertel","doi":"10.1186/s41073-024-00157-5","DOIUrl":"10.1186/s41073-024-00157-5","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"16"},"PeriodicalIF":7.2,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11668038/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142883969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0