
Latest publications in Cochrane Evidence Synthesis and Methods

Systematic Reviews as Part of Doctoral Theses and for the Promotion to Associate Professor: A Descriptive Study of University Policies in Sweden.
Pub Date : 2026-01-14 eCollection Date: 2026-01-01 DOI: 10.1002/cesm.70069
Martin Ringsten, Lea Styrmisdottir, Matilda Naesström, Minna Johansson, Matteo Bruschettini, Susanna M Wallerstedt

Background: Almost a decade ago, about half of biomedical PhD programs across Europe specifically stated that systematic reviews could not be accepted as part of a doctoral thesis, illustrating limited merit value at that time. The aim of this study was to explore current Swedish university policies on this research design.

Methods: Policy documents for PhD theses and applications to associate professor positions were obtained from all medical faculties at universities in Sweden. Instructions regarding systematic reviews, with a focus on their merit value and related aspects, were independently extracted and categorized by two authors, with discrepancies resolved through consensus discussions.

Results: All seven medical faculties accepted at least one systematic review within a PhD thesis, five restricted the number of such studies accepted, and five provided instructions regarding this study design. Regarding policies for promotion to associate professor, six medical faculties accepted at least one published systematic review for merit recognition (the remaining one required meta-analyses for acceptance), and three explicitly restricted the number of systematic reviews. No restrictions or guidance were provided for other designs intended to answer specific research questions.

Conclusion: As of 2025, systematic reviews appear to be generally recognized as contributing to authors' academic merit. For this research design exclusively, some universities impose restrictions that may limit its recognition, and some provide guidance that may help ensure quality in reporting. These findings may encourage research to evaluate the merit value of systematic reviews in other settings and to examine the potential implications of restrictions and guidance in policy documents.

{"title":"Systematic Reviews as Part of Doctoral Theses and for the Promotion to Associate Professor: A Descriptive Study of University Policies in Sweden.","authors":"Martin Ringsten, Lea Styrmisdottir, Matilda Naesström, Minna Johansson, Matteo Bruschettini, Susanna M Wallerstedt","doi":"10.1002/cesm.70069","DOIUrl":"https://doi.org/10.1002/cesm.70069","url":null,"abstract":"<p><strong>Background: </strong>Almost a decade ago, about half of biomedical PhD programs across Europe specifically stated that systematic reviews could not be accepted as part of a doctoral thesis, illustrating limited merit value at that time. The aim of this study was to explore current Swedish university policies on this research design.</p><p><strong>Methods: </strong>Policy documents for PhD theses and applications to associate professor positions were obtained from all medical faculties at universities in Sweden. Instructions regarding systematic reviews, with focus on their merit value and related aspects, were independently extracted and categorized by two authors, with discrepancies resolved in consensus discussions.</p><p><strong>Results: </strong>All seven medical faculties accepted at least one systematic review within a PhD thesis, five restricted the number of such studies accepted, and five provided instructions regarding this study design. Regarding policies for promotion to associate professor, six medical faculties accepted at least one published systematic review to merit recognition-the remaining one required meta-analyses for acceptance-and three explicitly restricted the number of systematic reviews. No restrictions or guidance were provided for other designs intended to answer specific research questions.</p><p><strong>Conclusion: </strong>As of 2025, systematic reviews appear to be generally recognized as contributing to authors' academic merit. For this research design exclusively, some universities impose restrictions that may limit their recognition, and some provide guidance which may help ensure quality in reporting. These findings may encourage research to evaluate the merit value of systematic reviews in other settings, and to examine potential implications of restrictions and guidance in policy documents.</p>","PeriodicalId":100286,"journal":{"name":"Cochrane Evidence Synthesis and Methods","volume":"4 1","pages":"e70069"},"PeriodicalIF":0.0,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12806540/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146000321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Co-production Evaluation Tool Informed by Co-production Workshops for Use in Evidence Synthesis Contexts
Pub Date : 2026-01-07 DOI: 10.1002/cesm.70065
Meena Khatwa, Vanessa Bennett, Rachael C. Edwards, Lisa Richardson, Phuong Tu Nguyen, Sajid Saleem, Sylvia Chaires, Alison O'Mara-Eves, Dylan Kneale
Aim

We aimed to co-produce a tool for evaluating co-production within evidence syntheses.

Background

Participatory approaches are recommended to enhance the salience and quality of evidence syntheses, and there is an increasing onus on co-producing evidence synthesis. Co-production is a way of working where research generators, beneficiaries and other interest holders work in equal partnership and for mutual benefit.

Methods

To develop our approach, we:

- Examined selected existing tools and frameworks that could be useful in evaluating co-production
- Developed an initial tool that was then modified through input from co-production workshops
- Piloted the tool and evaluation approach in a project as part of research involving co-producing a logic model to support evidence syntheses.

Results

The existing tools, guidance and resources we examined were deemed to be oriented towards supporting the conduct and reporting of co-production, rather than evaluating what happens and how. This provided a basis for co-producing a new tool. A new tool was developed that captures our perspectives on: positionality and expertise; motivations and expected benefits; clarity of role and expectations; project involvement and contributions; value and recognition; skills, knowledge, and personal growth; relationships and networking; comfort, support, and accessibility; and decision-making and power sharing. We reflected that the tool and the process for administering it worked well, and we liked the process of collective sensemaking.

Conclusions

We believe that the tool (which we refer to as the STRAPS tool: Synthesising Through Reflection And Participatory Sense-making) could provide a useful resource and starting point for other review teams who wish to evaluate co-production in their reviews, and we encourage others to share their experiences with us.

Implications

Co-production enhances the quality of evidence syntheses. Using the STRAPS tool can help review teams unpack the process using a standardized approach.
{"title":"A Co-production Evaluation Tool Informed by Co-production Workshops for Use in Evidence Synthesis Contexts","authors":"Meena Khatwa,&nbsp;Vanessa Bennett,&nbsp;Rachael C. Edwards,&nbsp;Lisa Richardson,&nbsp;Phuong Tu Nguyen,&nbsp;Sajid Saleem,&nbsp;Sylvia Chaires,&nbsp;Alison O'Mara-Eves,&nbsp;Dylan Kneale","doi":"10.1002/cesm.70065","DOIUrl":"10.1002/cesm.70065","url":null,"abstract":"&lt;div&gt;\u0000 \u0000 \u0000 &lt;section&gt;\u0000 \u0000 &lt;h3&gt; Aim&lt;/h3&gt;\u0000 \u0000 &lt;p&gt;We aimed to co-produce a tool for evaluating co-production within evidence syntheses.&lt;/p&gt;\u0000 &lt;/section&gt;\u0000 \u0000 &lt;section&gt;\u0000 \u0000 &lt;h3&gt; Background&lt;/h3&gt;\u0000 \u0000 &lt;p&gt;Participatory approaches are recommended to enhance the salience and quality of evidence syntheses, and there is an increasing onus on co-producing evidence synthesis. Co-production is a way of working where research generators, beneficiaries and other interest holders work in equal partnership and for mutual benefit.&lt;/p&gt;\u0000 &lt;/section&gt;\u0000 \u0000 &lt;section&gt;\u0000 \u0000 &lt;h3&gt; Methods&lt;/h3&gt;\u0000 \u0000 &lt;div&gt;To develop our approach, we:\u0000\u0000 &lt;ul&gt;\u0000 \u0000 &lt;li&gt;\u0000 &lt;p&gt;Examined selected existing tools and frameworks that could be useful in evaluating co-production&lt;/p&gt;\u0000 &lt;/li&gt;\u0000 \u0000 &lt;li&gt;\u0000 &lt;p&gt;Developed an initial tool that was then modified through input from co-production workshops&lt;/p&gt;\u0000 &lt;/li&gt;\u0000 \u0000 &lt;li&gt;\u0000 &lt;p&gt;Piloted the tool and evaluation approach in a project as part of research involving co-producing a logic model to support evidence syntheses.&lt;/p&gt;\u0000 &lt;/li&gt;\u0000 &lt;/ul&gt;\u0000 &lt;/div&gt;\u0000 &lt;/section&gt;\u0000 \u0000 &lt;section&gt;\u0000 \u0000 &lt;h3&gt; Results&lt;/h3&gt;\u0000 \u0000 &lt;p&gt;The existing tools guidance and resources we examined were deemed to be oriented towards supporting the conduct and reporting of co-production, rather than evaluating what happens and how. This provided a basis for co-producing a new tool. A new tool was developed that captures our perspectives on: positionality and expertise; motivations and expected benefits; clarity of role and expectations; project involvement and contributions; value and recognition; skills, knowledge, and personal growth; relationships and networking; comfort, support, and accessibility; and decision-making and power sharing. 
We reflected that the tool and process for administering the tool worked well, and we liked the process of collective sensemaking.&lt;/p&gt;\u0000 &lt;/section&gt;\u0000 \u0000 &lt;section&gt;\u0000 \u0000 &lt;h3&gt; Conclusions&lt;/h3&gt;\u0000 \u0000 &lt;p&gt;We believe that the tool (which we refer to as the STRAPS tool – Synthesising Through Reflection And Participatory Sense-making) could provide a useful resource and starting point to other review teams who wish to evaluate co-production in their reviews and encourage others to share their experiences with us.&lt;/p&gt;\u0000 &lt;/section&gt;\u0000 \u0000 &lt;section&gt;\u0000 ","PeriodicalId":100286,"journal":{"name":"Cochrane Evidence Synthesis and Methods","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12782252/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145954696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sensitivity Analysis in Meta-Analysis: A Tutorial
Pub Date : 2026-01-05 DOI: 10.1002/cesm.70067
Nyan Min Aung, Ivan Jurak, Seemab Mehmood, Emma Axon

This tutorial explains when systematic review authors may consider performing a sensitivity analysis in a meta-analysis. Such scenarios include removing studies at high risk of bias, exploring the effect of outliers, and examining differences in study characteristics (e.g., participants' age, study design). Examples are provided, along with advice on how to interpret and report the results. The tutorial also explains the differences between subgroup and sensitivity analyses and describes the disadvantages of a sensitivity analysis. To support this tutorial, a link to an online module, which includes videos and quizzes, is also provided.
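As a minimal illustration of the first scenario, here is a sketch of a sensitivity analysis that re-pools a fixed-effect inverse-variance meta-analysis (on log odds ratios) after removing studies flagged as high risk of bias. The studies, effects, and flags are invented for illustration, and a real analysis would typically use a dedicated meta-analysis package.

```python
# Sensitivity analysis sketch: re-pool the effect after removing studies
# flagged as high risk of bias, using fixed-effect inverse-variance weighting.
import math

studies = [
    # (label, effect estimate (log odds ratio), standard error, high risk of bias?)
    ("Study A", 0.40, 0.15, False),
    ("Study B", 0.10, 0.20, True),
    ("Study C", 0.35, 0.10, False),
    ("Study D", 0.90, 0.30, True),
]

def pool(rows):
    """Fixed-effect inverse-variance pooled estimate with a 95% CI."""
    weights = [1 / se**2 for _, _, se, _ in rows]
    est = sum(w * y for w, (_, y, _, _) in zip(weights, rows)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return est, est - 1.96 * se_pooled, est + 1.96 * se_pooled

main = pool(studies)
sensitivity = pool([s for s in studies if not s[3]])  # drop high risk-of-bias studies

print("All studies:   est=%.2f (95%% CI %.2f to %.2f)" % main)
print("Low risk only: est=%.2f (95%% CI %.2f to %.2f)" % sensitivity)
```

If the pooled estimate or its confidence interval changes materially between the two runs, the review's conclusions may be sensitive to the flagged studies.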

{"title":"Sensitivity Analysis in Meta-Analysis: A Tutorial","authors":"Nyan Min Aung,&nbsp;Ivan Jurak,&nbsp;Seemab Mehmood,&nbsp;Emma Axon","doi":"10.1002/cesm.70067","DOIUrl":"https://doi.org/10.1002/cesm.70067","url":null,"abstract":"<p>This tutorial explains when systematic review authors may consider performing a sensitivity analysis in a meta-analysis. Such scenarios include removing studies at high risk of bias, exploring the effect of outliers and examining differences in study characteristics (e.g., participants’ age, study design). In addition, examples are provided, as well as advice on how to interpret and report the results. The tutorial also explains the differences between subgroup and sensitivity analyses, as well as describing the disadvantages of a sensitivity analysis. To support this tutorial, a link to an online module, which includes videos and quizzes, is also provided.</p>","PeriodicalId":100286,"journal":{"name":"Cochrane Evidence Synthesis and Methods","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cesm.70067","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145909311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Responsible Integration of Artificial Intelligence in Rapid Reviews: A Position Statement From the Cochrane Rapid Reviews Methods Group
Pub Date : 2025-11-24 DOI: 10.1002/cesm.70063
Gerald Gartlehner, Barbara Nussbaumer-Streit, Candyce Hamel, Chantelle Garritty, Ursula Griebler, Valerie Jean King, Declan Devane, Chris Kamel
Rapidly evolving artificial intelligence (AI) technologies are increasingly used to accelerate literature review processes. A recent review and evidence map identified almost 100 studies published since 2021 assessing AI applications in evidence synthesis [1]. These technologies span from machine-learning classifiers to generative large-language models (LLMs). Recently, a preprint reported that a tool powered by LLMs autonomously reproduced and updated 12 Cochrane reviews in just 2 days [2], sparking debate about when and how AI can be used safely and effectively to support systematic and rapid reviews.

In this position statement, the Cochrane Rapid Reviews Methods Group outlines its stance on the use of AI in rapid reviews. Rapid reviews encompass various types of evidence synthesis, and while some AI tools have been developed for specific review types, such as qualitative evidence syntheses, most are designed for more general application across review methodologies.

The main recommendations are summarized in Textbox 1. They complement a recently released position statement by Cochrane and other evidence synthesis organizations on the use of AI in evidence synthesis [3].

Semi-automation of discrete steps in the evidence synthesis process, where algorithms assist but do not replace human reviewers, is not new. Cochrane, for instance, was an early adopter with the development of the randomized controlled trial (RCT) Classifier, a machine learning tool that identifies RCTs during abstract screening [4]. Semi-automation plays a different role in rapid reviews than in traditional systematic reviews, where methodological certainty is typically prioritized. Because rapid reviews already balance rigor and timeliness, teams may be more willing to adopt efficiency-enhancing tools sooner.

The advent of generative LLMs, such as ChatGPT [5] or Gemini [6], has substantially expanded the potential for AI to support tasks in evidence synthesis. Unlike earlier machine learning tools that required extensive task-specific training data, LLMs can be deployed in zero-shot settings, meaning they can be applied to a given task without prior training or fine-tuning. This dramatically lowers the barrier to entry, offering a more accessible pathway for integrating AI into review workflows. Multiple studies have assessed the utility of generative LLMs to support the development of search strategies [7], literature screening [8-10], risk of bias assessment [11, 12], and data extraction [8, 13-15]. However, findings to date indicate highly variable performance, ranging from high accuracy in some tasks to concerning errors in others [1]. In parallel, developers of literature review software have begun integrating LLMs into their products.

Importantly, in rapid reviews, AI has the potential not only to enhance efficiency but also to improve quality.
Many rapid reviews rely on a single reviewer for key tasks such as study selection, data extraction, risk of bias assessment, or determining the certainty of evidence, which increases the risk of undetected errors. In these situations, AI can act as a scalable quality-control tool, helping to identify inconsistencies, flag missing data, or suggest overlooked studies. For example, the Cochrane rapid review guidance recommends switching to single-reviewer screening if agreement between reviewers is high during dual screening of abstracts [16, 17], which may miss some eligible studies. In such cases, AI can complement human judgment and mitigate the risks associated with single-reviewer workflows: review software with integrated AI can re-check abstracts excluded during single-reviewer screening, reducing the likelihood of erroneous exclusions.

Another error-prone step of the review process that could benefit from AI support is data extraction. Studies have shown that, depending on reviewer experience and topic complexity, up to 50% of human-extracted data elements contain errors [19, 20]. Using AI as a second reviewer for data extraction can improve data quality and reduce errors [21]. Ultimately, however, reviewers must decide whether the additional effort of using AI is feasible for their rapid review.

Despite its great potential, AI also carries risks. Specifically, generative LLMs can produce incorrect responses, fabricate data or references, perpetuate biases, and spread misinformation. To ensure that AI integration strengthens rather than undermines the credibility of rapid reviews, continuous human oversight (although itself fallible) must remain a core principle of any AI-supported evidence synthesis effort. A recent review of the use of AI in evidence synthesis found that, during screening, incorrect inclusion decisions by AI tools ranged from 0% to 29% (median = 10%), and incorrect data extractions from 4% to 31% (median = 14%).

The RAISE guidance on the responsible use of AI in evidence synthesis, developed through a multi-stakeholder consensus process, provides fundamental principles for the transparent, ethical, and scientifically sound integration of AI in evidence synthesis. Cochrane has been actively involved in RAISE, both as a contributor and as an implementing organization, reflecting its commitment to methodological rigor in the face of rapid technological advances. Members of the Cochrane Rapid Reviews Methods Group were also involved in RAISE's development, bringing expertise from the rapid review field, where the pressure for speed makes AI adoption particularly attractive, but also potentially risky. RAISE emphasizes key principles such as transparency of reporting, human oversight, reproducibility, and fit-for-purpose evaluation. It cautions against over-reliance on AI systems that have not been robustly validated and stresses the importance of disclosing when, how, and for which tasks automation is used. As generative LLMs and other AI tools become increasingly accessible, adherence to the RAISE principles is essential to maintaining the credibility and utility of AI-assisted evidence synthesis.

When using AI tools, researchers also need to verify that any uploaded or processed material is permitted under fair use or equivalent scholarly exceptions, and that the AI model itself complies with copyright and data protection standards. For example, some professional or enterprise versions of generative LLMs explicitly guarantee that uploaded material will not be used for model training or redistribution, providing greater assurance of confidentiality and legal compliance. Transparent documentation of AI tool use, including model version, purpose, and data inputs, should be maintained to preserve the reproducibility, accountability, and ethical integrity of evidence synthesis.

Cochrane, together with the Campbell Collaboration, JBI, and the Collaboration for Environmental Evidence, has endorsed RAISE in a position statement on the use of AI in evidence synthesis [3]. The statement emphasizes that evidence synthesists remain fully responsible for their work and must ensure ethical and legal compliance when using AI or automation. Any use of AI should be clearly justified, and the tool must be methodologically sound, ensuring that it does not compromise the credibility or reliability of review findings. Importantly, all AI-assisted tasks require human oversight, and any AI-generated or AI-informed judgments must be transparently reported in the final synthesis.

Authors should not use AI to fully automate an entire rapid review or any of its methodological steps. Doing so risks introducing errors, bias, and a lack of transparency, ultimately undermining the credibility and reproducibility of rapid reviews; such approaches also violate established Cochrane methods standards. Authors must continue to follow existing methodological guidance for Cochrane rapid reviews and uphold Cochrane's standards for transparency, conflicts of interest, accountability, and scientific rigor.

The use of AI tools is acceptable, and even encouraged, when it helps improve review quality. When resource constraints require that a task such as study selection or data extraction be completed by a single reviewer, AI tools can be used to provide a second check or offer independent suggestions. In this way, rapid review authors can add a layer of quality assurance with minimal extra effort. In all cases of AI use, however, human reviewers must remain responsible for verifying all AI outputs and for making final decisions. Human reviewers must continue to resolve ambiguities, apply inclusion criteria thoughtfully, and interpret findings within the broader context of clinical relevance or policy implications. AI tools cannot be listed as authors and cannot be held accountable for their own errors.

Transparency about AI use is therefore essential. Review protocols must document the intention to incorporate AI in the review process. The reported review methods and the new "Artificial intelligence use disclosure" section of Cochrane reports must clearly state which tools were used, how they were applied, and what role they played in the review process. If reviewers use generative LLMs, the model version and prompts need to be documented, including the degree of human oversight and any validation undertaken.
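The "second check" workflow described above can be sketched as a small screening loop. This is a hypothetical illustration rather than any Cochrane tool: the Record fields, the prompt wording, and call_llm (a stub returning a canned answer so the sketch runs) are all assumptions to be replaced by a team's actual model client.

```python
# Hypothetical sketch: an LLM re-checks records that a single human
# reviewer excluded, flagging possible erroneous exclusions for human
# re-adjudication. The model never makes a final decision on its own.
from dataclasses import dataclass

@dataclass
class Record:
    record_id: str
    title: str
    abstract: str
    human_decision: str  # "include" or "exclude"

PROMPT = (
    "Inclusion criteria: {criteria}\n"
    "Title: {title}\nAbstract: {abstract}\n"
    "Answer with exactly one word: include or exclude."
)

def call_llm(prompt: str) -> str:
    """Placeholder for a real model client; returns a canned answer here
    so the sketch runs end-to-end without network access."""
    return "exclude"

def flag_possible_misses(records, criteria):
    """Return IDs of excluded records the LLM would have included."""
    flags = []
    for r in records:
        if r.human_decision != "exclude":
            continue  # only re-check exclusions, the error that loses studies
        verdict = call_llm(PROMPT.format(criteria=criteria,
                                         title=r.title, abstract=r.abstract))
        if verdict.strip().lower() == "include":
            flags.append(r.record_id)
    return flags

records = [
    Record("r1", "Trial of drug X", "A randomized trial of X...", "exclude"),
    Record("r2", "Cohort study of Y", "A retrospective cohort...", "exclude"),
]
print(flag_possible_misses(records, "randomized trials of drug X in adults"))
```

Consistent with the position statement, anything flagged goes back to a human reviewer, and the model, its version, and the prompt would be documented in the review's AI-use disclosure.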
{"title":"Responsible Integration of Artificial Intelligence in Rapid Reviews: A Position Statement From the Cochrane Rapid Reviews Methods Group","authors":"Gerald Gartlehner,&nbsp;Barbara Nussbaumer-Streit,&nbsp;Candyce Hamel,&nbsp;Chantelle Garritty,&nbsp;Ursula Griebler,&nbsp;Valerie Jean King,&nbsp;Declan Devane,&nbsp;Chris Kamel","doi":"10.1002/cesm.70063","DOIUrl":"https://doi.org/10.1002/cesm.70063","url":null,"abstract":"&lt;p&gt;Rapidly evolving artificial intelligence (AI) technologies are increasingly used to accelerate literature review processes. A recent review and evidence map identified almost 100 studies published since 2021assessing AI applications in evidence synthesis [&lt;span&gt;1&lt;/span&gt;]. These technologies span from machine-learning classifiers to generative large-language models (LLMs). Recently, a preprint reported that a tool powered by LLMs autonomously reproduced and updated 12 Cochrane reviews in just 2 days [&lt;span&gt;2&lt;/span&gt;], sparking debate about when and how AI can be used safely and effectively to support systematic and rapid reviews.&lt;/p&gt;&lt;p&gt;In this position statement, the Cochrane Rapid Reviews Methods Group outlines its stance on the use of AI in rapid reviews. Rapid reviews encompass various types of evidence synthesis, and while some AI tools have been developed for specific review types, such as qualitative evidence syntheses, most are designed for more general application across review methodologies.&lt;/p&gt;&lt;p&gt;The main recommendations are summarized in Textbox 1. They complement a recently released position statement by Cochrane and other evidence synthesis organizations on the use of AI in evidence synthesis [&lt;span&gt;3&lt;/span&gt;].&lt;/p&gt;&lt;p&gt;Semi-automation of discrete steps in the evidence synthesis process —where algorithms assist but do not replace human reviewers—is not new. Cochrane, for instance, was an early adopter with the development of the randomized controlled trial (RCT) Classifier, a machine learning tool that identifies RCTs during abstract screening [&lt;span&gt;4&lt;/span&gt;]. Semi-automation plays a different role in rapid reviews than in traditional systematic reviews, where methodological certainty is typically prioritized. Because rapid reviews already balance rigor and timeliness, teams may be more willing to adopt efficiency-enhancing tools sooner.&lt;/p&gt;&lt;p&gt;The advent of generative LLMs, such as ChatGPT [&lt;span&gt;5&lt;/span&gt;] or Gemini [&lt;span&gt;6&lt;/span&gt;], has substantially expanded the potential for AI to support tasks in evidence synthesis. Unlike earlier machine learning tools that required extensive task-specific training data, LLMs can be deployed in zero-shot settings—meaning they can be applied without prior training or fine-tuning to a given task. This dramatically lowers the barrier to entry, offering a more accessible pathway for integrating AI into review workflows. Multiple studies have assessed the utility of generative LLMs to support the development of search strategies [&lt;span&gt;7&lt;/span&gt;], literature screening [&lt;span&gt;8-10&lt;/span&gt;], risk of bias assessment [&lt;span&gt;11, 12&lt;/span&gt;], and data extraction [&lt;span&gt;8, 13-15&lt;/span&gt;]. However, findings to date indicate highly variable performance ranging from high accuracy in some tasks to concerning errors in others [&lt;span&gt;1&lt;/span&gt;]. 
In parallel, developers of literature review software have begun integrating LLMs into their products.&lt;/p&gt;&lt;p&gt;Importantly, in rapid reviews, AI has the potential not only to enha","PeriodicalId":100286,"journal":{"name":"Cochrane Evidence Synthesis and Methods","volume":"3 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cesm.70063","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145625659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Introducing a Series of Reviews Assessing Engagement in Evidence Syntheses
Pub Date : 2025-11-20 DOI: 10.1002/cesm.70057
Jennifer Petkovic, Joanne Khabsa, Lyubov Lytvyn, Alex Todhunter-Brown, Olivia Magwood, Pauline Campbell, Elie A. Akl, Thomas W. Concannon, Holger Schunemann, Vivian Welch, Peter Tugwell
High quality evidence syntheses are used in health decision-making, such as policies, legislation, and clinical recommendations [1]. The usefulness, relevance, meaningfulness, and accessibility of evidence syntheses may be improved when people who are affected by those decisions, called "interest-holders," are included in the evidence synthesis process [2-4]. This concept of engagement in research is based on the principle that those affected by the health condition under study, or by the intervention to address it, have a moral right to contribute to decisions about how the research is conducted [3, 5]. While there are increasing expectations from funders regarding the involvement of interest-holders [6], the most effective methods for engaging different interest-holders in evidence syntheses have not been identified [5]. Additionally, while there is some guidance related to engagement in research, it predominantly focuses on patient and public engagement in primary research, not evidence synthesis, and there is limited guidance for engaging with other interest-holders [3, 4, 7-9].

The aim of this paper is to introduce a series of articles about how to successfully engage different interest-holders when conducting evidence syntheses. The series will consider methods used to engage different interest-holders (including who to involve and in what way), barriers and facilitators to engagement, impacts of engagement, management of conflicts of interest, and factors relating to equity.

This paper presents the shared definitions used across each of the five reviews included in this series. These reviews will inform the development of a guidance checklist and resources for engaging interest-holders through all steps of evidence synthesis. The plan for developing this guidance is described in the project protocol [10].

"Interest-holders" are groups of people with legitimate interests in the health issue under consideration and whose perspectives and views should be considered when conducting this study [2]. Their interests arise, and draw their legitimacy, from the fact that these people are responsible for or affected by health- and healthcare-related decisions that can be informed by research evidence. Engagement of interest-holders in evidence syntheses can promote transparency, accountability, and trust, and help to ensure that the needs of interest-holders are included. Engagement can improve the translation of evidence into policy and practice [11]. Interest-holders can contribute throughout the steps of evidence synthesis including, for example, refining the research question and suggesting appropriate outcomes, suggesting additional references to consider, and providing context to interpret the evidence.

This study was conducted by the MuSE Consortium, a group of over 160 individuals from 20 countries with an interest in engagement in health research, evidence synthesis, and health guidelines.
This project complements a previous MuSE project, which developed guidance for engagement in the development of health guidelines and clinical practice recommendations. We used a standardized set of terms and definitions applied consistently across the reviews in this series. These definitions were developed and agreed through our related work on engagement in research [6, 13, 14] and in guidelines [15, 16], in collaboration with the MuSE Consortium (Petkovic et al., 2022) [2, 10, 12, 17-19], and in preparing the series of reviews introduced in this paper.

Evidence syntheses bring together research evidence to address healthcare-related questions. They use rigorous, explicit, and transparent methods and include scoping reviews, rapid reviews, and quantitative or qualitative systematic reviews. There are many different types of evidence synthesis, as shown in Table 1.

Interest-holders include the following 11 Ps: patients and caregivers, the public, providers of care, policymakers, program managers, payers of health research, payers of health services, peer review editors, and product makers. Table 2 provides definitions for each group. This taxonomy of interest-holder groups relates to evidence synthesis; other taxonomies are used for other types of research, such as biomedical, clinical, and environmental health research. A full explanation of the term "interest-holder" is provided in a commentary in this series [2].

Engagement refers to a two-way relationship between interest-holders and the research team. Other terms, such as "involvement" or "collaboration," may also be used, but for the purposes of our work we use "engagement" (Table 3).

This series includes six papers addressing different aspects of engaging interest-holders in evidence synthesis. The papers were co-authored by a team that includes representatives of the various interest-holder groups we identified, as well as members of the MuSE Consortium. Our first paper has been published and introduces the term "interest-holders" [2]. The remaining five papers are evidence syntheses describing different issues related to engagement.

The first synthesis is a scoping review identifying methods for engaging interest-holders in evidence syntheses. It updates a previous review [5] and describes methods used to engage interest-holders, including who was engaged, the goals of engagement, how they were engaged, and at which stages of the review process [38].

The second review is a mixed-methods evidence synthesis examining the factors that influence interest-holder engagement. Specifically, it aims to identify and synthesize the barriers to and facilitators of engaging interest-holders at all stages of the review cycle, using the Theoretical Domains Framework. This review also examines how contextual factors shape the nature and extent of engagement across different interest-holder groups.

The third review assesses the impacts of interest-holder engagement on evidence syntheses. In this review, the "impact" of engagement refers to "any change to the research process, research products, the people involved, or wider society that results from engagement in the evidence synthesis."

The fourth review describes issues related to conflicts of interest in engagement in evidence syntheses. It identifies the types of conflicts of interest among different interest-holders, how these conflicts are managed, and their impact on the evidence synthesis process.

Finally, the last review in this series aims to identify and describe equity considerations for engaging interest-holders in evidence synthesis. Equitable engagement focuses on the intentional inclusion of diverse individuals and groups.

These reviews will inform the development of draft guidance for engaging interest-holders throughout the evidence synthesis process. We will explore agreement with this draft guidance through interviews with interest-holders and an international survey, and we will use a consensus process to finalize a checklist for engaging the 11 identified interest-holder groups at each stage of the evidence synthesis process. We welcome those interested in participating in the subsequent phases of this ongoing project.

Jennifer Petkovic: conceptualization, writing - original draft, project administration, writing - review and editing, funding acquisition. Joanne Khabsa: conceptualization, writing - original draft, writing - review and editing, funding acquisition. Lyubov Lytvyn: conceptualization, writing - original draft, writing - review and editing. Alex Todhunter-Brown: conceptualization, writing - original draft, writing - review and editing, funding acquisition. Olivia Magwood: conceptualization, writing - review and editing, writing - original draft, funding acquisition. Pauline Campbell: conceptualization, writing - review and editing. Elie A. Akl: conceptualization, writing - review and editing, funding acquisition. Thomas W. Concannon: conceptualization, writing - review and editing, funding acquisition. Holger Schunemann: conceptualization, writing - review and editing, funding acquisition. Vivian Welch: conceptualization, writing - review and editing, funding acquisition. Peter Tugwell: conceptualization, supervision, writing - review and editing, funding acquisition.

Cochrane Evidence Synthesis and Methods, as a Cochrane Collaboration journal, adheres to Cochrane's conflict of interest policy for Cochrane Library content (2020), which applies to all journal content. For Cochrane Evidence Synthesis and Methods, this policy not only requires early declaration of research funding and author interests but also stipulates that some funding sources and conflicts of interest preclude authorship of a submission. The authors declare no conflicts of interest. Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study. The peer review history for this article is available at https://www.webofscience.com/api/gateway/wos/peer-review/10.1002/cesm.70057.
{"title":"Introducing a Series of Reviews Assessing Engagement in Evidence Syntheses","authors":"Jennifer Petkovic,&nbsp;Joanne Khabsa,&nbsp;Lyubov Lytvyn,&nbsp;Alex Todhunter-Brown,&nbsp;Olivia Magwood,&nbsp;Pauline Campbell,&nbsp;Elie A. Akl,&nbsp;Thomas W. Concannon,&nbsp;Holger Schunemann,&nbsp;Vivian Welch,&nbsp;Peter Tugwell","doi":"10.1002/cesm.70057","DOIUrl":"https://doi.org/10.1002/cesm.70057","url":null,"abstract":"&lt;p&gt;High quality evidence syntheses are used in health decision-making, such as policies, legislation, and clinical recommendations [&lt;span&gt;1&lt;/span&gt;]. The usefulness, relevance, meaningfulness, and accessibility of evidence syntheses may be improved when people who are affected by those decisions, called “interest-holders,” are included in the evidence synthesis process [&lt;span&gt;2-4&lt;/span&gt;]. This concept of engagement in research is based on the principle that those affected by the health condition under study or the intervention to address it have a moral right to contribute to the decisions about how the research is conducted [&lt;span&gt;3, 5&lt;/span&gt;]. While there are increasing expectations from funders regarding the involvement of interest-holders [&lt;span&gt;6&lt;/span&gt;], the most effective methods for engaging different interest-holders in evidence syntheses have not been identified [&lt;span&gt;5&lt;/span&gt;]. Additionally, while there is some guidance related to engagement in research, it predominantly focuses on patient and public engagement in primary research, not evidence synthesis and there is limited guidance for engaging with other interest-holders [&lt;span&gt;3, 4, 7-9&lt;/span&gt;].&lt;/p&gt;&lt;p&gt;The aim of this paper is to introduce a series of articles about how to successfully engage different interest-holders when conducting evidence syntheses. The series of articles will consider methods used to engage different interest-holders (including who to involve and in what way), barriers and facilitators to engagement, impacts of engagement, management of conflicts of interest, and factors relating to equity.&lt;/p&gt;&lt;p&gt;This paper presents the shared definitions used across each of the five reviews included in this series. These reviews will inform the development of a guidance checklist and resources for engaging interest-holders through all steps of evidence synthesis. The plan for developing this guidance is described in the project protocol [&lt;span&gt;10&lt;/span&gt;].&lt;/p&gt;&lt;p&gt;“Interest-holders” are groups of people with legitimate interests in the health issue under consideration and whose perspectives and views should be considered when conducting this study [&lt;span&gt;2&lt;/span&gt;]. Their interests arise and draw their legitimacy from the fact that these people are responsible for or affected by health- and healthcare-related decisions that can be informed by research evidence. Engagement of interest-holders in evidence syntheses can promote transparency, accountability, trust, and help to ensure that the needs of interest-holders are included. Engagement can improve the translation of evidence into policy and practice [&lt;span&gt;11&lt;/span&gt;]. 
Interest-holders can contribute throughout the steps of evidence synthesis including, for example, refining the research question and suggesting appropriate outcomes, suggesting additional references to consider, and providing context to interpret the evidence.&lt;/p&gt;&lt;p&gt;This study was conducted by the MuSE Consortium, a group of over 160 individuals from 20 countrie","PeriodicalId":100286,"journal":{"name":"Cochrane Evidence Synthesis and Methods","volume":"3 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cesm.70057","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145581023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Assessing the Feasibility and Acceptability of a Bespoke Large Language Model Pipeline to Extract Data From Different Study Designs for Public Health Evidence Reviews
Pub Date : 2025-11-04 DOI: 10.1002/cesm.70061
Zalaya Simmons, Beti Evans, Tamsyn Harris, Harry Woolnough, Lauren Dunn, Jonathon Fuller, Kerry Cella, Daphne Duval

Introduction

Data extraction is a critical but resource-intensive step of the evidence review process. Whilst there is evidence that artificial intelligence (AI) and large language models (LLMs) can improve the efficiency of data extraction from randomized controlled trials, their potential for other study designs is unclear. In this context, this study aimed to evaluate the performance of a bespoke LLM pipeline (a Retrieval-Augmented Generation pipeline utilizing LLaMa 3-70B) in automating data extraction from a range of study designs, assessing the accuracy and reliability of the extractions as measured by error types and acceptability.

Methods

Accuracy was assessed by retrospectively comparing the LLM extractions against human extractions from a review previously conducted by the authors. A total of 173 data fields from 24 articles (including experimental, observational, qualitative, and modeling studies) were assessed, of which three were used for prompt engineering. Reliability was assessed by calculating the mean maximum agreement rate (the highest proportion of identical returns from 10 consecutive extractions) for 116 data fields from 16 of the 24 studies. An evaluation framework was developed to assess the accuracy and reliability of LLM outputs, measured as error types and acceptability; acceptability was judged on whether the output would be usable in a real-world setting in which the model acted as one reviewer and a human as a second reviewer.
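As a rough sketch of the reliability metric just defined (not the authors' implementation), the snippet below computes a per-field maximum agreement rate over repeated extractions and averages it across fields. The field names, answers, and the string normalization are invented for illustration.

```python
# Illustrative computation of the "mean maximum agreement rate": for each
# data field, extract n times, take the highest proportion of identical
# answers, then average across fields. All values below are made up.
from collections import Counter

def max_agreement_rate(answers):
    """Highest proportion of identical returns among repeated extractions."""
    counts = Counter(a.strip().lower() for a in answers)
    return counts.most_common(1)[0][1] / len(answers)

runs_per_field = {
    "setting": ["primary care"] * 9 + ["community"],             # rate 0.9
    "study design": ["cohort"] * 10,                             # rate 1.0
    "outcome": ["mortality"] * 5 + ["all-cause mortality"] * 5,  # rate 0.5
}

rates = {field: max_agreement_rate(ans) for field, ans in runs_per_field.items()}
print(rates, "mean:", sum(rates.values()) / len(rates))
```

Near-synonymous answers ("mortality" vs. "all-cause mortality") depress the rate unless normalization merges them, which is one plausible reason agreement varies so much across fields.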

Results

Of the 173 data fields evaluated for accuracy, 68% were rated by human reviewers as acceptable (consistent with what is deemed to be acceptable data extraction from a human reviewer). However, acceptability ratings varied depending on the data field extracted (33% to 100%), with at least 90% acceptability for “objective,” “setting,” and “study design,” but 54% or less for data fields such as “outcome” and “time period.” For reliability, the mean maximum agreement rate was 0.71 (SD: 0.28), with variation across different data fields.

Conclusion

This evaluation demonstrates the potential for LLMs, when paired with human quality assurance, to support data extraction in evidence reviews that include a range of study designs. However, further improvements in performance and validation are required before the model can be introduced into review workflows.

{"title":"Assessing the Feasibility and Acceptability of a Bespoke Large Language Model Pipeline to Extract Data From Different Study Designs for Public Health Evidence Reviews","authors":"Zalaya Simmons,&nbsp;Beti Evans,&nbsp;Tamsyn Harris,&nbsp;Harry Woolnough,&nbsp;Lauren Dunn,&nbsp;Jonathon Fuller,&nbsp;Kerry Cella,&nbsp;Daphne Duval","doi":"10.1002/cesm.70061","DOIUrl":"10.1002/cesm.70061","url":null,"abstract":"<div>\u0000 \u0000 \u0000 <section>\u0000 \u0000 <h3> Introduction</h3>\u0000 \u0000 <p>Data extraction is a critical but resource-intensive step of the evidence review process. Whilst there is evidence that artificial intelligence (AI) and large language models (LLMs) can improve the efficiency of data extraction from randomized controlled trials, their potential for other study designs is unclear. In this context, this study aimed to evaluate the performance of a bespoke LLM model pipeline (Retrieval-Augmented Generation pipeline utilizing LLaMa 3-70B) to automate data extraction from a range of study designs by assessing the accuracy and reliability of the extractions measured as error types and acceptability.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Methods</h3>\u0000 \u0000 <p>Accuracy was assessed by retrospectively comparing the LLM extractions against human extractions from a review previously conducted by the authors. A total of 173 data fields from 24 articles (including experimental, observational, qualitative, and modeling studies) were assessed, of which three were used for prompt engineering. Reliability was assessed by calculating the mean maximum agreement rate (the highest proportion of identical returns from 10 consecutive extractions) for 116 data fields from 16 of the 24 studies. An evaluation framework was developed to assess the accuracy and reliability of LLM outputs measured as error types and acceptability (acceptability was assessed on whether it would be usable in real-world settings if the model acted as one reviewer and a human as a second reviewer).</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Results</h3>\u0000 \u0000 <p>Of the 173 data fields evaluated for accuracy, 68% were rated by human reviewers as acceptable (consistent with what is deemed to be acceptable data extraction from a human reviewer). However, acceptability ratings varied depending on the data field extracted (33% to 100%), with at least 90% acceptability for “objective,” “setting,” and “study design,” but 54% or less for data fields such as “outcome” and “time period.” For reliability, the mean maximum agreement rate was 0.71 (SD: 0.28), with variation across different data fields.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Conclusion</h3>\u0000 \u0000 <p>This evaluation demonstrates the potential for LLMs, when paired with human quality assurance, to support data extraction in evidence reviews that include a range of study designs. 
However, further improvements in performance and validation are required before the model can be introduced into review workflows.</p>\u0000 </section>\u0000 </div>","PeriodicalId":100286,"journal":{"name":"Cochrane Evidence Synthesis and Methods","volume":"3 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12584109/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145454519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Health Equity in Systematic Reviews: A Tutorial—Part 1 Getting Started With Health Equity in Your Review
Pub Date : 2025-10-30 DOI: 10.1002/cesm.70055
Jennifer Petkovic, Jordi Pardo Pardo, Vivian Welch, Omar Dewidar, Lara J. Maxwell, Andrea Darzi, Tamara Lotfi, Lawrence Mbuagbaw, Kevin Pottie, Peter Tugwell

This tutorial focuses on how to get started with considering health equity in systematic reviews of interventions. We will explain why health equity should be considered, how to frame your question, and which interest-holders to engage. This is the first tutorial in a series on health equity. The second tutorial focuses on implementing health equity methods in your review.

{"title":"Health Equity in Systematic Reviews: A Tutorial—Part 1 Getting Started With Health Equity in Your Review","authors":"Jennifer Petkovic,&nbsp;Jordi Pardo Pardo,&nbsp;Vivian Welch,&nbsp;Omar Dewidar,&nbsp;Lara J. Maxwell,&nbsp;Andrea Darzi,&nbsp;Tamara Lotfi,&nbsp;Lawrence Mbuagbaw,&nbsp;Kevin Pottie,&nbsp;Peter Tugwell","doi":"10.1002/cesm.70055","DOIUrl":"https://doi.org/10.1002/cesm.70055","url":null,"abstract":"<p>This tutorial focuses on how to get started with considering health equity in systematic reviews of interventions. We will explain why health equity should be considered, how to frame your question, and which interest-holders to engage. This is the first tutorial in a series on health equity. The second tutorial focuses on implementing health equity methods in your review.</p>","PeriodicalId":100286,"journal":{"name":"Cochrane Evidence Synthesis and Methods","volume":"3 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cesm.70055","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145406969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Harnessing Large-Language Models for Efficient Data Extraction in Systematic Reviews: The Role of Prompt Engineering
Pub Date : 2025-10-27 DOI: 10.1002/cesm.70058
Molly Murton, Ellie Boulton, Shona Cross, Ambar Khan, Swati Kumar, Giuseppina Magri, Charlotte Marris, David Slater, Emma Worthington, Elizabeth Lunn

Introduction

Systematic literature reviews (SLRs) of randomized clinical trials (RCTs) underpin evidence-based medicine but can be limited by the intensive resource demands of data extraction. Recent advances in accessible large-language models (LLMs) hold promise for automating this step; however, testing across different outcomes and disease areas remains limited.

Methods

This study developed prompt engineering strategies for GPT-4o to extract data from RCTs across three disease areas: non-small cell lung cancer, endometrial cancer and hypertrophic cardiomyopathy. Prompts were iteratively refined during the development phase, then tested on unseen data. Performance was evaluated via comparison to human extraction of the same data, using F1 scores, precision, recall and percentage accuracy.
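As context for the metrics named above, here is a minimal sketch of how F1, precision, and recall can be computed for one extracted field against a human gold standard. Treating values as normalized string sets is an assumption for illustration; the study's actual matching rules may differ.

```python
# Compare LLM-extracted values to human "gold" values for one data field.
# Exact matching on lowercased strings is a simplification for illustration.
def precision_recall_f1(llm_values, human_values):
    llm = {v.strip().lower() for v in llm_values}
    gold = {v.strip().lower() for v in human_values}
    true_positives = len(llm & gold)
    precision = true_positives / len(llm) if llm else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

print(precision_recall_f1(["Grade 3 anaemia", "nausea"],
                          ["grade 3 anaemia", "fatigue"]))
# -> (0.5, 0.5, 0.5): one of two extracted values matches the gold set
```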

Results

The LLM was highly effective for extracting study and baseline characteristics, often equaling human performance, with test F1 scores exceeding 0.85. Complex efficacy and adverse event data proved more challenging, with test F1 scores ranging from 0.22 to 0.50. Transferability of prompts across disease areas was promising but varied, highlighting the need for disease-specific refinement.

Conclusion

Our findings demonstrate the potential of LLMs, guided by rigorous prompt engineering, to augment the SLR process. However, human oversight remains essential, particularly for complex and nuanced data. As these technologies evolve, continued validation of AI tools will be necessary to ensure accuracy and reliability and to safeguard the quality of evidence synthesis.

{"title":"Harnessing Large-Language Models for Efficient Data Extraction in Systematic Reviews: The Role of Prompt Engineering","authors":"Molly Murton,&nbsp;Ellie Boulton,&nbsp;Shona Cross,&nbsp;Ambar Khan,&nbsp;Swati Kumar,&nbsp;Giuseppina Magri,&nbsp;Charlotte Marris,&nbsp;David Slater,&nbsp;Emma Worthington,&nbsp;Elizabeth Lunn","doi":"10.1002/cesm.70058","DOIUrl":"10.1002/cesm.70058","url":null,"abstract":"<div>\u0000 \u0000 \u0000 <section>\u0000 \u0000 <h3> Introduction</h3>\u0000 \u0000 <p>Systematic literature reviews (SLRs) of randomized clinical trials (RCTs) underpin evidence-based medicine but can be limited by the intensive resource demands of data extraction. Recent advances in accessible large-language models (LLMs) hold promise for automating this step, however testing is limited across different outcomes and disease areas.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Methods</h3>\u0000 \u0000 <p>This study developed prompt engineering strategies for GPT-4o to extract data from RCTs across three disease areas: non-small cell lung cancer, endometrial cancer and hypertrophic cardiomyopathy. Prompts were iteratively refined during the development phase, then tested on unseen data. Performance was evaluated via comparison to human extraction of the same data, using F1 scores, precision, recall and percentage accuracy.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Results</h3>\u0000 \u0000 <p>The LLM was highly effective for extracting study and baseline characteristics, often equaling human performance, with test F1 scores exceeding 0.85. Complex efficacy and adverse event data proved more challenging, with test F1 scores ranging from 0.22 to 0.50. Transferability of prompts across disease areas was promising but varied, highlighting the need for disease-specific refinement.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Conclusion</h3>\u0000 \u0000 <p>Our findings demonstrate the potential of LLMs, guided by rigorous prompt engineering, to augment the SLR process. However, human oversight remains essential, particularly for complex and nuanced data. As these technologies evolve, continued validation of AI tools will be necessary to ensure accuracy and reliability, and safeguarding of the quality of evidence synthesis.</p>\u0000 </section>\u0000 </div>","PeriodicalId":100286,"journal":{"name":"Cochrane Evidence Synthesis and Methods","volume":"3 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12559671/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145403637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Human-in-the-Loop Artificial Intelligence System for Systematic Literature Review: Methods and Validations for the AutoLit Review Software
Pub Date : 2025-10-25 DOI: 10.1002/cesm.70059
Kevin M. Kallmes, Jade Thurnham, Marius Sauca, Ranita Tarchand, Keith R. Kallmes, Karl J. Holub

Introduction

While artificial intelligence (AI) tools have been utilized for individual stages within the systematic literature review (SLR) process, no tool has previously been shown to support each critical SLR step. In addition, the need for expert oversight has been recognized to ensure the quality of SLR findings. Here, we describe a complete methodology for utilizing our AI SLR tool with human-in-the-loop curation workflows, as well as AI validations, time savings, and approaches to ensure compliance with best review practices.

Methods

SLRs require completing Search, Screening, and Extraction from relevant studies, with meta-analysis and critical appraisal where relevant. We present a full methodological framework for completing SLRs utilizing our AutoLit software (Nested Knowledge). This system integrates AI models into the central steps of an SLR: Search strategy generation, Dual Screening of Titles/Abstracts and Full Texts, and Extraction of qualitative and quantitative evidence. The system also offers manual Critical Appraisal and Insight drafting and fully-automated Network Meta-analysis. Validations comparing AI performance to experts are reported, and where relevant, time savings and 'rapid review' alternatives to the SLR workflow.
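To illustrate the first of these steps, the sketch below shows the kind of transformation a search-strategy generator performs: synonym lists for each PICO concept are OR-ed together and the concept groups are AND-ed. The terms are invented, and a production tool such as the one described would also handle database-specific field tags and controlled vocabulary; this is not Nested Knowledge's actual code.

```python
# Toy Boolean-query builder: OR synonyms within each PICO concept,
# AND the concept groups together. Multi-word phrases are quoted.
def boolean_query(concept_groups):
    groups = []
    for synonyms in concept_groups:
        terms = " OR ".join(f'"{s}"' if " " in s else s for s in synonyms)
        groups.append(f"({terms})")
    return " AND ".join(groups)

pico = [
    ["stroke", "cerebrovascular accident"],  # Population
    ["thrombectomy", "clot retrieval"],      # Intervention
    ["mortality", "functional outcome"],     # Outcomes
]
print(boolean_query(pico))
# (stroke OR "cerebrovascular accident") AND (thrombectomy OR "clot retrieval")
#   AND (mortality OR "functional outcome")
```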

Results

Search strategy generation with the Smart Search AI can turn a Research Question into full Boolean strings with 76.8% and 79.6% Recall in two validation sets. Supervised machine learning tools can achieve 82–97% Recall in reviewer-level Screening. Population, Interventions/Comparators, and Outcomes (PICOs) extraction achieved an F1 of 0.74; accuracies for study type, location, and size were 74%, 78%, and 91%, respectively. Time savings of 50% in Abstract Screening and 70–80% in qualitative extraction were reported. Extraction of user-specified qualitative and quantitative tags and data elements remains exploratory and requires human curation for SLRs.

Conclusion

AI systems can support high-quality, human-in-the-loop execution of key SLR stages. Transparency, replicability, and expert oversight are central to the use of AI SLR tools.

{"title":"Human-in-the-Loop Artificial Intelligence System for Systematic Literature Review: Methods and Validations for the AutoLit Review Software","authors":"Kevin M. Kallmes,&nbsp;Jade Thurnham,&nbsp;Marius Sauca,&nbsp;Ranita Tarchand,&nbsp;Keith R. Kallmes,&nbsp;Karl J. Holub","doi":"10.1002/cesm.70059","DOIUrl":"https://doi.org/10.1002/cesm.70059","url":null,"abstract":"<div>\u0000 \u0000 \u0000 <section>\u0000 \u0000 <h3> Introduction</h3>\u0000 \u0000 <p>While artificial intelligence (AI) tools have been utilized for individual stages within the systematic literature review (SLR) process, no tool has previously been shown to support each critical SLR step. In addition, the need for expert oversight has been recognized to ensure the quality of SLR findings. Here, we describe a complete methodology for utilizing our AI SLR tool with human-in-the-loop curation workflows, as well as AI validations, time savings, and approaches to ensure compliance with best review practices.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Methods</h3>\u0000 \u0000 <p>SLRs require completing Search, Screening, and Extraction from relevant studies, with meta-analysis and critical appraisal as relevant. We present a full methodological framework for completing SLRs utilizing our AutoLit software (Nested Knowledge). This system integrates AI models into the central steps in SLR: Search strategy generation, Dual Screening of Titles/Abstracts and Full Texts, and Extraction of qualitative and quantitative evidence. The system also offers manual Critical Appraisal and Insight drafting and fully-automated Network Meta-analysis. Validations comparing AI performance to experts are reported, and where relevant, time savings and ‘rapid review’ alternatives to the SLR workflow.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Results</h3>\u0000 \u0000 <p>Search strategy generation with the Smart Search AI can turn a Research Question into full Boolean strings with 76.8% and 79.6% Recall in two validation sets. Supervised machine learning tools can achieve 82–97% Recall in reviewer-level Screening. Population, Interventions/Comparators, and Outcomes (PICOs) extraction achieved F1 of 0.74; accuracy for study type, location, and size were 74%, 78%, and 91%, respectively. Time savings of 50% in Abstract Screening and 70–80% in qualitative extraction were reported. Extraction of user-specified qualitative and quantitative tags and data elements remains exploratory and requires human curation for SLRs.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Conclusion</h3>\u0000 \u0000 <p>AI systems can support high-quality, human-in-the-loop execution of key SLR stages. Transparency, replicability, and expert oversight are central to the use of AI SLR tools.</p>\u0000 </section>\u0000 </div>","PeriodicalId":100286,"journal":{"name":"Cochrane Evidence Synthesis and Methods","volume":"3 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cesm.70059","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145367149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing Evidence Synthesis Efficiency: Leveraging Large Language Models and Agentic Workflows for Optimized Literature Screening
Pub Date : 2025-10-21 DOI: 10.1002/cesm.70042
Bing Hu, Emmalie Tomini, Tricia Corrin, Kusala Pussegoda, Elias Sandner, Andre Henriques, Alice Simniceanu, Luca Fontana, Andreas Wagner, Stephanie Brazeau, Lisa Waddell

Background

Public health events of international concern highlight the need for up-to-date evidence curated using sustainable processes that are accessible. In development of the Global Repository of Epidemiological Parameters (grEPI) we explore the performance of an agentic-AI assisted pipeline (GREP-Agent) for screening evidence which capitalizes on recent advancements in large language models (LLMs).

Methods

In this study, the performance of the GREP-Agent was evaluated on a data set of 2000 citations from a systematic review on measles using four LLMs (GPT4o, GPT4o-mini, Llama3.1, and Phi4). The GREP-Agent framework integrates multiple LLMs and human feedback to fine-tune its performance, optimize workload reduction and accuracy in screening research articles. The impact on performance of each part of this Agentic-AI system is presented and measured by accuracy, precision, recall, and F1-score metrics.
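To make the Methods concrete, here is a minimal sketch of the general human-in-the-loop pattern described: low-confidence LLM screening decisions are routed to a human reviewer, and the corrections are retained as a fine-tuning signal. The `ask_llm` stub, the `Decision` type, and the confidence threshold are all hypothetical stand-ins, not the GREP-Agent implementation.

```python
# Minimal human-in-the-loop screening sketch (illustrative only).
# ask_llm() is a hypothetical stand-in for a call to any LLM backend
# (GPT-4o, Llama 3.1, Phi-4, ...); it is NOT the published GREP-Agent API.

from dataclasses import dataclass

@dataclass
class Decision:
    include: bool
    confidence: float  # model-reported confidence in [0, 1]

def ask_llm(question: str, abstract: str) -> Decision:
    # Placeholder heuristic standing in for a real LLM call:
    # "include" if any keyword from the screening question appears.
    hit = any(w.lower() in abstract.lower() for w in question.split())
    return Decision(include=hit, confidence=0.9 if hit else 0.5)

def screen(citations, question, human_review, conf_threshold=0.8):
    """Route low-confidence LLM decisions to a human reviewer and
    keep human corrections as feedback for later fine-tuning."""
    decisions, feedback = [], []
    for cit in citations:
        d = ask_llm(question, cit["abstract"])
        if d.confidence < conf_threshold:
            verdict = human_review(cit)           # human makes the call
            if verdict != d.include:
                feedback.append((cit, verdict))   # fine-tuning signal
            decisions.append((cit["id"], verdict))
        else:
            decisions.append((cit["id"], d.include))
    return decisions, feedback

if __name__ == "__main__":
    cits = [{"id": 1, "abstract": "Measles transmission parameters in outbreaks"},
            {"id": 2, "abstract": "A cardiology trial of stent placement"}]
    decisions, fb = screen(cits, "measles epidemiological parameters",
                           human_review=lambda c: False)
    print(decisions, fb)  # [(1, True), (2, False)] []
```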

Results

The results show how each phase of the GREP-Agent system incrementally improves accuracy regardless of the LLM. We found that GREP-Agent was able to increase sensitivity across a broad range of open source and proprietary LLMs to 84.2%–88.9% after fine-tuning and to 86.4%–95.3% by varying workload reduction strategies. Performance was significantly impacted by the clarity of the screening questions and setting thresholds for optimized workload reduction strategies.
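The Results stress that threshold settings drive the recall/workload trade-off. One common way to set such a threshold, assuming a labeled calibration set and model relevance scores (both hypothetical here, not the GREP-Agent data), is to pick the highest score cut-off that still meets a target recall:

```python
# Sketch: choosing a score threshold that preserves a target recall on a
# labeled calibration set, then measuring workload reduction (the fraction
# of citations auto-excluded). Scores and labels here are hypothetical.

def pick_threshold(scores, labels, target_recall=0.95):
    """Highest threshold t such that recall of {score >= t} meets the
    target; a higher t excludes more citations (more workload reduction)."""
    total_relevant = sum(labels)
    for t in sorted(set(scores), reverse=True):  # try high to low
        kept = sum(l for s, l in zip(scores, labels) if s >= t)
        if total_relevant and kept / total_relevant >= target_recall:
            return t
    return min(scores)  # fallback: keep everything

if __name__ == "__main__":
    scores = [0.95, 0.90, 0.80, 0.60, 0.40, 0.30, 0.20, 0.10]
    labels = [1,    1,    1,    0,    1,    0,    0,    0]  # 1 = relevant
    t = pick_threshold(scores, labels, target_recall=0.75)
    excluded = sum(1 for s in scores if s < t)
    print(f"threshold={t}, workload reduction={excluded / len(scores):.0%}")
    # threshold=0.8, workload reduction=62%
```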

Conclusions

The GREP-Agent shows promise in improving the efficiency and effectiveness of evidence synthesis in dynamic public health contexts. Further development and refinement of adaptable human-in-the-loop AI systems for screening literature are essential to support future public health response activities, while maintaining a human-centric approach.

{"title":"Enhancing Evidence Synthesis Efficiency: Leveraging Large Language Models and Agentic Workflows for Optimized Literature Screening","authors":"Bing Hu,&nbsp;Emmalie Tomini,&nbsp;Tricia Corrin,&nbsp;Kusala Pussegoda,&nbsp;Elias Sandner,&nbsp;Andre Henriques,&nbsp;Alice Simniceanu,&nbsp;Luca Fontana,&nbsp;Andreas Wagner,&nbsp;Stephanie Brazeau,&nbsp;Lisa Waddell","doi":"10.1002/cesm.70042","DOIUrl":"10.1002/cesm.70042","url":null,"abstract":"<div>\u0000 \u0000 \u0000 <section>\u0000 \u0000 <h3> Background</h3>\u0000 \u0000 <p>Public health events of international concern highlight the need for up-to-date evidence curated using sustainable processes that are accessible. In development of the Global Repository of Epidemiological Parameters (grEPI) we explore the performance of an agentic-AI assisted pipeline (GREP-Agent) for screening evidence which capitalizes on recent advancements in large language models (LLMs).</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Methods</h3>\u0000 \u0000 <p>In this study, the performance of the GREP-Agent was evaluated on a data set of 2000 citations from a systematic review on measles using four LLMs (GPT4o, GPT4o-mini, Llama3.1, and Phi4). The GREP-Agent framework integrates multiple LLMs and human feedback to fine-tune its performance, optimize workload reduction and accuracy in screening research articles. The impact on performance of each part of this Agentic-AI system is presented and measured by accuracy, precision, recall, and F1-score metrics.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Results</h3>\u0000 \u0000 <p>The results show how each phase of the GREP-Agent system incrementally improves accuracy regardless of the LLM. We found that GREP-Agent was able to increase sensitivity across a broad range of open source and proprietary LLMs to 84.2%–88.9% after fine-tuning and to 86.4%–95.3% by varying workload reduction strategies. Performance was significantly impacted by the clarity of the screening questions and setting thresholds for optimized workload reduction strategies.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Conclusions</h3>\u0000 \u0000 <p>The GREP-Agent shows promise in improving the efficiency and effectiveness of evidence synthesis in dynamic public health contexts. Further development and refinement of adaptable human-in-the-loop AI systems for screening literature are essential to support future public health response activities, while maintaining a human-centric approach.</p>\u0000 </section>\u0000 </div>","PeriodicalId":100286,"journal":{"name":"Cochrane Evidence Synthesis and Methods","volume":"3 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12538819/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145351061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0