
Latest articles from the Journal of Clinical Epidemiology

Including nonrandomized evidence in living systematic reviews: lessons learned from the COVID-NMA initiative
IF 5.2 CAS Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date: 2025-11-21 DOI: 10.1016/j.jclinepi.2025.112071
Hillary Bonnet , Julian P.T. Higgins , Anna Chaimani , Theodoros Evrenoglou , Lina Ghosn , Carolina Graña , Elodie Perrodeau , Sally Yaacoub , Gabriel Rada , Hanna Bergman , Brian Buckley , Elise Cogo , Gemma Villanueva , Nicholas Henschke , Rouba Assi , Carolina Riveros , Rosie Cornish , Francesca Spiga , Silvia Minozzi , David Tovey , Isabelle Boutron
<div><h3>Background and Objectives</h3><div>Randomized controlled trials (RCTs) are more likely to be included in evidence syntheses of health interventions due to their methodological rigor. However, the integration of nonrandomized studies (NRSs) may be necessary, as was seen during the COVID-19 pandemic due to the emergence of variants of concern. We aimed to examine the body of evidence, randomized and nonrandomized, on COVID-19 vaccine effectiveness (VE) during the emergence of the Delta variant and to share lessons learned from including nonrandomized evidence alongside randomized evidence in the COVID-NMA living systematic review.</div></div><div><h3>Study Design and Setting</h3><div>The COVID-NMA initiative is an international, living systematic review and meta-analysis that continually synthesized evidence on COVID-19 interventions. For this study, we identified all RCTs and comparative NRSs reporting on VE against the Delta variant from December 2020 (its initial detection) through November 2021 (date of the last COVID-NMA NRS search). We conducted two parallel systematic reviews, one focusing on RCTs and the other on NRSs, to compare available evidence on VE against the Delta variant. We also compared the publication timelines of the included studies with the global prevalence of the Delta variant, and documented the specific methodological challenges and solutions when including NRSs in living systematic reviews.</div></div><div><h3>Results</h3><div>From December 2020 to November 2021, only one RCT reported vaccine efficacy against Delta in a subgroup of 6325 participants, while, during the same period, 52 NRSs including 68,010,961 participants reported VE against this variant. Nevertheless, including NRSs in our living systematic review posed several challenges. We faced difficulties in identifying eligible studies, encountered overlapping studies (ie, NRSs using the same database), and found inconsistent definitions of Delta variant cases.
Moreover, multiple analyses and metrics for the same outcome were reported without a pre-specified primary analysis in a registry or protocol. In addition, assessing the risk of bias required expertise, standardization, and training.</div></div><div><h3>Conclusion</h3><div>To remain responsive during public health emergencies, living systematic reviews should implement processes that enable the timely identification, evaluation, and integration of both randomized and nonrandomized evidence where appropriate.</div></div><div><h3>Plain Language Summary</h3><div>When new health treatments are tested, the best way to see how well they work is through randomized controlled trials (RCTs). These are carefully designed studies that help reduce bias. However, during the COVID-19 pandemic, scientists also had to rely on other types of studies called nonrandomized studies (NRS) based on real-world data because the virus was changing quickly and required urgent action. Our living systematic review examined how effective
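The VE figures synthesized above come down to a comparison of risks between vaccinated and unvaccinated groups. As a reminder of the arithmetic, here is a minimal sketch; the function name and all counts are hypothetical illustrations, not data from the review:

```python
# Minimal sketch: vaccine effectiveness (VE) from a risk ratio.
# All counts below are hypothetical, not data from the review.
def vaccine_effectiveness(cases_vax: int, n_vax: int,
                          cases_unvax: int, n_unvax: int) -> float:
    """VE (%) = (1 - risk ratio) * 100."""
    risk_vax = cases_vax / n_vax
    risk_unvax = cases_unvax / n_unvax
    return (1 - risk_vax / risk_unvax) * 100

# 10 cases among 10,000 vaccinated vs 50 among 10,000 unvaccinated:
print(round(vaccine_effectiveness(10, 10_000, 50, 10_000), 1))  # → 80.0
```

Comparative NRSs typically adjust this crude contrast (eg, via Cox or logistic regression), so the published VE values pooled in the review are adjusted estimates rather than this raw ratio.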
Journal of Clinical Epidemiology, Volume 190, Article 112071.
Citations: 0
Approaches for reporting and interpreting statistically nonsignificant findings in evidence syntheses: a systematic review
IF 5.2 CAS Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date: 2025-11-21 DOI: 10.1016/j.jclinepi.2025.112083
Amin Sharifan , Andreea Dobrescu , Curtis Harrod , Irma Klerings , Ariel Yuhan Ong , Etienne Ngeh , Yu-Tian Xiao , Gerald Gartlehner
<div><h3>Objectives</h3><div>To systematically review approaches for reporting and interpreting statistically nonsignificant findings with clinical relevance in evidence synthesis and to assess their methodological quality and the extent of their empirical validation.</div></div><div><h3>Study Design and Setting</h3><div>We searched Ovid MEDLINE ALL, Scopus, PsycInfo, Library of Guidance for Health Scientists, and MathSciNet for published studies in English from January 1, 2000, to January 30, 2025, for (1) best practices in guidance documents for evidence synthesis when interpreting clinically relevant nonsignificant findings, (2) statistical methods to support the interpretation, and (3) reporting practices. To identify relevant reporting guidelines, we also searched the Enhancing the QUAlity and Transparency Of health Research Network. The quality assessment applied the Mixed Methods Appraisal Tool, Appraisal tool for Cross-Sectional Studies, and checklists for expert opinion and systematic reviews from the Joanna Briggs Institute. At least two reviewers independently conducted all procedures, and a large language model facilitated data extraction and quality appraisal.</div></div><div><h3>Results</h3><div>Of the 5332 records, 37 were eligible for inclusion. Of these, 15 were editorials or opinion pieces, nine addressed methods, eight were cross-sectional or mixed-methods studies, four were journal guidance documents, and one was a systematic review. Twenty-seven records met the quality criteria of the appraisal tool relevant to their study design or publication type, while 10 records, comprising one systematic review, two editorials or opinion pieces, and seven cross-sectional studies, did not. 
Relevant methodological approaches to evidence synthesis included utilization of uncertainty intervals and their integration with various statistical measures (15 of 37, 41%), Bayes factors (six of 37, 16%), likelihood ratios (three of 37, 8%), effect conversion measures (two of 37, 5%), equivalence testing (two of 37, 5%), modified Fisher's test (one of 37, 3%), and reverse fragility index (one of 37, 3%). Reporting practices included problematic “null acceptance” language (14 of 37, 38%), with some records discouraging the inappropriate claim of no effect based on nonsignificant findings (nine of 37, 24%). None of the proposed methods were empirically tested with interest holders.</div></div><div><h3>Conclusion</h3><div>Although various approaches have been proposed to improve the presentation and interpretation of statistically nonsignificant findings, a widely accepted consensus has not emerged, as these approaches have yet to be systematically tested for their practicality and validity. This review provides a comprehensive overview of available methodological approaches spanning both the frequentist and Bayesian statistical frameworks and identifies critical gaps in the empirical validation of some approaches, namely the lack of thresholds to guide the interpretation of results. These findings highlight the need for systematic testing of the proposed approaches with interest holders and for evidence-based guidance to support the appropriate interpretation of nonsignificant results in evidence syntheses.</div></div>
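Among the approaches tallied above, equivalence testing offers a concrete way to go beyond "not significant". A minimal sketch of the two one-sided tests (TOST) procedure under a normal approximation; the estimate, standard error, margins, and function names are all hypothetical:

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def tost_equivalence(est: float, se: float,
                     low: float = -0.2, high: float = 0.2,
                     alpha: float = 0.05) -> tuple[float, bool]:
    """Two one-sided tests: is the effect inside (low, high)?"""
    p_lower = 1 - normal_cdf((est - low) / se)  # H0: effect <= low
    p_upper = normal_cdf((est - high) / se)     # H0: effect >= high
    p = max(p_lower, p_upper)
    return p, p < alpha

# A nonsignificant estimate near zero with a tight standard error:
p, equivalent = tost_equivalence(est=0.03, se=0.05)
# p is well below alpha, so the effect is statistically equivalent to
# zero within the chosen margins, a stronger claim than "not significant".
```

The same interval logic underlies the "uncertainty intervals" approach counted above: a 95% confidence interval lying entirely within the equivalence margins supports the same conclusion.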
Journal of Clinical Epidemiology, Volume 190, Article 112083.
Citations: 0
Use of structured tools by peer reviewers of systematic reviews: a cross-sectional study reveals high familiarity with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) but limited use of other tools
IF 5.2 CAS Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date: 2025-11-20 DOI: 10.1016/j.jclinepi.2025.112084
Livia Puljak , Sara Pintur , Tanja Rombey , Craig Lockwood , Dawid Pieper
<div><h3>Objectives</h3><div>Systematic reviews (SRs) are pivotal to evidence-based medicine. Structured tools exist to guide their reporting and appraisal, such as Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and A Measurement Tool to Assess Systematic Reviews (AMSTAR). However, there are limited data on whether peer reviewers of SRs use such tools when assessing manuscripts. This study aimed to investigate the use of structured tools by peer reviewers when assessing SRs of interventions, identify which tools are used, and explore perceived needs for structured tools to support the peer-review process.</div></div><div><h3>Study Design and Setting</h3><div>In 2025, we conducted a cross-sectional study targeting individuals who peer-reviewed at least 1 SR of interventions in the past year. The online survey collected data on demographics, use, and familiarity with structured tools, as well as open-ended responses on potential needs.</div></div><div><h3>Results</h3><div>Two hundred seventeen peer reviewers took part in the study. PRISMA was the most familiar tool (99% familiar or very familiar) and most frequently used during peer review (53% always used). The use of other tools such as AMSTAR, Peer Review of Electronic Search Strategies (PRESS), A Risk of Bias Assessment Tool for Systematic Reviews (ROBIS), and JBI checklist was infrequent. Seventeen percent reported using other structured tools beyond those listed. Most participants indicated that journals rarely required use of structured tools, except PRISMA. A notable proportion (55%) expressed concerns about time constraints, and 25% noted the lack of a comprehensive tool. Nearly half (45%) expressed a need for a dedicated structured tool for SR peer review, with checklists in PDF or embedded formats preferred. 
Participants expressed both advantages and concerns related to such tools.</div></div><div><h3>Conclusion</h3><div>Most peer reviewers used PRISMA when assessing SRs, while other structured tools were seldom applied. Only a few journals provided or required such tools, revealing inconsistent editorial practices. Participants reported barriers, including time constraints and a lack of suitable instruments. These findings highlight the need for a practical, validated tool, built upon existing instruments and integrated into editorial workflows. Such a tool could make peer review of SRs more consistent and transparent.</div></div><div><h3>Plain Language Summary</h3><div>Systematic reviews (SRs) are a type of research that synthesizes results from primary studies. Several structured tools, such as PRISMA for reporting and AMSTAR 2 for methodological quality, exist to guide how SRs are written and appraised. When manuscripts that report SRs are submitted to scholarly journals, editors invite expert peer reviewers to assess these SRs. In this study, researchers aimed to analyze which tools peer reviewers actually use when evaluating SR manuscripts, their percep
Journal of Clinical Epidemiology, Volume 190, Article 112084.
Citations: 0
A scoping review of critical appraisal tools and user guides for systematic reviews with network meta-analysis: methodological gaps and directions for tool development
IF 5.2 CAS Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date: 2025-11-20 DOI: 10.1016/j.jclinepi.2025.112056
K.M. Mondragon , C.S. Tan-Lim , R. Velasco Jr. , C.P. Cordero , H.M. Strebel , L. Palileo-Villanueva , J.V. Mantaring
<div><h3>Background</h3><div>Systematic reviews (SRs) with network meta-analyses (NMAs) are increasingly used to inform guidelines, health technology assessments (HTAs), and policy decisions. Their methodological complexity, as well as the difficulty of assessing the exchangeability assumption and the large volume of results, makes appraisal more challenging than for SRs with pairwise meta-analyses. Numerous SR- and NMA-specific appraisal tools exist, but they vary in scope, intended users, and methodological guidance, and few have been validated.</div></div><div><h3>Objectives</h3><div>To identify and describe appraisal instruments and interpretive guides for SRs and NMAs specifically, summarizing their characteristics, domain coverage, development methods, and measurement-property evaluations.</div></div><div><h3>Methods</h3><div>We conducted a methodological scoping review that included structured appraisal instruments or interpretive guides for SRs with or without NMA-specific domains, aimed at review authors, clinicians, guideline developers, or HTA assessors, drawn from published or gray literature in English. Searches (inception–August 2025) covered major databases, registries, organizational websites, and reference lists. Two reviewers independently screened records; data were extracted by one and checked by a second. We synthesized the findings narratively. First, we classified tools as either structured instruments or interpretive guides. Second, we grouped them according to their intended audience and scope. Third, we assessed available measurement-property data using relevant COnsensus-based Standards for the selection of health Measurement INstruments items.</div></div><div><h3>Results</h3><div>Thirty-four articles described 22 instruments (11 NMA-specific, 9 specific to systematic reviews with meta-analysis, and 2 encompassing both systematic reviews with meta-analysis and NMA).
NMA tools added domains such as network geometry, transitivity, and coherence, but guidance on transitivity evaluation, publication bias, and ranking was either limited or ineffective. Reviewer-focused tools were structured with explicit response options, whereas clinician-oriented guides posed appraisal questions with explanations but no prescribed response. Nine instruments reported measurement-property data, with validity and reliability varying widely.</div></div><div><h3>Conclusion</h3><div>This first comprehensive map of appraisal resources for systematic reviews with meta-analysis and NMA highlights the need for clearer operational criteria, structured decision rules, and integrated rater training to improve reliability and align foundational SR domains with NMA-specific content.</div></div><div><h3>Plain Language Summary</h3><div>NMA is a way to compare many treatments at once by combining results from multiple studies, even when some treatments have not been directly compared head-to-head. Because NMAs are complex, users need clear tools to judge whether an analysis is trustworthy. We reviewed and mapped 22 tools, published over the past three decades, for appraising or interpreting systematic reviews (SRs) and NMAs. About half were designed specifically for NMAs; the rest were general SR tools applicable to NMAs. Most tools covered the foundations of a good review: a clear question, a fair search, bias assessment, and transparent synthesis. NMA-specific tools also addressed network-specific issues, such as how the network is connected, whether indirect and direct evidence agree (coherence), and how to interpret treatment rankings. However, important gaps remain: few tools provide step-by-step checks of transitivity and coherence, network-level publication bias, or ranking uncertainty, and reported interrater reliability is inconsistent. Reporting checklists (eg, PRISMA-NMA) specify what information should be reported but not how it should be presented. Certainty frameworks (eg, GRADE or CINeMA) outline how to rate the credibility of results across domains such as inconsistency or imprecision, but they do not explain or standardize how those domains are assessed. Guideline developers, HTA assessors, and clinicians should therefore use both SR and NMA tools, collaborate with statisticians experienced in NMA, and favor tools with clear decision rules and user training. Better-tested, clearer tools will make NMA appraisal more consistent and credible.</div></div>
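The coherence domain these tools probe compares direct evidence with the indirect estimate implied by the rest of the network. A minimal sketch of a Bucher-style indirect comparison; all effect sizes and standard errors are hypothetical log-scale values, not data from the review:

```python
import math

def indirect_effect(d_ab: float, se_ab: float,
                    d_cb: float, se_cb: float) -> tuple[float, float]:
    """Indirect A-vs-C estimate via common comparator B (log scale)."""
    d_ac = d_ab - d_cb
    se_ac = math.sqrt(se_ab**2 + se_cb**2)
    return d_ac, se_ac

def incoherence_z(d_direct: float, se_direct: float,
                  d_indirect: float, se_indirect: float) -> float:
    """z statistic for the direct-vs-indirect discrepancy."""
    return (d_direct - d_indirect) / math.sqrt(se_direct**2 + se_indirect**2)

# Hypothetical trials: A vs B gives -0.5 (SE 0.15); C vs B gives -0.2 (SE 0.10).
d_ac, se_ac = indirect_effect(-0.5, 0.15, -0.2, 0.10)  # indirect A vs C: -0.3
z = incoherence_z(-0.35, 0.20, d_ac, se_ac)
# |z| < 1.96 here: no statistical signal of incoherence between the
# direct (-0.35) and indirect (-0.3) estimates for this comparison.
```

Appraisal tools that cover coherence ask whether the review ran checks of this kind (or global equivalents) for every closed loop in the network, which is why clear decision rules for interpreting them matter.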
{"title":"A scoping review of critical appraisal tools and user guides for systematic reviews with network meta-analysis: methodological gaps and directions for tool development","authors":"K.M. Mondragon ,&nbsp;C.S. Tan-Lim ,&nbsp;R. Velasco Jr. ,&nbsp;C.P. Cordero ,&nbsp;H.M. Strebel ,&nbsp;L. Palileo-Villanueva ,&nbsp;J.V. Mantaring","doi":"10.1016/j.jclinepi.2025.112056","DOIUrl":"10.1016/j.jclinepi.2025.112056","url":null,"abstract":"&lt;div&gt;&lt;h3&gt;Background&lt;/h3&gt;&lt;div&gt;Systematic reviews (SRs) with network meta-analyses (NMAs) are increasingly used to inform guidelines, health technology assessments (HTAs), and policy decisions. Their methodological complexity, as well as the difficulty in assessing the exchangeability assumption and the large amount of results, makes appraisal more challenging than for SRs with pairwise NMAs. Numerous SR- and NMA-specific appraisal tools exist, but they vary in scope, intended users, and methodological guidance, and few have been validated.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Objectives&lt;/h3&gt;&lt;div&gt;To identify and describe appraisal instruments and interpretive guides for SRs and NMAs specifically, summarizing their characteristics, domain coverage, development methods, and measurement-property evaluations.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Methods&lt;/h3&gt;&lt;div&gt;We conducted a methodological scoping review which included structured appraisal instruments or interpretive guides for SRs with or without NMA-specific domains, aimed at review authors, clinicians, guideline developers, or HTA assessors from published or gray literature in English. Searches (inception–August 2025) covered major databases, registries, organizational websites, and reference lists. Two reviewers independently screened records; data were extracted by one and checked by a second. We synthesized the findings narratively. First, we classified tools as either structured instruments or interpretive guides. 
Second, we grouped them according to their intended audience and scope. Third, we assessed available measurement-property data using relevant COnsensus-based Standards for the selection of health Measurement INstruments items.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Results&lt;/h3&gt;&lt;div&gt;Thirty-four articles described 22 instruments (11 NMA-specific, nine systematic reviews with meta-analysis-specific, 2 encompassing both systematic reviews with meta-analysis and NMA). NMA tools added domains such as network geometry, transitivity, and coherence, but guidance on transitivity evaluation, publication bias, and ranking was either limited or ineffective. Reviewer-focused tools were structured with explicit response options, whereas clinician-oriented guides posed appraisal questions with explanations but no prescribed response. Nine instruments reported measurement-property data, with validity and reliability varying widely.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Conclusion&lt;/h3&gt;&lt;div&gt;This first comprehensive map of systematic reviews with meta-analysis and NMA appraisal resources highlights the need for clearer operational criteria, structured decision rules, and integrated rater training to improve reliability and align foundational SR domains with NMA-specific content.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Plain Language Summary&lt;/h3&gt;&lt;div&gt;NMA is a way to compare many treatments at once by combining results from multiple studies—even when some treatments have not been directly compared head-to-head. 
Because NMAs are complex, users need clear tools to judge whether an analysis is tru","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"190 ","pages":"Article 112056"},"PeriodicalIF":5.2,"publicationDate":"2025-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145582728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
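The coherence question this entry raises (do direct and indirect evidence in a network agree?) is often operationalized for a single treatment loop with the Bucher indirect-comparison method. A minimal sketch follows; all effect estimates and standard errors are invented for illustration, not taken from any study:

```python
import math

# Invented log odds ratios and standard errors for one closed loop A-B-C
d_ab, se_ab = -0.40, 0.15          # A vs B (direct)
d_bc, se_bc = -0.25, 0.20          # B vs C (direct)
d_ac_dir, se_ac_dir = -0.50, 0.18  # A vs C (direct)

# Indirect A vs C estimate through the common comparator B (Bucher method)
d_ac_ind = d_ab + d_bc
se_ac_ind = math.sqrt(se_ab**2 + se_bc**2)

# Incoherence = disagreement between the direct and indirect estimates
incoherence = d_ac_dir - d_ac_ind
se_inc = math.sqrt(se_ac_dir**2 + se_ac_ind**2)
z = incoherence / se_inc  # |z| > 1.96 would flag significant incoherence

print(f"indirect A vs C: {d_ac_ind:.2f} (SE {se_ac_ind:.2f})")
print(f"incoherence: {incoherence:.2f}, z = {z:.2f}")
```

With these toy numbers the direct and indirect estimates agree within sampling error (small z), which is the kind of loop-level check several of the NMA-specific appraisal tools ask reviewers to look for.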
Citations: 0
Corrigendum to ‘Implementing a randomization consent to enable Trials within Cohorts in the Swiss HIV Cohort Study – A mixed-methods study’ [Journal of Clinical Epidemiology, 188 (2025) 111973]
IF 5.2 CAS Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date : 2025-11-20 DOI: 10.1016/j.jclinepi.2025.112042
Elias R. Zehnder , Christof Manuel Schönenberger , Julia Hüllstrung , Mona Elalfy , Beverley Nickolls , Frédérique Chammartin , David Hans-Ulrich Haerry , Ellen Cart-Richter , David Jackson-Perry , Samuel Aggeler , Julian Steinmann , Sandra E. Chaudron , Katharina Kusejko , Marcel Stoeckle , Alexandra Calmy , Matthias Cavassini , Enos Bernasconi , Dominique Braun , Johannes Nemeth , Irene Abela , Matthias Briel
Citations: 0
Guideline organizations’ guidance documents paper 4: interest-holder engagement
IF 5.2 CAS Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date : 2025-11-20 DOI: 10.1016/j.jclinepi.2025.112085
Joanne Khabsa , Vanessa Helou , Hussein A. Noureldine , Reem Hoteit , Aya Hassoun , Ali H. Dakroub , Lea Assaf , Ahmed Mohamed , Tala Chehaitly , Leana Ellaham , Elie A. Akl

Background and Objectives

Interest-holder engagement is increasingly recognized as essential to the relevance and uptake of practice guidelines. “Interest-holders” are groups with legitimate interests in the health issue under consideration. The interests' legitimacy arises from the fact that these groups are responsible for or affected by health-related decisions. The objective of this study was to describe interest-holder engagement approaches for practice guideline development as described in guidance documents by guideline-producing organizations.

Methods

We compiled a list of guideline-producing organizations and searched for their guidance documents on guideline development. We abstracted data on interest-holder engagement details for each subtopic in the Guidelines International Network (GIN)-McMaster Guideline Development Checklist (a total of 23 subtopics following the division of some original checklist topics).

Results

Of the 133 identified organizations, 129 (97%) describe in their guidance documents engaging at least 1 interest-holder group in at least 1 GIN-McMaster checklist subtopic. The subtopics with most engagement are “developing recommendations and determining their strength” (96%) and “peer review” (81%), while the subtopics with the least engagement are “establishing guideline group processes” (3%) and “training” (2%). The interest-holder groups with the highest engagement in at least one of the subtopics are providers (95%), principal investigators (78%) and patient representatives (64%), while interest-holder groups with lower engagement are program managers (3%), and peer-reviewed journal editors (1%). Across most subtopics, engagement occurs mostly through panel membership and decision-making level.

Conclusion

A high proportion of organizations engaged at least 1 interest-holder group in at least 1 subtopic of guideline development, with panel membership being the most common approach. However, this engagement was largely confined to a few interest-holder groups and to a few subtopics.
Citations: 0
Predictors of citation rates and the problem of citation bias: a scoping review
IF 5.2 CAS Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date : 2025-11-19 DOI: 10.1016/j.jclinepi.2025.112057
Birgitte Nørgaard , Karen E. Lie , Hans Lund
Objectives

To systematically map the factors associated with citation rates, to categorize the types of studies evaluating these factors, and to obtain an overall picture of citation bias in the scientific health literature.

Study Design and Setting

A scoping review was reported following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses scoping review extension checklist. Four electronic databases were searched, and the reference lists of all included articles were screened. Empirical meta-research studies reporting any source of predictors of citation rates and/or citation bias within health care were included. Data are presented with descriptive statistics such as frequencies, proportions, and percentages.

Results

A total of 165 studies were included. Fifty-four distinct factors of citation rates were evaluated in 786 quantitative analyses. Although the studies used the same basic methodological approach to calculate citation rates, 78 (48%) aimed to examine citation bias, whereas 79 (48%) aimed to optimize article characteristics to enhance authors' own citation rates. The remaining seven studies (4%) analyzed infrastructural characteristics at the publication level to make all studies more accessible.

Conclusion

Seventy-nine of the 165 included studies (48%) explicitly recommended modifying paper characteristics, such as title length or author count, to boost citations rather than prioritizing scientific contribution. Such recommendations may conflict with principles of scientific integrity, which emphasize relevance and methodological rigor over strategic citation practices. Given the high proportion of analyses identifying a significant increase in citation rates, publication bias cannot be ruled out.

Plain Language Summary

Why was the study done? Within scientific research, it is important to cite previous research. This is done for specific reasons, including crediting earlier authors and providing a credible and trustworthy background for conducting the study. However, findings suggest that citations are not always chosen for their intended purpose. This is known as citation bias. What did the researchers do? The researchers searched for all existing studies evaluating predictors of citation rate, ie, how often a specific study is referred to by other researchers. They systematically mapped these studies to find out both the level of citation bias and the types of citation bias present in the scientific health literature. To find these studies, the researchers searched four electronic databases and screened the reference lists of all included studies to be sure to include as many studies as possible. What did the researchers find? The researchers found a total of 165 studies that evaluated predictors of citation rate in no less than 786 analyses. However, the researchers found that the studie
Citations: 0
Guideline organizations' guidance documents paper 1: Introduction
IF 5.2 CAS Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date : 2025-11-19 DOI: 10.1016/j.jclinepi.2025.112063
Joanne Khabsa , Mariam Nour Eldine , Sally Yaacoub , Rayane El-Khoury , Noha El Yaman , Wojtek Wiercioch , Holger J. Schünemann , Elie A. Akl

Background and Objectives

Given the role of practice guidelines in impacting practice and health outcomes, it is important that their development follows rigorous methodology. We present a series of papers exploring various aspects of practice guideline development based on a descriptive summary of guidance documents from guideline-producing organizations. The overall aim is to describe the methods employed by these organizations in developing practice guidelines. This first paper of the series aims to (1) describe the methodology followed in the descriptive summary, including the identification process of a sample of guideline-producing organizations with publicly available guidance documents on guideline development; (2) characterize the included guideline-producing organizations and their guidance documents; and (3) assess the extent to which these organizations cover the topics of the GIN-McMaster Guideline Development Checklist in their guidance documents.

Methods

We conducted a descriptive summary of guideline-producing organizations' publicly available guidance documents on guideline development (eg, guideline handbooks). We exhaustively sampled a list of guideline-producing organizations from multiple sources and searched their websites and the peer-reviewed literature for publicly available guidance documents on their guideline development process. We abstracted data in duplicate and independently on both the organizations and the documents' general characteristics and on whether the organizations covered the topics of the GIN-McMaster Guideline Development Checklist in their guidance documents. We subdivided some of 18 main topics of the checklist to disaggregate key concepts. Based on a discussion between the lead authors, this resulted in 27 examined subtopics. We conducted descriptive statistical analyses.

Results

Our final sample consisted of 133 guideline-producing organizations. The majority were professional associations (59%), based in North America (51%), and from the clinical field (84%). Out of the 27 GIN-McMaster Guideline Development Checklist subtopics, the median number covered was 20 (interquartile range (IQR): 15–24). The subtopics most frequently covered were “consumer and stakeholder engagement” (97%), “conflict of interest considerations” (92%), and “guideline group membership” (92%). The subtopics least covered were “training” (40%) and “considering additional information” (42%).

Conclusion

The number of GIN-McMaster Guideline Development Checklist subtopics covered by a sample of guideline-producing organizations in their guidance documents is both variable and suboptimal.
Citations: 0
The measurement properties reliability and measurement error explained – a COSMIN perspective
IF 5.2 CAS Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date : 2025-11-19 DOI: 10.1016/j.jclinepi.2025.112058
Lidwine B. Mokkink , Iris Eekhout
Reliability and measurement error are related but distinct measurement properties. They are connected because both can be evaluated using the same data, typically collected from studies involving repeated measurements in individuals who are stable on the outcome of interest. However, they are calculated using different statistical methods and refer to different quality aspects of measurement instruments. We explain that measurement error refers to the precision of a measurement, that is, how similar or close the scores are across repeated measurements in a stable individual (variation within individuals). In contrast, reliability indicates an instrument's ability to distinguish between individuals, which depends both on the variation between individuals (ie, heterogeneity in the outcome being measured in the population) and the precision of the score, ie, the measurement error. Evaluating reliability helps to understand whether a particular source of variation (eg, occasion, type of machine, or rater) influences the score, and whether the measurement can be improved by better standardizing this source. Intraclass correlation coefficients, the standard error of measurement, and variance components are explained and illustrated with an example.
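The distinction the abstract draws can be sketched numerically. The following minimal Python example uses invented test-retest scores for five stable patients (not data from this article), estimates variance components from a two-way ANOVA, and derives a single-measurement agreement ICC together with the standard error of measurement (SEM):

```python
import statistics

# Hypothetical test-retest scores for 5 stable patients measured on
# 2 occasions (illustrative numbers only, not data from the article).
t1 = [41, 45, 50, 55, 60]
t2 = [43, 44, 52, 54, 61]
n, k = len(t1), 2  # patients, measurements per patient

grand_mean = statistics.mean(t1 + t2)
patient_means = [(a + b) / k for a, b in zip(t1, t2)]
occasion_means = [statistics.mean(t1), statistics.mean(t2)]

# Two-way ANOVA sums of squares (patients x occasions, one score per cell)
ss_patients = k * sum((m - grand_mean) ** 2 for m in patient_means)
ss_occasions = n * sum((m - grand_mean) ** 2 for m in occasion_means)
ss_total = sum((x - grand_mean) ** 2 for x in t1 + t2)
ss_error = ss_total - ss_patients - ss_occasions

ms_patients = ss_patients / (n - 1)        # between-patient variation
ms_occasions = ss_occasions / (k - 1)      # systematic occasion effect
ms_error = ss_error / ((n - 1) * (k - 1))  # residual (within-patient)

# ICC for absolute agreement, single measurement: the share of total
# variance attributable to true differences between patients.
icc_agreement = (ms_patients - ms_error) / (
    ms_patients + (k - 1) * ms_error + k * (ms_occasions - ms_error) / n
)

# SEM (agreement): within-person spread, in the units of the score.
var_occasion = max((ms_occasions - ms_error) / n, 0.0)
sem = (var_occasion + ms_error) ** 0.5

print(f"ICC(agreement) = {icc_agreement:.2f}")
print(f"SEM = {sem:.2f} score points")
```

With these toy numbers the ICC is about 0.98 while the SEM is about 1.1 score points, which illustrates the abstract's point: reliability is high here because the patients differ widely from one another, whereas the SEM expresses the precision of a single score in the instrument's own units.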
Citations: 0
Guideline organizations’ guidance documents paper 3: contributions and authorship
IF 5.2 CAS Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date : 2025-11-19 DOI: 10.1016/j.jclinepi.2025.112065
Joanne Khabsa , Mohamed M. Khamis , Rachad Ghazal , Noha El Yaman , Reem Hoteit , Elsa Hebbo , Sally Yaacoub , Wojtek Wiercioch , Elie A. Akl

Background and Objectives

Determining the types of contributions to guideline development, and acknowledging the groups that make these contributions, are critical steps in the guideline development process. The objective of this study was to describe the types of contributions to guideline development and the authorship policies of guideline-producing organizations, as described in their guidance documents on guideline development.

Methods

We conducted a descriptive summary of guidance documents on guideline development. Using multiple sources, we initially compiled a list of guideline-producing organizations and then searched for their publicly available guidance documents on guideline development (eg, guideline handbooks). Authors abstracted data in duplicate and independently on the organizations’ characteristics, types of contributions to guideline development, and authorship policies.

Results

We identified 133 guideline-producing organizations with publicly available guidance documents, of which the majority were professional associations (59%) and from the clinical field (84%). Types of contributions to guideline development described by the organizations could be categorized as related to: management; content expertise; technical expertise; or dissemination, implementation, and quality measures. Commonly reported specific contributions included panel membership (99%), executive (83%), evidence synthesis (86%), and peer review (92%). A minority of organizations mentioned entities specifically dedicated to conflict-of-interest management (20%) and to dissemination, implementation, and quality measures (24%). For most organizations, panelists were involved in either supporting or conducting the evidence synthesis (73%). Sixty percent of organizations mentioned that panels should be multidisciplinary, and 44% mentioned that they should be balanced according to at least one characteristic (eg, geographical region). A minority of organizations had a guideline authorship policy (38%). Of those, a majority specified the types of contributions eligible for authorship (76%), while fewer specified criteria for exclusion from authorship (18%) or rules for authorship order (27%).

Conclusion

Guidance documents of guideline-developing organizations consistently describe four types of contributions (panel membership, executive, evidence synthesis, and peer review), while others are less commonly described. They also lack important details on authorship policies.
Joanne Khabsa, Mohamed M. Khamis, Rachad Ghazal, Noha El Yaman, Reem Hoteit, Elsa Hebbo, Sally Yaacoub, Wojtek Wiercioch, Elie A. Akl. "Guideline organizations’ guidance documents paper 3: contributions and authorship." Journal of Clinical Epidemiology, vol. 189, Article 112065. Published 2025-11-19. DOI: 10.1016/j.jclinepi.2025.112065