
Journal of Clinical Epidemiology: latest articles

Evaluation of tools used to assess adherence to PRISMA 2020 reveals inconsistent methods and poor tool implementability: part I of a systematic review
IF 5.2 Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date: 2026-01-09 DOI: 10.1016/j.jclinepi.2026.112133
Daniel G. Hamilton , Joanne E. McKenzie , Camilla H. Nejstgaard , Sue E. Brennan , David Moher , Matthew J. Page

Background and Objectives

Numerous studies have assessed the adherence of published systematic reviews to the PRISMA 2020 statement. We aimed to summarize the characteristics and methods of development of the tools used to assess adherence in these studies.

Methods

MEDLINE, Embase, and PsycINFO (all via Ovid) were searched on January 20, 2025, to locate studies that assessed adherence of systematic reviews of health interventions to the PRISMA 2020 statement. Two authors independently screened all records and extracted data. We examined three aspects of the tools used to assess adherence to PRISMA 2020: i) characteristics of the assessment tool, ii) methods used to develop and validate the tool, and iii) processes used to apply the tool. We classified a tool as “implementable” by researchers external to the tool developers if authors reported the exact wording of each item, its response options, guidance on how to operationalize all response options, and the algorithms used to aggregate judgments and quantify adherence.
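As a small illustration of how such a classification rule can be operationalized, the sketch below encodes the four reporting elements as boolean fields and counts a tool as implementable only when all four are present; the class and field names are hypothetical and are not taken from the authors' data-extraction form.

```python
from dataclasses import dataclass

@dataclass
class AdherenceToolReport:
    # Hypothetical fields mirroring the four reporting elements described above.
    exact_item_wording: bool            # exact wording of each item reported
    response_options: bool              # response options for each item reported
    operationalization_guidance: bool   # guidance on operationalizing all options
    aggregation_algorithm: bool         # algorithm for aggregating judgments and quantifying adherence

    def is_implementable(self) -> bool:
        """Implementable only if every element needed to reapply the tool is reported."""
        return all([
            self.exact_item_wording,
            self.response_options,
            self.operationalization_guidance,
            self.aggregation_algorithm,
        ])
```

A tool missing any one of the four elements would fall short of this criterion, mirroring the "all elements reported" rule used in the review.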

Results

We included 24 meta-research studies that had assessed adherence to PRISMA 2020 in 2766 systematic reviews published between 1989 and 2024. Most authors assessed adherence to PRISMA 2020 in its entirety (N = 15/24, 63%), with the remaining nine (37%) assessing adherence to one or a subset of domains (eg, abstract, search methods, and risk of bias assessment methods). Psychometric testing of the assessment tool was reported by five studies (21%), all of which assessed the inter-rater reliability of the tool. Only one (4%) reported how response options for all items were operationalized. According to our criteria, only one assessment tool was classified as implementable (N = 1/24, 4%). No authorship team used the same methods to assess adherence to the PRISMA 2020 statement. However, information on some tool characteristics was unavailable for several studies.

Conclusion

Our findings demonstrate variation and inadequacies in the methods and reporting of tools used to assess adherence to the PRISMA 2020 statement. We have commenced work on a standardized PRISMA 2020 assessment tool to facilitate accurate and consistent assessments of adherence of systematic reviews to PRISMA. In the interim, we provide some recommendations for how meta-researchers interested in assessing adherence of systematic reviews to the PRISMA 2020 statement can transparently report the findings of their assessments.
{"title":"Evaluation of tools used to assess adherence to PRISMA 2020 reveals inconsistent methods and poor tool implementability: part I of a systematic review","authors":"Daniel G. Hamilton ,&nbsp;Joanne E. McKenzie ,&nbsp;Camilla H. Nejstgaard ,&nbsp;Sue E. Brennan ,&nbsp;David Moher ,&nbsp;Matthew J. Page","doi":"10.1016/j.jclinepi.2026.112133","DOIUrl":"10.1016/j.jclinepi.2026.112133","url":null,"abstract":"<div><h3>Background and Objectives</h3><div>Numerous studies have assessed the adherence of published systematic reviews to the PRISMA 2020 statement. We aimed to summarize the characteristics and methods of development of the tools used to assess adherence in these studies.</div></div><div><h3>Methods</h3><div>MEDLINE, Embase, and PsycINFO (all via Ovid) were searched on January 20, 2025, to locate studies that assessed adherence of systematic reviews of health interventions to the PRISMA 2020 statement. Two authors independently screened all records and extracted data. We examined three aspects of the tools used to assess adherence to PRISMA 2020: i) characteristics of the assessment tool, ii) methods used to develop and validate the tool, and iii) processes used to apply the tool. We classified a tool as “implementable” by researchers external to the tool developers if authors reported the exact wording of each item, its response options, guidance on how to operationalize all response options, and the algorithms used to aggregate judgments and quantify adherence.</div></div><div><h3>Results</h3><div>We included 24 meta-research studies that had assessed adherence to PRISMA 2020 in 2766 systematic reviews published between 1989 and 2024. Most authors assessed adherence to PRISMA 2020 in its entirety (<em>N</em> = 15/24, 63%), with the remaining nine (37%) assessing adherence to one or a subset of domains (eg, abstract, search methods, and risk of bias assessment methods). Psychometric testing of the assessment tool was reported by five studies (21%), all of which assessed the inter-rater reliability of the tool. Only one (4%) reported how response options for all items were operationalized. According to our criteria, only one assessment tool was classified as implementable (<em>N</em> = 1/24, 4%). No authorship team used the same methods to assess adherence to the PRISMA 2020 statement. However, information on some tool characteristics was unavailable for several studies.</div></div><div><h3>Conclusion</h3><div>Our findings demonstrate variation and inadequacies in the methods and reporting of tools used to assess adherence to the PRISMA 2020 statement. We have commenced work on a standardized PRISMA 2020 assessment tool to facilitate accurate and consistent assessments of adherence of systematic reviews to PRISMA. 
In the interim, we provide some recommendations for how meta-researchers interested in assessing adherence of systematic reviews to the PRISMA 2020 statement can transparently report the findings of their assessments.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"192 ","pages":"Article 112133"},"PeriodicalIF":5.2,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145953728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Use of minimum and maximum pretest probabilities to conclude with confidence after obtaining a diagnostic test result
IF 5.2 Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date: 2026-01-09 DOI: 10.1016/j.jclinepi.2026.112134
Loic Desquilbet , Maxime Kurtz , Morgane Canonne-Guibert , Solen Kerneis , Ghita Benchekroun
Interpreting positive and negative predictive values of diagnostic tests is crucial for clinical decision-making as they quantify the clinician's confidence in an individual's disease status after testing. For a given diagnostic test, these values depend on the pretest probability (ie, the probability that an individual has the disease before testing), which differs across individuals. Therefore, they should not be presented as a single pair for clinical use. To account for this individual variability in pretest probability and the minimum confidence level required to conclude on an individual's disease status, we propose the use of the minimum and maximum pretest probabilities (“PTP+conf” and “PTP-conf”). These thresholds depend on the test's sensitivity and specificity, as well as the clinician's predefined confidence level. They represent the pretest probability above (or below) which a positive (or negative) test result allows the clinician to reach that minimum confidence level (“conf”) regarding the presence or absence of disease. These “PTP+conf” and “PTP-conf” values can be considered as intrinsic characteristics of a diagnostic test for a given confidence threshold. Clinicians then only need to compare their bedside estimate of the individual's pretest probability with “PTP+conf” (if positive result) or “PTP-conf” (if negative result) to determine whether they can conclude with sufficient confidence after obtaining the test result.
Plain language summary: When a medical test is used, its result should help the clinician judge whether the patient is likely to have the disease. A test's positive predictive value (the chance that the patient has the disease if the result is positive) and negative predictive value (the chance that the patient does not have the disease if the result is negative) are usually reported as single numbers. However, these numbers change depending on how likely the disease is before testing (the "pretest probability"), which differs between patients. Reporting only one pair of predictive values can therefore be misleading. We propose a simple approach to help clinicians decide whether they can trust a diagnostic test result for an individual patient. For any test, two useful values can be calculated: the minimum pretest probability needed to trust a positive result, and the maximum pretest probability needed to trust a negative result. These values, called PTP+conf and PTP-conf, depend on the test's sensitivity and specificity and on the level of confidence the clinician wishes to have before making a decision. In practice, the clinician only needs to estimate how likely the disease is before testing and then compare this estimate with PTP+conf (if the test is positive) or PTP-conf (if the test is negative). This comparison makes it easier to know whether the test result can be trusted or whether more information is needed before reaching a conclusion.
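The closed-form expressions for these thresholds are not given in this listing, so the sketch below is only an interpretation: it assumes that "PTP+conf" is the smallest pretest probability at which a positive result yields a positive predictive value of at least the chosen confidence level, and that "PTP-conf" is the largest pretest probability at which a negative result yields a negative predictive value of at least that level. The function names and the example figures are invented for illustration.

```python
# Hedged sketch: candidate PTP+conf and PTP-conf thresholds derived from Bayes'
# theorem, assuming the thresholds are defined by requiring PPV >= conf after a
# positive result and NPV >= conf after a negative result. Not the authors' code.

def ptp_plus(sensitivity: float, specificity: float, conf: float) -> float:
    """Minimum pretest probability at which a positive result gives PPV >= conf."""
    # Solve  se*p / (se*p + (1 - sp)*(1 - p)) = conf  for p.
    return (conf * (1 - specificity)) / (
        conf * (1 - specificity) + sensitivity * (1 - conf)
    )

def ptp_minus(sensitivity: float, specificity: float, conf: float) -> float:
    """Maximum pretest probability at which a negative result gives NPV >= conf."""
    # Solve  sp*(1 - p) / (sp*(1 - p) + (1 - se)*p) = conf  for p.
    return (specificity * (1 - conf)) / (
        specificity * (1 - conf) + conf * (1 - sensitivity)
    )

if __name__ == "__main__":
    # Illustrative test: 90% sensitivity, 95% specificity, 95% required confidence.
    se, sp, conf = 0.90, 0.95, 0.95
    print(f"PTP+conf = {ptp_plus(se, sp, conf):.3f}")   # about 0.514
    print(f"PTP-conf = {ptp_minus(se, sp, conf):.3f}")  # about 0.333
```

Under these assumptions, a clinician requiring 95% confidence from this hypothetical test could act on a positive result only when the bedside estimate of the pretest probability exceeds roughly 51%, and on a negative result only when it is below roughly 33%.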
{"title":"Use of minimum and maximum pretest probabilities to conclude with confidence after obtaining a diagnostic test result","authors":"Loic Desquilbet ,&nbsp;Maxime Kurtz ,&nbsp;Morgane Canonne-Guibert ,&nbsp;Solen Kerneis ,&nbsp;Ghita Benchekroun","doi":"10.1016/j.jclinepi.2026.112134","DOIUrl":"10.1016/j.jclinepi.2026.112134","url":null,"abstract":"<div><div>Interpreting positive and negative predictive values of diagnostic tests is crucial for clinical decision-making as they quantify the clinician's confidence in an individual's disease status after testing. For a given diagnostic test, these values depend on the pretest probability (ie, the probability that an individual has the disease before testing), which differs across individuals. Therefore, they should not be presented as a single pair for clinical use. To account for this individual variability in pretest probability and the minimum confidence level required to conclude on an individual's disease status, we propose the use of the minimum and maximum pretest probabilities (“PTP+<sub>conf</sub>” and “PTP-<sub>conf</sub>”). These thresholds depend on the test's sensitivity and specificity, as well as the clinician's predefined confidence level. They represent the pretest probability above (or below) which a positive (or negative) test result allows the clinician to reach that minimum confidence level (“conf”) regarding the presence or absence of disease. These “PTP+<sub>conf</sub>” and “PTP-<sub>conf</sub>” values can be considered as intrinsic characteristics of a diagnostic test for a given confidence threshold. Clinicians then only need to compare their bedside estimate of the individual's pretest probability with “PTP+<sub>conf</sub>” (if positive result) or “PTP-<sub>conf</sub>” (if negative result) to determine whether they can conclude with sufficient confidence after obtaining the test result.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"192 ","pages":"Article 112134"},"PeriodicalIF":5.2,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145953814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Modifiable methodological and reporting practices are associated with reproducibility of health sciences research: a systematic review and evidence and gap map.
IF 5.2 Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date: 2026-01-08 DOI: 10.1016/j.jclinepi.2026.112135
Juliane Kennett, Stephana Julia Moss, Jeanna Parsons Leigh, Niklas Bobrovitz, Henry T Stelfox
Objectives

Map the evidence on factors (eg, research practices) associated with reproducibility of methods and results reported in health sciences research.

Study Design and Setting

Five bibliographic databases were searched from January 2000 to May 2023 with supplemental searches of high-impact journals and relevant records. We included health science records of observational, interventional, or knowledge synthesis studies reporting data on factors related to research reproducibility. Data were coded using inductive qualitative content analysis, and empirical evidence was synthesized with evidence and gap maps. Factors were categorized as modifiable or nonmodifiable; reproducibility outcomes were categorized as related to methods or results. Statistical tests of association between factors and reproducibility outcomes were summarized.

Results

Our review included 148 studies, primarily from biomedical/preclinical (n = 62) and clinical (n = 71) domains. Factors were classified into 12 modifiable (eg, sample size and power) and three nonmodifiable (eg, publication year) categories. Of 234 reported evaluations of factors, 76 (32%) assessed methodological reproducibility and 158 (68%) assessed results reproducibility. The most frequently reported factor was transparency and reporting (38 of 234 assessments). A total of 155 factors (66%) were evaluated for statistical associations with reproducibility outcomes. Statistical associations were most frequently conducted for analytical methods (24 of 26 reporting significance), sample size and power (21 of 23 reporting significance), and participant characteristics and study materials (10 of 12 reporting significance).

Conclusion

Several modifiable factors were associated with reproducibility of health sciences research and represent opportunities for intervention. Applying more stringent statistical testing procedures and thresholds, conducting appropriate sample size and power calculations, and improving transparency and completeness of reporting should be top priorities for improving reproducibility. Experimental studies to test interventions to improve reproducibility are needed.

Plain Language Summary

Many research findings in medicine and health cannot be reproduced by other researchers. This makes it harder to know what evidence to trust when making patient care and health policy decisions. We systematically reviewed 148 studies that examined how specific research practices are linked to whether research findings can be reproduced. We found that several modifiable features of study design, analysis, and reporting were often associated with better reproducibility. Using larger sample sizes, applying more stringent statistical methods, and providing transparent, complete descriptions of methods and results were frequently linked to more reproducible findings. These results suggest that improving how research is planned, analyzed, and reported may increase the trustworthiness and usefulness of health research.
{"title":"Modifiable methodological and reporting practices are associated with reproducibility of health sciences research: a systematic review and evidence and gap map.","authors":"Juliane Kennett, Stephana Julia Moss, Jeanna Parsons Leigh, Niklas Bobrovitz, Henry T Stelfox","doi":"10.1016/j.jclinepi.2026.112135","DOIUrl":"10.1016/j.jclinepi.2026.112135","url":null,"abstract":"&lt;p&gt;&lt;strong&gt;Objectives: &lt;/strong&gt;Map the evidence on factors (eg, research practices) associated with reproducibility of methods and results reported in health sciences research.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Study design and setting: &lt;/strong&gt;Five bibliographic databases were searched from January 2000 to May 2023 with supplemental searches of high-impact journals and relevant records. We included health science records of observational, interventional, or knowledge synthesis studies reporting data on factors related to research reproducibility. Data were coded using inductive qualitative content analysis, and empirical evidence was synthesized with evidence and gap maps. Factors were categorized as modifiable or nonmodifiable; reproducibility outcomes were categorized as related to methods or results. Statistical tests of association between factors and reproducibility outcomes were summarized.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Results: &lt;/strong&gt;Our review included 148 studies, primarily from biomedical/preclinical (n = 62) and clinical (n = 71) domains. Factors were classified into 12 modifiable (eg, sample size and power) and three nonmodifiable (eg, publication year) categories. Of 234 reported evaluations of factors, 76 (32%) assessed methodological reproducibility and 158 (68%) assessed results reproducibility. The most frequently reported factor was transparency and reporting (38 of 234 assessments). A total of 155 factors (66%) were evaluated for statistical associations with reproducibility outcomes. Statistical associations were most frequently conducted for analytical methods (24 of 26 reporting significance), sample size and power (21 of 23 reporting significance), and participants characteristics and study materials (10 of 12 reporting significance).&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Conclusion: &lt;/strong&gt;Several modifiable factors were associated with reproducibility of health sciences research and represent opportunities for intervention. Applying more stringent statistical testing procedures and thresholds, conducting appropriate sample size and power calculations, and improving transparency and completeness of reporting should be top priorities for improving reproducibility. Experimental studies to test interventions to improve reproducibility are needed.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Plain language summary: &lt;/strong&gt;Many research findings in medicine and health cannot be reproduced by other researchers. This makes it harder to know what evidence to trust when making patient care and health policy decisions. We systematically reviewed 148 studies that examined how specific research practices are linked to whether research findings can be reproduced. We found that several modifiable features of study design, analysis, and reporting were often associated with better reproducibility. Using larger sample sizes, applying more stringent statistical methods, and providing transparent, complete descriptions of methods and results were frequently linked to more reproducible findings. 
These results suggest that","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":" ","pages":"112135"},"PeriodicalIF":5.2,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145949384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Challenges in handling allogeneic stem cell transplantation in randomized clinical trials
IF 5.2 Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date: 2026-01-06 DOI: 10.1016/j.jclinepi.2026.112132
Roxane Couturier , Loïc Vasseur , Nicolas Boissel , Hervé Dombret , Jérôme Lambert , Sylvie Chevret

Background

In randomized clinical trials (RCTs) for hematological malignancies, patients may undergo allogeneic hematopoietic stem cell transplantation (allo-HSCT) as part of standard clinical pathways. Allo-HSCT is a potentially curative but high-risk procedure performed after randomization and thus constitutes an important intercurrent event that can substantially influence survival outcomes. However, its handling in statistical analyses is not standardized.

Objective

To review current statistical methods used to handle postrandomization allo-HSCT as an intercurrent event in RCTs, and to highlight how each method corresponds to a different estimand, reflecting distinct clinical questions.

Methods

We reviewed 93 RCTs published between January 1, 2014, and April 1, 2024 that reported survival outcomes with postrandomization allo-HSCT.

Results

Three different statistical methods were employed to estimate treatment effects: censoring at the time of allo-HSCT (64 analyses), modeling allo-HSCT as a time-dependent covariate in a Cox model (24 analyses), or ignoring allo-HSCT status (17 analyses). Each method estimates the treatment effect in response to a different clinical question and estimand, with specific assumptions that must be considered when interpreting the results. Censoring corresponds to the “hypothetical” estimand, but its validity requires two conditions: first, that the likelihood of receiving allo-HSCT is similar across treatment arms; and second, that patients who undergo transplantation have a similar prognosis to those who do not. The time-dependent covariate approach incorporates the effect of allo-HSCT but is not associated with a specific estimand and requires careful interpretation. Ignoring allo-HSCT corresponds to the “treatment policy” strategy, which compares the randomized treatment strategies regardless of whether allo-HSCT is received, without additional assumptions.
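To make the contrast between these three strategies concrete, here is a minimal Python sketch using the lifelines package; the data frame and its column names (time, event, arm, hsct_time) are hypothetical, and the code shows only the general form of each analysis, not the authors' implementation.

```python
# Minimal sketch of three ways to handle post-randomization allo-HSCT, assuming a
# data frame with hypothetical columns: time (follow-up), event (1 = death),
# arm (1 = experimental), hsct_time (time of allo-HSCT, NaN if never transplanted,
# assumed strictly between 0 and the end of follow-up when present).
import pandas as pd
from lifelines import CoxPHFitter, CoxTimeVaryingFitter

def censor_at_hsct(df: pd.DataFrame) -> CoxPHFitter:
    """'Hypothetical' estimand via censoring: follow-up stops at allo-HSCT."""
    out = df.copy()
    transplanted = out["hsct_time"].notna() & (out["hsct_time"] < out["time"])
    out.loc[transplanted, "event"] = 0                 # censor at transplant
    out.loc[transplanted, "time"] = out.loc[transplanted, "hsct_time"]
    return CoxPHFitter().fit(out[["time", "event", "arm"]],
                             duration_col="time", event_col="event")

def treatment_policy(df: pd.DataFrame) -> CoxPHFitter:
    """'Treatment policy' estimand: allo-HSCT status is simply ignored."""
    return CoxPHFitter().fit(df[["time", "event", "arm"]],
                             duration_col="time", event_col="event")

def time_dependent_hsct(df: pd.DataFrame) -> CoxTimeVaryingFitter:
    """Allo-HSCT as a time-dependent covariate: split follow-up at transplant."""
    rows = []
    for i, r in df.iterrows():
        t = r["hsct_time"]
        if pd.isna(t) or t >= r["time"]:               # never transplanted during follow-up
            rows.append((i, 0.0, r["time"], r["event"], r["arm"], 0))
        else:                                          # pre- and post-transplant intervals
            rows.append((i, 0.0, t, 0, r["arm"], 0))
            rows.append((i, t, r["time"], r["event"], r["arm"], 1))
    long = pd.DataFrame(rows, columns=["id", "start", "stop", "event", "arm", "hsct"])
    return CoxTimeVaryingFitter().fit(long, id_col="id", start_col="start",
                                      stop_col="stop", event_col="event")
```

In the first two functions the hazard ratio for arm answers a different clinical question than in the third, which is exactly why the review distinguishes the underlying estimands.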

Conclusion

There is no consensus on handling allo-HSCT as an intercurrent event in survival analyses. Censoring, although common, may introduce bias if treatment or prognostic covariates influence allo-HSCT use. The treatment policy estimand should be preferred when allo-HSCT is part of the therapeutic strategy.
{"title":"Challenges in handling allogeneic stem cell transplantation in randomized clinical trials","authors":"Roxane Couturier ,&nbsp;Loïc Vasseur ,&nbsp;Nicolas Boissel ,&nbsp;Hervé Dombret ,&nbsp;Jérôme Lambert ,&nbsp;Sylvie Chevret","doi":"10.1016/j.jclinepi.2026.112132","DOIUrl":"10.1016/j.jclinepi.2026.112132","url":null,"abstract":"<div><h3>Background</h3><div>In randomized clinical trials (RCTs) for hematological malignancies, patients may undergo allogeneic hematopoietic stem cell transplantation (allo-HSCT) as part of standard clinical pathways. Allo-HSCT is a potentially curative but high-risk procedure performed after randomization and thus constitutes an important intercurrent event that can substantially influence survival outcomes. However, its handling in statistical analyses is not standardized.</div></div><div><h3>Objective</h3><div>To review current statistical methods used to handle postrandomization allo-HSCT as an intercurrent event in RCTs, and to highlight how each method corresponds to a different estimand, reflecting distinct clinical questions.</div></div><div><h3>Methods</h3><div>We reviewed 93 RCTs published between January 1, 2014, and April 1, 2024 that reported survival outcomes with postrandomization allo-HSCT.</div></div><div><h3>Results</h3><div>Three different statistical methods were employed to estimate the treatment effects: censoring at the time of allo-HSCT (64 analyses), a time-dependent covariate in a Cox model (24 analyses), or ignoring allo-HSCT status (17 analyses). Each method estimates the treatment effect in response to a different clinical question and estimand, with specific assumptions that must be considered when interpreting the results. Censoring corresponds to the “hypothetical” estimand, but its validity requires 2 things: first, that the likelihood of receiving allo-HSCT is similar across treatment arms; and second, that patients who undergo transplantation have a similar prognosis to those who do not. Time-dependent covariate incorporates the effect of allo-HSCT but is not associated with a specific estimand and requires careful interpretation. Ignoring allo-HSCT corresponds to the “treatment policy” strategy, of comparing the treatment strategy, whichever allo-HSCT or not, without additional assumptions.</div></div><div><h3>Conclusion</h3><div>There is no consensus on handling allo-HSCT as an intercurrent event in survival analyses. Censoring, although common, may introduce bias if treatment or prognostic covariates influence allo-HSCT use. The treatment policy estimand should be preferred when allo-HSCT is part of the therapeutic strategy.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"191 ","pages":"Article 112132"},"PeriodicalIF":5.2,"publicationDate":"2026-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145935893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Systematic reviews and meta-analysis: continued failure to achieve research integrity
IF 5.2 Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date: 2026-01-01 DOI: 10.1016/j.jclinepi.2025.112017
Howard Bauchner
{"title":"Systematic reviews and meta-analysis: continued failure to achieve research integrity","authors":"Howard Bauchner","doi":"10.1016/j.jclinepi.2025.112017","DOIUrl":"10.1016/j.jclinepi.2025.112017","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"189 ","pages":"Article 112017"},"PeriodicalIF":5.2,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145310051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The loss of efficacy of fluoxetine in pediatric depression: explanations, lack of acknowledgment, and implications for other treatments
IF 5.2 Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date: 2026-01-01 DOI: 10.1016/j.jclinepi.2025.112016
Martin Plöderl , Richard Lyus , Mark A. Horowitz , Joanna Moncrieff

Objectives

Fluoxetine is among the most used antidepressants for children and adolescents and frequently recommended as first-line pharmacological treatment for pediatric depression. However, in contrast to earlier studies and reviews, a Cochrane network meta-analysis from 2021 concluded that the estimated efficacy of fluoxetine was no longer clinically meaningful. We aimed to explain the discrepant findings between the recent Cochrane review and earlier reviews, and to explore if this was acknowledged in guidelines and treatment recommendations appearing since then.

Study Design and Setting

Meta-analytical aggregation of trial results over time, exploring potential biases, and a nonsystematic search for recent treatment guidelines/recommendations from major medical organizations.

Results

The estimated efficacy of fluoxetine in clinical trials declined over time into the range of clinical equivalence with placebo when more recent studies were included in analyses and when considering common thresholds of clinical significance. This remains unacknowledged in treatment guidelines and related publications, including some that continue to recommend fluoxetine as first-line pharmacological treatment. Finally, we find that the loss of efficacy over time is likely explained by biases such as the novelty bias or by variations of expectancy effects.

Conclusion

The seeming lack of clinically meaningful efficacy of fluoxetine for the treatment of pediatric depression needs to be considered by those who develop treatment recommendations as well as by patients and clinicians. The biases we observed are not only relevant in the evaluation of fluoxetine and other antidepressants for pediatric depression, but also for any new treatment.
{"title":"The loss of efficacy of fluoxetine in pediatric depression: explanations, lack of acknowledgment, and implications for other treatments","authors":"Martin Plöderl ,&nbsp;Richard Lyus ,&nbsp;Mark A. Horowitz ,&nbsp;Joanna Moncrieff","doi":"10.1016/j.jclinepi.2025.112016","DOIUrl":"10.1016/j.jclinepi.2025.112016","url":null,"abstract":"<div><h3>Objectives</h3><div>Fluoxetine is among the most used antidepressants for children and adolescents and frequently recommended as first-line pharmacological treatment for pediatric depression. However, in contrast to earlier studies and reviews, a Cochrane network meta-analysis from 2021 concluded that the estimated efficacy of fluoxetine was no longer clinically meaningful. We aimed to explain the discrepant findings between the recent Cochrane review and earlier reviews, and to explore if this was acknowledged in guidelines and treatment recommendations appearing since then.</div></div><div><h3>Study Design and Setting</h3><div>Meta-analytical aggregation of trial results over time, exploring potential biases, and a nonsystematic search for recent treatment guidelines/recommendations from major medical organizations.</div></div><div><h3>Results</h3><div>The estimated efficacy of fluoxetine in clinical trials declined over time into the range of clinical equivalence with placebo when more recent studies were included in analyses and when considering common thresholds of clinical significance. This remains unacknowledged in treatment guidelines and related publications, including some that continue to recommend fluoxetine as first-line pharmacological treatment. Finally, we find that the loss of efficacy over time is likely explained by biases such as the novelty bias or by variations of expectancy effects.</div></div><div><h3>Conclusion</h3><div>The seeming lack of clinically meaningful efficacy of fluoxetine for the treatment of pediatric depression needs to be considered by those who develop treatment recommendations as well as by patients and clinicians. The biases we observed are not only relevant in the evaluation of fluoxetine and other antidepressants for pediatric depression, but also for any new treatment.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"189 ","pages":"Article 112016"},"PeriodicalIF":5.2,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145976110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The distinction between causal, predictive, and descriptive research—there is still room for improvement
IF 5.2 Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date: 2026-01-01 DOI: 10.1016/j.jclinepi.2025.111960
Brett P. Dyer
It has been proposed that medical research questions can be categorised into three classes: causal, predictive, and descriptive. This distinction was proposed to encourage researchers to think clearly about how study design, analysis, interpretation, and clinical implications should differ according to the type of research question being investigated. This article highlights four common mistakes that remain in observational research regarding the classification of research questions as causal, predictive, or descriptive, and provides suggestions about how they may be rectified. The four common mistakes are (1) Adjustment for “confounders” in predictive and descriptive research, (2) Interpreting “effects” in prediction models, (3) The use of non-specific terminology that does not indicate which class of research question is being investigated, and (4) Prioritising parsimony over confounder adjustment in causal models.
{"title":"The distinction between causal, predictive, and descriptive research—there is still room for improvement","authors":"Brett P. Dyer","doi":"10.1016/j.jclinepi.2025.111960","DOIUrl":"10.1016/j.jclinepi.2025.111960","url":null,"abstract":"<div><div>It has been proposed that medical research questions can be categorised into three classes: causal, predictive, and descriptive. This distinction was proposed to encourage researchers to think clearly about how study design, analysis, interpretation, and clinical implications should differ according to the type of research question being investigated. This article highlights four common mistakes that remain in observational research regarding the classification of research questions as causal, predictive, or descriptive, and provides suggestions about how they may be rectified. The four common mistakes are (1) Adjustment for “confounders” in predictive and descriptive research, (2) Interpreting “effects” in prediction models, (3) The use of non-specific terminology that does not indicate which class of research question is being investigated, and (4) Prioritising parsimony over confounder adjustment in causal models.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"189 ","pages":"Article 111960"},"PeriodicalIF":5.2,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144994314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Resourcing and validation of the GRADE ontology: reply to Dedeepya et al.
IF 5.2 Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date: 2026-01-01 DOI: 10.1016/j.jclinepi.2025.112024
Brian S. Alper, Joanne Dehnbostel, Holger Schünemann, Paul Whaley
{"title":"Resourcing and validation of the GRADE ontology: reply to Dedeepya et al.","authors":"Brian S. Alper,&nbsp;Joanne Dehnbostel,&nbsp;Holger Schünemann,&nbsp;Paul Whaley","doi":"10.1016/j.jclinepi.2025.112024","DOIUrl":"10.1016/j.jclinepi.2025.112024","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"189 ","pages":"Article 112024"},"PeriodicalIF":5.2,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145370543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Systematic reviews on the same topic are common but often fail to meet key methodological standards: a research-on-research study
IF 5.2 Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date: 2026-01-01 DOI: 10.1016/j.jclinepi.2025.112018
Wilfred Kwok , Titiane Dallant , Guillaume Martin, Gabriel Fournier, Blandine Kervennic, Ophélie Pingeon, Agnès Dechartres

Objectives

To 1) assess the frequency of overlapping systematic reviews (SRs) on the same topic including overlap in outcomes, 2) assess whether SRs meet some key methodological characteristics, and 3) describe discrepancies in results.

Study Design and Setting

For this research-on-research study, we gathered a random sample of SRs with meta-analysis (MA) published in 2022, identified the questions they addressed and, for each question, searched all SRs with MA published from 2018 to 2023 to assess the frequency of overlap. We assessed whether SRs met a minimum set of six key methodological characteristics: protocol registration, search of major electronic databases, search of trial registries, double selection and extraction, use of the Cochrane Risk-of-Bias tool, and Grading of Recommendations, Assessment, Development, and Evaluations assessment.

Results

From a sample of 107 SRs with MA published in 2022, we extracted 105 different questions and identified 123 other SRs with MA published from 2018 to 2023. There were overlapping SRs for 33 questions (31.4%, 95% CI: 22.9–41.3), with a median of three overlapping SRs per question (IQR 2–6; range 2–19). Of the 230 SRs, 15 (6.5%) met the minimum set of six key methodological characteristics, and 12 (11.4%) questions had at least one SR meeting this criterion. Among the 33 questions with overlapping SRs, for 7 (21.2%), the SRs had discrepant results.

Conclusion

One-third of the SRs published in 2022 had at least one overlapping SR published from 2018 to 2023, and most did not meet a minimum set of methodological standards. For one-fifth of the questions, overlapping SRs provided discrepant results.
{"title":"Systematic reviews on the same topic are common but often fail to meet key methodological standards: a research-on-research study","authors":"Wilfred Kwok ,&nbsp;Titiane Dallant ,&nbsp;Guillaume Martin,&nbsp;Gabriel Fournier,&nbsp;Blandine Kervennic,&nbsp;Ophélie Pingeon,&nbsp;Agnès Dechartres","doi":"10.1016/j.jclinepi.2025.112018","DOIUrl":"10.1016/j.jclinepi.2025.112018","url":null,"abstract":"<div><h3>Objectives</h3><div>To 1) assess the frequency of overlapping systematic reviews (SRs) on the same topic including overlap in outcomes, 2) assess whether SRs meet some key methodological characteristics, and 3) describe discrepancies in results.</div></div><div><h3>Study Design and Setting</h3><div>For this research-on-research study, we gathered a random sample of SRs with meta-analysis (MA) published in 2022, identified the questions they addressed and, for each question, searched all SRs with MA published from 2018 to 2023 to assess the frequency of overlap. We assessed whether SRs met a minimum set of six key methodological characteristics: protocol registration, search of major electronic databases, search of trial registries, double selection and extraction, use of the Cochrane Risk-of-Bias tool, and Grading of Recommendations, Assessment, Development, and Evaluations assessment.</div></div><div><h3>Results</h3><div>From a sample of 107 SRs with MA published in 2022, we extracted 105 different questions and identified 123 other SRs with MA published from 2018 to 2023. There were overlapping SRs for 33 questions (31.4%, 95% CI: 22.9–41.3), with a median of three overlapping SRs per question (IQR 2–6; range 2–19). Of the 230 SRs, 15 (6.5%) met the minimum set of six key methodological characteristics, and 12 (11.4%) questions had at least one SR meeting this criterion. Among the 33 questions with overlapping SRs, for 7 (21.2%), the SRs had discrepant results.</div></div><div><h3>Conclusion</h3><div>One-third of the SRs published in 2022 had at least one overlapping SR published from 2018 to 2023, and most did not meet a minimum set of methodological standards. For one-fifth of the questions, overlapping SRs provided discrepant results.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"189 ","pages":"Article 112018"},"PeriodicalIF":5.2,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145330918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Statistical power is an essential element for replication
IF 5.2 Tier 2 (Medicine) Q1 HEALTH CARE SCIENCES & SERVICES Pub Date: 2026-01-01 DOI: 10.1016/j.jclinepi.2025.112021
Marc Bennett Stone
{"title":"Statistical power is an essential element for replication","authors":"Marc Bennett Stone","doi":"10.1016/j.jclinepi.2025.112021","DOIUrl":"10.1016/j.jclinepi.2025.112021","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"189 ","pages":"Article 112021"},"PeriodicalIF":5.2,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145423378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0