
Statistics in Medicine: Latest Publications

An Improved Bayesian Pick-the-Winner (IBPW) Design for Randomized Phase II Clinical Trials.
IF 1.8, Zone 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-01-01, DOI: 10.1002/sim.70348
Wanni Lei, Maosen Peng, Nasser Altorki, Xi Kathy Zhou

Phase II clinical trials play a pivotal role in drug development by screening a large number of drug candidates to identify those with promising preliminary efficacy for phase III testing. Trial designs that enable efficient decision-making with small sample sizes and early futility stopping while controlling type I and type II errors in hypothesis testing, such as Simon's two-stage design, are preferred. Randomized multi-arm trials are increasingly used in phase II settings to overcome the limitations associated with using historical controls as the reference. However, how to effectively balance efficiency and accurate decision-making continues to be an important research topic. A notable development in phase II randomized design methodology is the Bayesian pick-the-winner (BPW) design proposed by Chen et al. [1]. Despite multiple appealing features, this method cannot easily control the overall type I and type II errors for winner selection. Here, we introduce an improved randomized two-stage Bayesian pick-the-winner (IBPW) design that formalizes winner-selection-based hypothesis testing and optimizes sample sizes and decision cut-offs by strictly controlling the type I and type II errors under a set of flexible winner-selection hypotheses across two treatment arms. Simulation studies demonstrate that our new design offers improved operating characteristics for winner selection while retaining the desirable features of the BPW design.
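
As a rough illustration of the core pick-the-winner quantity (not the authors' calibrated IBPW procedure), the Python sketch below computes the posterior probability that one arm's response rate exceeds the other's under independent Beta-Binomial models; the stage-1 counts, prior, and 0.80 cut-off are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2026)

def prob_a_beats_b(x_a, n_a, x_b, n_b, a0=0.5, b0=0.5, draws=100_000):
    """Monte Carlo estimate of P(p_A > p_B | data) under independent
    Beta(a0, b0) priors and binomial likelihoods."""
    p_a = rng.beta(a0 + x_a, b0 + n_a - x_a, draws)
    p_b = rng.beta(a0 + x_b, b0 + n_b - x_b, draws)
    return np.mean(p_a > p_b)

# Hypothetical stage-1 data: 20 patients per arm.
post = prob_a_beats_b(x_a=9, n_a=20, x_b=6, n_b=20)
print(f"P(arm A better than arm B | stage-1 data) = {post:.3f}")

# A pick-the-winner style rule would carry arm A into stage 2 only if this
# posterior probability clears a pre-specified cut-off; the IBPW contribution
# is choosing such cut-offs and sample sizes so that overall type I/II errors
# for winner selection are controlled.
if post > 0.80:  # illustrative threshold, not the calibrated IBPW cut-off
    print("Carry arm A forward as the provisional winner.")
```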

Citations: 0
Overview and Practical Recommendations on Using Shapley Values for Identifying Predictive Biomarkers via CATE Modeling.
IF 1.8, Zone 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-01-01, DOI: 10.1002/sim.70375
David Svensson, Erik Hermansson, Nikolaos Nikolaou, Konstantinos Sechidis, Ilya Lipkovich

In recent years, two parallel research trends have emerged in machine learning, yet their intersections remain largely unexplored. On one hand, there has been a significant increase in literature focused on Individual Treatment Effect (ITE) modeling, particularly targeting the Conditional Average Treatment Effect (CATE) using meta-learner techniques. These approaches often aim to identify causal effects from observational data. On the other hand, the field of Explainable Machine Learning (XML) has gained traction, with various approaches developed to explain complex models and make their predictions more interpretable. A prominent technique in this area is Shapley Additive Explanations (SHAP), which has become mainstream in data science for analyzing supervised learning models. However, there has been limited exploration of SHAP's application in identifying predictive biomarkers through CATE models, a crucial aspect in pharmaceutical precision medicine. We address inherent challenges associated with the SHAP concept in multi-stage CATE strategies and introduce a surrogate estimation approach that is agnostic to the choice of CATE strategy, effectively reducing computational burdens in high-dimensional data. Using this approach, we conduct simulation benchmarking to evaluate the ability to accurately identify biomarkers using SHAP values derived from various CATE meta-learners and Causal Forest.
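
The sketch below, which assumes the scikit-learn and shap packages, shows one simple way SHAP values can be read off a second-stage CATE model fitted to T-learner estimates on synthetic data; it is not the surrogate estimation approach proposed in the article, and all variable names and settings are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
import shap  # assumes the shap package is installed

rng = np.random.default_rng(0)
n, p = 2000, 6
X = rng.normal(size=(n, p))
trt = rng.integers(0, 2, size=n)            # randomized treatment
tau = 1.0 * (X[:, 0] > 0)                   # X0 is the true predictive biomarker
y = X[:, 1] + trt * tau + rng.normal(scale=1.0, size=n)

# T-learner: fit separate outcome models per arm, take the difference.
m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[trt == 1], y[trt == 1])
m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[trt == 0], y[trt == 0])
cate_hat = m1.predict(X) - m0.predict(X)

# Second-stage "CATE model": regress the CATE estimates on covariates, then
# explain that model with SHAP to rank candidate predictive biomarkers.
cate_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, cate_hat)
shap_values = shap.TreeExplainer(cate_model).shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
print("mean |SHAP| per covariate:", np.round(importance, 3))  # X0 should dominate
```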

Citations: 0
Bayesian Variable Selection for High-Dimensional Mediation Analysis: Application to Metabolomics Data in Epidemiological Studies.
IF 1.8, Zone 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-01-01, DOI: 10.1002/sim.70365
Youngho Bae, Chanmin Kim, Fenglei Wang, Qi Sun, Kyu Ha Lee

This research is motivated by integrated epidemiological and blood biomarker studies, investigating the relationship between long-term adherence to a Mediterranean diet and cardiometabolic health, with plasma metabolomes as potential mediators. Analyzing causal mediation in high-dimensional omics data presents challenges, including complex dependencies among mediators and the need for advanced regularization or Bayesian techniques to ensure stable and interpretable estimation and selection of indirect effects. To this end, we propose a novel Bayesian framework to identify active pathways and estimate indirect effects in high-dimensional mediation analysis. Central to our method is the introduction of a set of priors for the selection indicators in the mediator and outcome models. A Markov random field prior leverages mediator correlations, enhancing power in detecting mediated effects. Sequential subsetting priors encourage simultaneous selection of relevant mediators and their indirect effects, ensuring a more coherent and efficient variable selection framework. Comprehensive simulation studies demonstrate that the proposed method provides superior power in detecting active mediating pathways. We further illustrate the practical utility of the method by applying it to metabolome data from two sub-studies within the Health Professionals Follow-up Study and Nurses' Health Study II, highlighting its effectiveness in a real-data setting.
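
For readers unfamiliar with the building block, the toy sketch below computes a single indirect effect as the product of the mediator-model and outcome-model coefficients on simulated data; the paper's high-dimensional Bayesian selection priors (Markov random field and sequential subsetting) are not reproduced, and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                        # exposure (e.g., diet adherence score)
m = 0.6 * x + rng.normal(size=n)              # mediator (e.g., one metabolite)
y = 0.4 * m + 0.2 * x + rng.normal(size=n)    # outcome (e.g., cardiometabolic marker)

# Mediator model m ~ x: coefficient alpha on the exposure.
alpha = np.linalg.lstsq(np.column_stack([np.ones(n), x]), m, rcond=None)[0][1]
# Outcome model y ~ m + x: coefficient beta on the mediator.
beta = np.linalg.lstsq(np.column_stack([np.ones(n), m, x]), y, rcond=None)[0][1]

print(f"indirect effect alpha*beta = {alpha * beta:.3f} (truth 0.24)")
# The paper's setting has many correlated mediators; its Bayesian selection
# priors decide which alpha_j * beta_j products are truly nonzero.
```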

Citations: 0
Confidence Interval Construction for Causally Generalized Estimates With Target Sample Summary Information.
IF 1.8, Zone 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-01-01, DOI: 10.1002/sim.70358
Yi Chen, Guanhua Chen, Menggang Yu

Generalizing causal findings, such as the average treatment effect (ATE), from a source to a target population is a critical topic in biomedical research. Differences in the distributions of treatment effect modifiers between these populations, known as covariate shift, can lead to varying ATEs. Chen et al. [1] introduced a weighting method to estimate the target ATE using only summary-level information from a target sample while accounting for the possible covariate shifts. However, the asymptotic variance of the estimate was shown to depend on individual-level data from the target sample, hindering statistical inference. In this article, we propose a resampling-based perturbation method for confidence interval construction for the estimated target ATE, utilizing additional summary-level information. We demonstrate the effectiveness of our approach through simulation and real data settings when only summary-level information is available.
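
The sketch below illustrates generic perturbation resampling of the kind referred to in the abstract, multiplying each subject's contribution to a weighted ATE estimator by independent Exp(1) draws and taking percentile limits; the estimator, weights, and data are stand-ins rather than the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy source-sample data: outcome, treatment indicator, and generalization
# weights that re-weight the source sample toward the target population.
n = 800
y = rng.normal(size=n)
a = rng.integers(0, 2, size=n)
y = y + 0.5 * a                               # true ATE = 0.5
w = rng.gamma(shape=2.0, scale=0.5, size=n)   # stand-in generalization weights

def weighted_ate(y, a, w, pert):
    """Weighted difference in means with per-subject perturbation multipliers."""
    u = w * pert
    return np.average(y[a == 1], weights=u[a == 1]) - np.average(y[a == 0], weights=u[a == 0])

est = weighted_ate(y, a, w, np.ones(n))

# Perturbation resampling: multiply each subject's contribution by an
# independent Exp(1) draw and recompute the estimator many times.
B = 2000
reps = np.array([weighted_ate(y, a, w, rng.exponential(1.0, n)) for _ in range(B)])
lo, hi = np.percentile(reps, [2.5, 97.5])
print(f"estimate = {est:.3f}, 95% perturbation CI = ({lo:.3f}, {hi:.3f})")
```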

Citations: 0
Sequential Parallel Comparison Design for Assessing Induction, Maintenance, Long-Term, and Other Treatment Effects on a Binary Endpoint.
IF 1.8, Zone 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-01-01, DOI: 10.1002/sim.70382
Hui Quan, Zhixing Xu, Xun Chen

For a chronic disease, besides the treatment induction effect, it is also important to demonstrate the maintenance effect of long-term treatment use. To fulfill these and other objectives for a clinical study, we often apply one of three designs: the active-treatment lead-in followed by randomized maintenance design, the randomized induction followed by re-randomized withdrawal maintenance design, and the treat-through design (FDA 2022). Separately, a two-stage sequential parallel comparison design (SPCD) is frequently used in therapeutic areas where placebo has a large effect. In this paper, we use an SPCD for a clinical trial with a binary endpoint for induction, maintenance, long-term, and other treatment effect assessments. This SPCD can actually be treated as a hybrid of the above three designs and has some additional advantages. For example, compared to the re-randomized withdrawal maintenance design, the SPCD does not need a re-randomization, which simplifies trial operation, and it also provides controlled data for formal long-term efficacy and safety analyses. To fully utilize all available data from the two stages for an overall treatment effect evaluation, a weighted combination test that incorporates the correlations among the components is considered. Further, a multiple imputation approach is applied to handle data that are missing not at random. Simulations are conducted to evaluate the performance of the methods, and a data example is employed to illustrate their application.
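
A minimal sketch of a weighted inverse-normal combination of stage-wise z-statistics is given below; it assumes independent stages and pre-specified weights, and therefore omits the correlation adjustment that the SPCD analysis in the paper builds in. The stage-wise values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def weighted_combination_z(z1, z2, w1=0.5, w2=0.5):
    """Inverse-normal combination of independent stage-wise z-statistics
    with pre-specified weights satisfying w1 + w2 = 1."""
    return np.sqrt(w1) * z1 + np.sqrt(w2) * z2

# Hypothetical stage-wise results for the binary endpoint:
z_stage1, z_stage2 = 1.40, 1.75
z_comb = weighted_combination_z(z_stage1, z_stage2)
p_comb = norm.sf(z_comb)
print(f"combined Z = {z_comb:.3f}, one-sided p = {p_comb:.4f}")
# In the SPCD setting of the paper, the two components are correlated because
# stage 2 re-uses stage-1 placebo non-responders, so the weights and the null
# distribution must account for that correlation.
```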

Citations: 0
A Saddlepoint Framework for Accurate Inference in Multicenter Clinical Trials With Imbalanced Clusters.
IF 1.8, Zone 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-01-01, DOI: 10.1002/sim.70408
Haidy A Newer

Statistical inference in multicenter clinical trials is often compromised when relying on asymptotic normal approximations, particularly in designs characterized by a small number of centers or severe imbalance in patient enrollment. Such deviations from asymptotic assumptions frequently result in unreliable p-values and a breakdown of error control. To resolve this, we introduce a high-precision saddlepoint approximation framework for aggregate permutation tests within hierarchically structured data. The theoretical core of our approach is the derivation of a multilevel nested cumulant generating function that explicitly models the trial hierarchy, analytically integrating patient-level linear rank statistics with the stochastic aggregation process across centers. A significant innovation of this work is the extension to the bivariate setting to address co-primary endpoints, providing a robust inferential solution for mixed continuous (efficacy) and discrete (safety) outcomes where standard multivariate normality is unattainable. The resulting framework yields simulation-free, highly accurate tail probabilities even in finite-sample regimes. Extensive simulation studies confirm that our method maintains strict Type I error control in scenarios where asymptotic methods exhibit substantial inflation. Furthermore, an application to the multicenter diabetes prevention program trial demonstrates the method's practical utility: it correctly identifies a significant cardiovascular risk factor that standard approximations failed to detect, thereby preventing a critical Type II error and ensuring valid clinical conclusions.
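
As background, the sketch below applies the generic Lugannani-Rice saddlepoint tail approximation to a sum of Exp(1) variables, where the saddlepoint equation has a closed-form solution and the exact gamma tail is available for comparison; it does not implement the paper's multilevel nested cumulant generating function.

```python
import numpy as np
from scipy.stats import norm, gamma

def saddlepoint_tail_exp_sum(x, n):
    """Lugannani-Rice approximation to P(S_n >= x) for S_n a sum of n
    independent Exp(1) variables, with CGF K(s) = -n*log(1 - s), s < 1.
    (A limiting form is needed when x equals the mean n; not handled here.)"""
    s_hat = 1.0 - n / x                      # solves K'(s) = n / (1 - s) = x
    K = -n * np.log(1.0 - s_hat)
    K2 = n / (1.0 - s_hat) ** 2              # K''(s_hat)
    w = np.sign(s_hat) * np.sqrt(2.0 * (s_hat * x - K))
    u = s_hat * np.sqrt(K2)
    return norm.sf(w) + norm.pdf(w) * (1.0 / u - 1.0 / w)

n, x = 10, 16.0
print(f"saddlepoint: {saddlepoint_tail_exp_sum(x, n):.5f}")
print(f"exact gamma: {gamma.sf(x, a=n):.5f}")
```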

Citations: 0
Informative Futility Rules Based on Conditional Assurance.
IF 1.8, Zone 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-01-01, DOI: 10.1002/sim.70330
Vladimir Dragalin

For the pharmaceutical industry, the main utility of futility rules is to allow early stopping of a trial when it seems unlikely to achieve its primary efficacy objectives, and they are mainly motivated by financial and ethical considerations. After a brief overview of available approaches to setting a futility rule, I will illustrate, using a case study, different rules based on conditional power, predictive probability of success, and Bayesian predictive probability of success, and will emphasize the main shortcomings that arise when using these measures, especially in sample size re-estimation designs. As an alternative, I propose the conditional assurance, that is, the probability of achieving success at the final analysis given that the study was not stopped for futility. It depends on the interim sample size, the sample size at the final analysis, and the threshold for the futility rule, but it does not require knowledge of the observed treatment effect estimate at the interim analysis. This makes the conditional assurance very appropriate for building informative futility rules. It balances the probability of stopping for futility (when there is no treatment effect), conditional assurance, and overall power. Decision makers can better understand the levels of risk associated with stopping for futility and make informed decisions about where to spend risk based on what is acceptable to the organization.
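
The simulation sketch below illustrates the conditional assurance idea as described here, the probability of final success given that the trial passes an interim futility threshold, averaged over a prior on the treatment effect; the normal-endpoint setup, prior, per-arm sample sizes, and thresholds are all hypothetical choices rather than the case study's values.

```python
import numpy as np

rng = np.random.default_rng(42)

def conditional_assurance(n_interim, n_final, z_futility, z_final=1.96,
                          prior_mean=0.3, prior_sd=0.15, sims=200_000, sigma=1.0):
    """Simulate P(final success | not stopped for futility at interim) under a
    normal prior on the treatment-effect difference delta (per-arm sample sizes)."""
    delta = rng.normal(prior_mean, prior_sd, sims)
    se_int = sigma * np.sqrt(2.0 / n_interim)
    se_fin = sigma * np.sqrt(2.0 / n_final)
    diff_int = rng.normal(delta, se_int)                  # interim estimate
    # Final estimate pools the interim data with the independent stage-2 increment.
    extra = n_final - n_interim
    diff_extra = rng.normal(delta, sigma * np.sqrt(2.0 / extra))
    diff_fin = (n_interim * diff_int + extra * diff_extra) / n_final
    cont = diff_int / se_int > z_futility                 # continue past interim
    success = diff_fin / se_fin > z_final
    return np.mean(success[cont]), np.mean(~cont)

ca, p_stop = conditional_assurance(n_interim=50, n_final=150, z_futility=0.0)
print(f"conditional assurance = {ca:.3f}, P(stop for futility under the prior) = {p_stop:.3f}")
```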

Citations: 0
Nonparametric Bayesian Adjustment of Unmeasured Confounders in Cox Proportional Hazards Models.
IF 1.8, Zone 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-01-01, DOI: 10.1002/sim.70360
Shunichiro Orihara, Shonosuke Sugasawa, Tomohiro Ohigashi, Keita Hirano, Tomoyuki Nakagawa, Masataka Taguri

Unmeasured confounders pose a major challenge in accurately estimating causal effects in observational studies. To address this issue when estimating hazard ratios (HRs) using Cox proportional hazards models, several methods, including instrumental variables (IVs) approaches, have been proposed. However, these methods often face limitations, such as weak IV problems and restrictive assumptions regarding unmeasured confounder distributions. In this study, we introduce a novel nonparametric Bayesian procedure that provides accurate HR estimates while addressing these limitations. A key assumption of our approach is that unmeasured confounders exhibit a cluster structure. Under this assumption, we integrate two remarkable Bayesian techniques, the Dirichlet process mixture (DPM) and general Bayes (GB), to simultaneously (1) detect latent clusters based on the likelihood of exposure and outcome variables and (2) estimate HRs using the likelihood constructed within each cluster. Notably, leveraging DPM, our procedure eliminates the need for IVs by identifying unmeasured confounders under an alternative condition. Additionally, GB techniques remove the need for explicit modeling of the baseline hazard function, distinguishing our procedure from traditional Bayesian approaches. Simulation experiments demonstrate that the proposed Bayesian procedure outperforms existing methods in some performance metrics. Moreover, it achieves statistical efficiency comparable to the efficient estimator while accurately identifying cluster structures. These features highlight its ability to overcome challenges associated with traditional IV approaches for time-to-event data.
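
The sketch below shows only the latent-cluster detection ingredient, using scikit-learn's truncated Dirichlet-process mixture (BayesianGaussianMixture) on simulated covariate and exposure data; the general-Bayes Cox step that the authors build within each cluster is indicated only in a comment, and all data and settings are hypothetical.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)

# Hypothetical data: an unmeasured confounder U defines two latent clusters
# that shift both the exposure A and (not simulated here) the outcome hazard.
n = 1000
u = rng.integers(0, 2, size=n)                       # latent cluster label
x = rng.normal(loc=2.0 * u, scale=1.0, size=n)       # measured covariate
a = rng.normal(loc=0.5 * u, scale=1.0, size=n)       # exposure influenced by U

# Truncated Dirichlet-process mixture over (covariate, exposure): the number
# of occupied components is inferred rather than fixed in advance.
dpm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(np.column_stack([x, a]))
labels = dpm.predict(np.column_stack([x, a]))
print("occupied components:", np.unique(labels))
print("estimated component weights:", np.round(dpm.weights_, 2))

# The paper then estimates a hazard ratio from a general-Bayes criterion built
# within each detected cluster, which is what removes the need for an
# instrumental variable.
```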

Citations: 0
Finite Mixtures of Multivariate t Linear Mixed-Effects Models for Censored Longitudinal Data With Concomitant Covariates.
IF 1.8, Zone 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-01-01, DOI: 10.1002/sim.70392
Tsung-I Lin, Wan-Lun Wang

Clustering longitudinal biomarkers in clinical trials uncovers associations between clinical outcomes, disease progression, and treatment effects. Finite mixtures of multivariate t linear mixed-effects (FM-MtLME) models have proven effective for modeling and clustering multiple longitudinal trajectories that exhibit grouped patterns with strong within-group similarity. Motivated by an AIDS study with plasma viral loads measured under assay-specific detection limits, this article extends the FM-MtLME model to account for censored outcomes. The proposed model is called the FM-MtLME with censoring (FM-MtLMEC). To allow covariate-dependent mixing proportions, we further extend it with a logistic link, resulting in the EFM-MtLMEC model. Two efficient EM-based algorithms are developed for parameter estimation of both the FM-MtLMEC and EFM-MtLMEC models. The utility of our methods is demonstrated through comprehensive analyses of the AIDS data and simulation studies.
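
To illustrate how detection-limit censoring enters a t likelihood (one small ingredient of the FM-MtLMEC machinery), the sketch below fits a univariate Student-t model by maximum likelihood, using densities for observed values and CDF values for left-censored ones; the simulated data, detection limit, and fixed degrees of freedom are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import t as student_t
from scipy.optimize import minimize

rng = np.random.default_rng(11)

# Simulate viral-load-like data with a detection limit: values at or below
# `lod` are only known to be <= lod (left-censored).
n, df_true, mu_true, sd_true, lod = 400, 5, 2.0, 1.0, 1.0
y_full = mu_true + sd_true * rng.standard_t(df_true, size=n)
censored = y_full <= lod
y_obs = np.where(censored, lod, y_full)

def neg_loglik(params, y, cens, df=5):
    mu, log_sd = params
    sd = np.exp(log_sd)
    ll_obs = student_t.logpdf(y[~cens], df, loc=mu, scale=sd).sum()
    ll_cen = student_t.logcdf(y[cens], df, loc=mu, scale=sd).sum()  # P(Y <= lod)
    return -(ll_obs + ll_cen)

fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(y_obs, censored))
print("estimated mu, sd:", round(fit.x[0], 3), round(float(np.exp(fit.x[1])), 3))
```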

Citations: 0
Dual-Criterion Approach Incorporating Historical Information to Seek Accelerated Approval With Application in Time-to-Event Group Sequential Trials.
IF 1.8, Zone 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-01-01, DOI: 10.1002/sim.70361
Marco Ratta, Gaëlle Saint-Hilary, Valentine Barboux, Mauro Gasparini, Donia Skanji, Pavel Mozgunov

The urgency of delivering novel, effective treatments against life-threatening diseases has led various health authorities to allow for Accelerated Approvals (AAs). AA is the "fast track" program where promising treatments are evaluated based on surrogate (short-term) endpoints likely to predict clinical benefit. This allows treatments to receive early approval, subject to providing further evidence of efficacy, for example, on the primary (long-term) endpoint. Despite this procedure being quite consolidated, a number of conditionally approved treatments do not obtain full approval (FA), mainly due to a lack of correlation between the surrogate and the primary endpoint. This implies a need to improve the criteria for controlling the risk of AAs for noneffective treatments, while maximizing the chance of AAs for effective ones. We first propose a novel adaptive group sequential design that includes an early dual-criterion "Accelerated Approval" interim analysis, where efficacy on a surrogate endpoint is tested jointly with a predictive metric based on the primary endpoint. Second, we explore how the predictive criterion may be strengthened by borrowing historical information, in particular using: (i) historical control data on the primary endpoint, and (ii) the estimated historical relationship between the surrogate and the primary endpoints. We propose various metrics to characterize the risk of correct and incorrect early AAs and demonstrate how the proposed design allows explicit control of these risks, with particular attention to the family-wise error rate (FWER). The methodology is then evaluated through a simulation study motivated by a Phase III trial in metastatic colorectal cancer (mCRC).
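
The sketch below gives a rough rendering of the dual-criterion gate: accelerated approval is recommended at the interim only if the surrogate endpoint clears a significance boundary and the predictive probability of eventual success on the primary endpoint exceeds a threshold. It uses a normal-approximation B-value argument with a flat prior on the drift; the boundaries, information fraction, interim z-values, and the omission of historical borrowing are all simplifying assumptions, not the paper's calibrated design.

```python
import numpy as np

rng = np.random.default_rng(5)

def bayesian_predictive_power(z_interim, t, z_crit=1.96, sims=200_000):
    """Predictive probability that the final-analysis z-statistic crosses
    z_crit, given the interim z at information fraction t, via the Brownian
    B-value representation and a flat prior on the drift."""
    b1 = z_interim * np.sqrt(t)                            # B(t) = Z(t) * sqrt(t)
    theta = rng.normal(b1 / t, np.sqrt(1.0 / t), sims)     # posterior drift
    b_final = b1 + rng.normal(theta * (1.0 - t), np.sqrt(1.0 - t), sims)
    return np.mean(b_final > z_crit)                       # Z(1) = B(1)

# Hypothetical interim results:
z_surrogate = 2.45                    # surrogate (short-term) endpoint z-statistic
z_primary, info_frac = 1.10, 0.35     # primary endpoint z at 35% information

pp = bayesian_predictive_power(z_primary, info_frac)
aa_granted = (z_surrogate > 2.17) and (pp > 0.60)          # both thresholds illustrative
print(f"predictive probability on primary endpoint = {pp:.3f}")
print("accelerated approval recommended at interim:", aa_granted)
```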

Citations: 0