
Latest Publications in Statistics in Medicine

Dirichlet Distribution Parameter Estimation With Applications in Microbiome Analyses.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-02-01 DOI: 10.1002/sim.70454
Daniel T Fuller, Sumona Mondal, Shantanu Sur, Nabendu Pal

Microbiome analysis is the process of identifying the composition and function of a community of microorganisms in a particular location, which is essential in understanding human and environmental health. Properly quantifying microbial composition, however, remains challenging and relies on statistical modeling of either the raw taxonomic abundances or the relative abundances. Relative abundance measures are commonly preferred over absolute abundances for microbiome analysis because absolute abundance values depend on the sequencing depth and sequencing method. Despite this, the literature on modeling relative abundances with a meaningful probability distribution, followed by formal statistical inference, is limited. In this work, the Dirichlet distribution is proposed to model the relative abundances of taxa directly, without any further transformation (e.g., additive log-ratio transform, isometric log-ratio transform). In a comprehensive simulation study, we compare the biases and standard errors of two method-of-moments estimators (MMEs) and the maximum likelihood estimator (MLE) of the Dirichlet distribution. The comparison is carried out over three cases of differing sample size and dimension: (i) small dimension and small sample size; (ii) small dimension and large sample size; (iii) large dimension with both small and large sample sizes. As expected, the MLE shows the best overall performance because it is based on the (minimal) sufficient statistic and therefore incurs no loss of information. We then explore the asymptotic properties of the MLE using the Fisher information alongside our simulation results. We demonstrate the applicability of the Dirichlet modeling methodology with four real-world microbiome datasets and show how the estimated mean relative abundances obtained from the Dirichlet MLE (DMLE) differ from those obtained by a commonly used method, the Bayesian Dirichlet-multinomial estimator (BDME), which works with absolute abundances. For all four datasets, the DMLE results are comparable to the BDME results while requiring much less computational time, both for single analyses and for large simulations.
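As a concrete illustration of the estimation pipeline described above, the following Python sketch fits a Dirichlet distribution to relative-abundance data using a method-of-moments start refined by numerical maximum likelihood. It is a minimal example on synthetic data with a generic SciPy optimizer, assuming strictly positive compositions; it is not the authors' implementation or their specific MME variants.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import dirichlet

def dirichlet_mme(p):
    """Method-of-moments start: match component means and a precision implied by taxon 0."""
    m = p.mean(axis=0)                         # mean relative abundance per taxon
    v = p.var(axis=0, ddof=1)                  # sample variance per taxon
    alpha0 = m[0] * (1.0 - m[0]) / v[0] - 1.0  # implied Dirichlet precision
    return m * alpha0

def dirichlet_mle(p):
    """MLE by maximizing the Dirichlet log-likelihood, started at the MME."""
    a_start = np.clip(dirichlet_mme(p), 1e-3, None)
    def negloglik(log_alpha):
        alpha = np.exp(log_alpha)              # log-parameterization keeps alpha > 0
        return -np.sum(dirichlet.logpdf(p.T, alpha))
    res = minimize(negloglik, np.log(a_start), method="L-BFGS-B")
    return np.exp(res.x)

# toy usage: 200 compositions from a 4-taxon Dirichlet (rows sum to 1)
rng = np.random.default_rng(1)
p = rng.dirichlet([2.0, 5.0, 1.0, 0.5], size=200)
alpha_hat = dirichlet_mle(p)
print("alpha:", alpha_hat.round(2),
      "mean relative abundances:", (alpha_hat / alpha_hat.sum()).round(3))
```

Starting the likelihood optimization at a moment estimate is a common way to stabilize the MLE when some taxa are rare; the estimated mean relative abundances are simply the normalized alpha components.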

Citations: 0
A Bayesian Treatment Selection Design for Phase II Randomised Cancer Clinical Trials.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-02-01 DOI: 10.1002/sim.70444
Moka Komaki, Satoru Shinoda, Haiyan Zheng, Kouji Yamamoto

It is crucial to design Phase II cancer clinical trials that balance the efficiency of treatment selection with clinical practicality. Sargent and Goldberg proposed a frequentist design that allows decision-making even when the primary endpoint is ambiguous. However, frequentist approaches rely on fixed thresholds and long-run frequency properties, which can limit flexibility in practical applications. In contrast, the Bayesian decision rule, based on posterior probabilities, enables transparent decision-making by incorporating prior knowledge and updating beliefs with new data, addressing some of the inherent limitations of frequentist designs. In this study, we propose a novel Bayesian design that allows selection of the best-performing treatment. Specifically, for phase II clinical trials with a binary outcome, our decision rule employs a posterior probability obtained by integrating the joint posterior distribution over the region in which the 'success rate' of the best-performing treatment exceeds those of the others. Given predefined decision thresholds, the design can then determine which treatment should proceed to the next phase. Furthermore, we propose two sample size determination methods to support such treatment selection designs implemented in a Bayesian framework. Through simulation studies and real-data applications, we demonstrate how this approach can overcome challenges related to sample size constraints in randomised trials. In addition, we present a user-friendly R Shiny application that enables clinicians to implement these Bayesian designs. Both our methodology and the software application can advance the design and analysis of clinical trials for evaluating cancer treatments.
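The central decision quantity, the posterior probability that each arm has the highest 'success rate', can be approximated by Monte Carlo with conjugate Beta posteriors. The sketch below is a simplified illustration with assumed interim counts, a Beta(1, 1) prior, and an illustrative threshold; it is not the paper's exact rule or its sample-size calibration.

```python
import numpy as np

def prob_each_arm_best(successes, totals, a0=1.0, b0=1.0, n_draws=100_000, seed=0):
    """Posterior probability that each arm's response rate exceeds all others."""
    rng = np.random.default_rng(seed)
    successes, totals = np.asarray(successes), np.asarray(totals)
    # independent Beta posteriors: Beta(a0 + successes, b0 + failures) for each arm
    draws = rng.beta(a0 + successes, b0 + (totals - successes),
                     size=(n_draws, len(successes)))
    best = np.argmax(draws, axis=1)            # arm with the largest sampled rate per draw
    return np.bincount(best, minlength=len(successes)) / n_draws

# toy data for two experimental arms: 14/30 vs. 9/30 responders
p_best = prob_each_arm_best([14, 9], [30, 30])
threshold = 0.80                               # illustrative decision cut-off
winner = int(np.argmax(p_best)) if p_best.max() > threshold else None
print("P(best) per arm:", p_best.round(3), "-> selected arm:", winner)
```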

Citations: 0
Bayesian Response-Adaptive Randomization for Cluster Randomized Controlled Trials.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-01-01 DOI: 10.1002/sim.70386
Yunyi Liu, Maile Young Karris, Sonia Jain

Cluster randomized controlled trials, in which groups (or clusters) of individuals rather than single individuals are randomized, are especially useful when individual-level randomization is not feasible or when interventions are naturally delivered at the group level. Balanced randomization in the cluster randomized trial setting can pose logistical challenges and strain resources if subjects are randomized to a non-optimal arm. We propose a Bayesian response-adaptive randomization design for cluster randomized controlled trials based on Thompson sampling, which dynamically allocates clusters to the most efficacious treatment arm based on the interim posterior distributions of treatment effects obtained via Markov chain Monte Carlo sampling. Our design also incorporates early stopping rules for efficacy and futility determined by prespecified posterior probability thresholds. The performance of the proposed design is evaluated across various operating characteristics under multiple settings, including varying intra-cluster correlation coefficients, cluster sizes, and effect sizes. Our adaptive approach is also compared with a standard, parallel two-arm cluster randomized controlled clinical trial design, highlighting improvements in both ethical considerations and efficiency. In simulation studies based on an HIV behavioral trial, we demonstrate these improvements by preferentially assigning more clusters to the more efficacious intervention while maintaining robust statistical power and controlling false positive rates.
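A stripped-down sketch of the allocation engine follows: Thompson sampling assigns each newly enrolled cluster to the arm whose Beta-posterior draw is largest. The response rates, cluster size, and conjugate (independent-patient) updating are simplifying assumptions; the design in the paper uses MCMC posteriors that account for intra-cluster correlation and adds early stopping rules.

```python
import numpy as np

rng = np.random.default_rng(7)
true_rates = [0.35, 0.55]              # hypothetical control vs. intervention response rates
cluster_size, n_clusters = 20, 30
succ = np.ones(2)                      # Beta(1, 1) prior successes per arm
fail = np.ones(2)                      # Beta(1, 1) prior failures per arm
allocation = []

for _ in range(n_clusters):
    draw = rng.beta(succ, fail)        # one posterior draw per arm
    arm = int(np.argmax(draw))         # Thompson sampling: assign to the sampled best
    allocation.append(arm)
    y = rng.binomial(cluster_size, true_rates[arm])   # outcomes for the new cluster
    succ[arm] += y
    fail[arm] += cluster_size - y

print("clusters per arm:", np.bincount(allocation, minlength=2))
```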

Citations: 0
An Improved Bayesian Pick-the-Winner (IBPW) Design for Randomized Phase II Clinical Trials.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-01-01 DOI: 10.1002/sim.70348
Wanni Lei, Maosen Peng, Nasser Altorki, Xi Kathy Zhou

Phase II clinical trials play a pivotal role in drug development by screening a large number of drug candidates to identify those with promising preliminary efficacy for phase III testing. Trial designs that enable efficient decision-making with small sample sizes and early futility stopping while controlling type I and type II errors in hypothesis testing, such as Simon's two-stage design, are preferred. Randomized multi-arm trials are increasingly used in phase II settings to overcome the limitations associated with using historical controls as the reference. However, how to effectively balance efficiency and accurate decision-making continues to be an important research topic. A notable development in phase II randomized design methodology is the Bayesian pick-the-winner (BPW) design proposed by Chen et al. [1]. Despite its multiple appealing features, this method cannot easily control the overall type I and type II errors for winner selection. Here, we introduce an improved randomized two-stage Bayesian pick-the-winner (IBPW) design that formalizes winner-selection-based hypothesis testing and optimizes sample sizes and decision cut-offs by strictly controlling the type I and type II errors under a set of flexible hypotheses for winner selection across two treatment arms. Simulation studies demonstrate that our new design offers improved operating characteristics for winner selection while retaining the desirable features of the BPW design.
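Operating characteristics such as the overall probability of falsely declaring a winner are typically calibrated by simulation. The sketch below illustrates that kind of check for a hypothetical single-look posterior-probability winner rule; the prior, null rate, and threshold are placeholders rather than the IBPW specification, which additionally uses a two-stage structure with futility stopping.

```python
import numpy as np

def declares_winner(y, n, p0=0.20, cut=0.90, n_draws=4000, rng=None):
    """Declare the arm with more responders the winner if, a posteriori,
    P(its response rate > p0) exceeds `cut` under a Beta(1, 1) prior."""
    rng = rng or np.random.default_rng()
    best = int(np.argmax(y))
    draws = rng.beta(1 + y[best], 1 + n - y[best], size=n_draws)
    return np.mean(draws > p0) > cut

rng = np.random.default_rng(11)
n, p_null, n_sim = 40, 0.20, 2000
false_wins = sum(
    declares_winner(rng.binomial(n, p_null, size=2), n, rng=rng) for _ in range(n_sim)
)
print("estimated false-winner rate:", false_wins / n_sim)   # tune `cut` and n to control this
```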

Citations: 0
Overview and Practical Recommendations on Using Shapley Values for Identifying Predictive Biomarkers via CATE Modeling.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-01-01 DOI: 10.1002/sim.70375
David Svensson, Erik Hermansson, Nikolaos Nikolaou, Konstantinos Sechidis, Ilya Lipkovich

In recent years, two parallel research trends have emerged in machine learning, yet their intersections remain largely unexplored. On one hand, there has been a significant increase in literature focused on Individual Treatment Effect (ITE) modeling, particularly targeting the Conditional Average Treatment Effect (CATE) using meta-learner techniques. These approaches often aim to identify causal effects from observational data. On the other hand, the field of Explainable Machine Learning (XML) has gained traction, with various approaches developed to explain complex models and make their predictions more interpretable. A prominent technique in this area is Shapley Additive Explanations (SHAP), which has become mainstream in data science for analyzing supervised learning models. However, there has been limited exploration of SHAP's application in identifying predictive biomarkers through CATE models, a crucial aspect in pharmaceutical precision medicine. We address inherent challenges associated with the SHAP concept in multi-stage CATE strategies and introduce a surrogate estimation approach that is agnostic to the choice of CATE strategy, effectively reducing computational burdens in high-dimensional data. Using this approach, we conduct simulation benchmarking to evaluate the ability to accurately identify biomarkers using SHAP values derived from various CATE meta-learners and Causal Forest.
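The surrogate idea can be illustrated in a few lines: estimate the CATE with any meta-learner, fit one supervised model to those CATE predictions, and run SHAP on that single surrogate so that only one model needs explaining regardless of the CATE strategy. The example below uses a T-learner with gradient boosting and the `shap` package on synthetic data; it is a generic illustration of the workflow, not the authors' benchmarking code.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
n, p = 2000, 10
X = rng.normal(size=(n, p))
T = rng.integers(0, 2, size=n)
# synthetic data: feature 0 is the only treatment effect modifier, feature 1 is prognostic only
y = X[:, 1] + T * (1.0 + 2.0 * X[:, 0]) + rng.normal(size=n)

# T-learner: separate outcome models for treated and control, CATE = difference in predictions
m1 = GradientBoostingRegressor().fit(X[T == 1], y[T == 1])
m0 = GradientBoostingRegressor().fit(X[T == 0], y[T == 0])
cate_hat = m1.predict(X) - m0.predict(X)

# single surrogate model for the CATE, explained once with TreeSHAP
surrogate = GradientBoostingRegressor().fit(X, cate_hat)
shap_values = shap.TreeExplainer(surrogate).shap_values(X)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0).round(2))
```

In this toy setup the largest mean absolute SHAP value should land on feature 0, the predictive biomarker, while purely prognostic features receive little attribution on the CATE surface.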

Citations: 0
Bayesian Variable Selection for High-Dimensional Mediation Analysis: Application to Metabolomics Data in Epidemiological Studies.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-01-01 DOI: 10.1002/sim.70365
Youngho Bae, Chanmin Kim, Fenglei Wang, Qi Sun, Kyu Ha Lee

This research is motivated by integrated epidemiological and blood biomarker studies, investigating the relationship between long-term adherence to a Mediterranean diet and cardiometabolic health, with plasma metabolomes as potential mediators. Analyzing causal mediation in high-dimensional omics data presents challenges, including complex dependencies among mediators and the need for advanced regularization or Bayesian techniques to ensure stable and interpretable estimation and selection of indirect effects. To this end, we propose a novel Bayesian framework to identify active pathways and estimate indirect effects in high-dimensional mediation analysis. Central to our method is the introduction of a set of priors for the selection indicators in the mediator and outcome models. A Markov random field prior leverages mediator correlations, enhancing power in detecting mediated effects. Sequential subsetting priors encourage simultaneous selection of relevant mediators and their indirect effects, ensuring a more coherent and efficient variable selection framework. Comprehensive simulation studies demonstrate that the proposed method provides superior power in detecting active mediating pathways. We further illustrate the practical utility of the method by applying it to metabolome data from two sub-studies within the Health Professionals Follow-up Study and Nurses' Health Study II, highlighting its effectiveness in a real-data setting.
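For intuition about what identifying 'active pathways' involves, the sketch below runs a much simpler frequentist analogue on synthetic data: estimate exposure-to-mediator coefficients and mediator-to-outcome coefficients, then rank candidate pathways by the product-of-coefficients indirect effect. The spike-and-slab indicators, Markov random field prior, and sequential subsetting priors of the actual method are not implemented here; all names and dimensions are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV

rng = np.random.default_rng(5)
n, p = 500, 200                           # subjects, candidate metabolite mediators
x = rng.normal(size=n)                    # exposure (e.g., a diet adherence score)
alpha = np.zeros(p); alpha[:3] = 0.8      # only the first 3 mediators are truly active
M = np.outer(x, alpha) + rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = 0.6
y = M @ beta + 0.3 * x + rng.normal(size=n)

# alpha_hat: per-mediator regression of mediator on exposure
alpha_hat = np.array([LinearRegression().fit(x[:, None], M[:, j]).coef_[0]
                      for j in range(p)])
# beta_hat: sparse outcome model with all mediators plus the exposure
design = np.column_stack([M, x])
beta_hat = LassoCV(cv=5).fit(design, y).coef_[:p]

indirect = alpha_hat * beta_hat           # product-of-coefficients indirect effects
top = np.argsort(-np.abs(indirect))[:5]
print("top candidate mediators:", top, indirect[top].round(2))
```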

Citations: 0
Confidence Interval Construction for Causally Generalized Estimates With Target Sample Summary Information.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-01-01 DOI: 10.1002/sim.70358
Yi Chen, Guanhua Chen, Menggang Yu

Generalizing causal findings, such as the average treatment effect (ATE), from a source to a target population is a critical topic in biomedical research. Differences in the distributions of treatment effect modifiers between these populations, known as covariate shift, can lead to varying ATEs. Chen et al. [1] introduced a weighting method to estimate the target ATE using only summary-level information from a target sample while accounting for the possible covariate shifts. However, the asymptotic variance of the estimate was shown to depend on individual-level data from the target sample, hindering statistical inference. In this article, we propose a resampling-based perturbation method for confidence interval construction for the estimated target ATE, utilizing additional summary-level information. We demonstrate the effectiveness of our approach through simulation and real data settings when only summary-level information is available.
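The two ingredients can be sketched as follows: calibration weights on the source sample chosen so that weighted covariate means match the target-sample summary means, and a resampling-based perturbation (random exponential weights) to form an interval for the weighted ATE without any target individual-level data. This is a simplified stand-in with entropy-balancing-style weights and synthetic data, not the paper's estimator or its variance theory.

```python
import numpy as np
from scipy.optimize import minimize

def calib_weights(X_src, target_means, base=None):
    """Weights proportional to base_i * exp(lambda' x_i), with weighted covariate means
    equal to the target-sample means (entropy-balancing-style calibration)."""
    base = np.ones(len(X_src)) if base is None else base
    Xc = X_src - target_means                      # center covariates at the target means
    dual = lambda lam: np.log(np.sum(base * np.exp(Xc @ lam)))   # convex dual objective
    lam = minimize(dual, np.zeros(X_src.shape[1]), method="BFGS").x
    w = base * np.exp(Xc @ lam)
    return w / w.sum()

rng = np.random.default_rng(9)
n = 1000
X = rng.normal(size=(n, 2))                        # source-trial covariates
A = rng.integers(0, 2, size=n)                     # randomized treatment in the source trial
y = 1.0 + X[:, 0] + A * (0.5 + 0.5 * X[:, 0]) + rng.normal(size=n)
target_means = np.array([0.8, 0.0])                # summary information from the target sample

def weighted_ate(perturb):
    w = calib_weights(X, target_means, base=perturb)
    return (np.sum(w * A * y) / np.sum(w * A)
            - np.sum(w * (1 - A) * y) / np.sum(w * (1 - A)))

est = weighted_ate(np.ones(n))
reps = [weighted_ate(rng.exponential(1.0, size=n)) for _ in range(500)]
lo, hi = np.percentile(reps, [2.5, 97.5])
print(f"generalized ATE ~ {est:.2f}, 95% perturbation CI ({lo:.2f}, {hi:.2f})")
```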

Citations: 0
Sequential Parallel Comparison Design for Assessing Induction, Maintenance, Long-Term, and Other Treatment Effects on a Binary Endpoint.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-01-01 DOI: 10.1002/sim.70382
Hui Quan, Zhixing Xu, Xun Chen

For a chronic disease, besides the treatment induction effect, it is also important to demonstrate the maintenance effect of long-term treatment use. To fulfill these and other objectives of a clinical study, one of three designs is often applied: the active-treatment lead-in followed by randomized maintenance design, the randomized induction followed by re-randomized withdrawal maintenance design, and the treat-through design (FDA 2022). Separately, a two-stage sequential parallel comparison design (SPCD) is frequently used in therapeutic areas where the placebo effect is large. In this paper, we use an SPCD for a clinical trial with a binary endpoint to assess induction, maintenance, long-term, and other treatment effects. This SPCD can be viewed as a hybrid of the above three designs and has some additional advantages. For example, compared to the re-randomized withdrawal maintenance design, the SPCD does not require re-randomization, which simplifies trial operation, and it also provides controlled data for formal long-term efficacy and safety analyses. To fully utilize all available data from the two stages in an overall treatment effect evaluation, a weighted combination test is considered that incorporates the correlations among the components. Further, a multiple imputation approach is applied to handle data that are missing not at random. Simulations are conducted to evaluate the performance of the methods, and a data example is employed to illustrate their application.
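Once the correlation between the stage-1 and stage-2 components is specified, the weighted combination test can be written down directly, as in the sketch below. The weights and correlation are illustrative placeholders; in practice they would come from the design and the derivations in the paper.

```python
import numpy as np
from scipy.stats import norm

def spcd_combined_test(z1, z2, w1=0.6, w2=0.4, rho=0.2):
    """One-sided p-value for a weighted combination of correlated stage-wise Z statistics."""
    num = w1 * z1 + w2 * z2
    var = w1**2 + w2**2 + 2 * w1 * w2 * rho      # variance of the weighted sum
    z = num / np.sqrt(var)
    return z, norm.sf(z)

# toy results: stage 1 (all randomized patients) and stage 2 (re-treated placebo non-responders)
z_comb, p_value = spcd_combined_test(z1=2.1, z2=1.4)
print(f"combined Z = {z_comb:.2f}, one-sided p = {p_value:.4f}")
```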

Citations: 0
A Saddlepoint Framework for Accurate Inference in Multicenter Clinical Trials With Imbalanced Clusters.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-01-01 DOI: 10.1002/sim.70408
Haidy A Newer

Statistical inference in multicenter clinical trials is often compromised when relying on asymptotic normal approximations, particularly in designs characterized by a small number of centers or severe imbalance in patient enrollment. Such deviations from asymptotic assumptions frequently result in unreliable p-values and a breakdown of error control. To resolve this, we introduce a high-precision saddlepoint approximation framework for aggregate permutation tests within hierarchically structured data. The theoretical core of our approach is the derivation of a multilevel nested cumulant generating function that explicitly models the trial hierarchy, analytically integrating patient-level linear rank statistics with the stochastic aggregation process across centers. A significant innovation of this work is the extension to the bivariate setting to address co-primary endpoints, providing a robust inferential solution for mixed continuous (efficacy) and discrete (safety) outcomes where standard multivariate normality is unattainable. The resulting framework yields simulation-free, highly accurate tail probabilities even in finite-sample regimes. Extensive simulation studies confirm that our method maintains strict Type I error control in scenarios where asymptotic methods exhibit substantial inflation. Furthermore, an application to the multicenter diabetes prevention program trial demonstrates the method's practical utility: it correctly identifies a significant cardiovascular risk factor that standard approximations failed to detect, thereby preventing a critical Type II error and ensuring valid clinical conclusions.
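For readers unfamiliar with saddlepoint machinery, the sketch below implements the classical univariate Lugannani-Rice tail approximation for a sum of independent center-level Bernoulli statistics with unequal probabilities, the kind of imbalance the paper targets. It is a textbook building block under simplifying assumptions, not the multilevel nested cumulant generating function or the bivariate extension developed in the article, and the discrete continuity correction is omitted.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def saddlepoint_tail(x, probs):
    """Approximate P(S >= x) for S = sum of independent Bernoulli(p_i) via Lugannani-Rice."""
    probs = np.asarray(probs, dtype=float)
    K  = lambda s: np.sum(np.log1p(probs * (np.exp(s) - 1.0)))                   # CGF
    K1 = lambda s: np.sum(probs * np.exp(s) / (1.0 - probs + probs * np.exp(s)))
    K2 = lambda s: np.sum(probs * (1.0 - probs) * np.exp(s)
                          / (1.0 - probs + probs * np.exp(s)) ** 2)
    s_hat = brentq(lambda s: K1(s) - x, -50.0, 50.0)   # solve the saddlepoint equation K'(s) = x
    w = np.sign(s_hat) * np.sqrt(2.0 * (s_hat * x - K(s_hat)))
    u = s_hat * np.sqrt(K2(s_hat))
    return norm.sf(w) + norm.pdf(w) * (1.0 / u - 1.0 / w)

# imbalanced centers: a few large/high-rate centers and many small/low-rate ones
probs = np.concatenate([np.full(3, 0.7), np.full(12, 0.2)])
print("P(S >= 7) ~", round(saddlepoint_tail(7, probs), 4))
```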

Citations: 0
Informative Futility Rules Based on Conditional Assurance.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-01-01 DOI: 10.1002/sim.70330
Vladimir Dragalin

For the pharmaceutical industry, the main utility of futility rules is to allow early stopping of a trial when it seems unlikely to achieve its primary efficacy objectives; the motivation is mainly financial and ethical. After a brief overview of available approaches to setting a futility rule, I will illustrate, using a case study, different rules based on conditional power, predictive probability of success, and Bayesian predictive probability of success, and will emphasize the main shortcomings that arise when using these measures, especially in sample size re-estimation designs. As an alternative, I propose the conditional assurance: the probability of achieving success at the final analysis given that the study was not stopped for futility. It depends on the interim sample size, the sample size at the final analysis, and the threshold of the futility rule, but it does not require knowledge of the observed treatment effect estimate at the interim analysis. This makes the conditional assurance very appropriate for building informative futility rules. It balances the probability of stopping for futility (when there is no treatment effect), conditional assurance, and overall power. Decision makers can better understand the levels of risk associated with stopping for futility and make informed decisions about where to spend risk based on what is acceptable to the organization.
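Conditional assurance is straightforward to estimate by simulation at the design stage: draw the true effect from a prior, simulate interim and final test statistics from the canonical group-sequential joint distribution, apply the futility rule, and report the proportion of continuing trials that succeed at the final analysis. The prior, per-arm sample sizes, and futility threshold below are hypothetical placeholders, not the case study discussed in the article.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(21)
n_sim = 200_000
n_interim, n_final = 100, 200                 # patients per arm at the interim / final analysis
sigma = 1.0                                   # known outcome SD (assumption)
futility_z, final_alpha = 0.0, 0.025          # stop if interim Z <= 0; one-sided final test

delta = rng.normal(0.25, 0.10, size=n_sim)    # prior on the true treatment effect
info_i = n_interim / (2 * sigma**2)           # Fisher information at the interim
info_f = n_final / (2 * sigma**2)             # Fisher information at the final analysis

# score statistics with independent increments (canonical group-sequential joint distribution)
s1 = rng.normal(delta * info_i, np.sqrt(info_i))
s2 = s1 + rng.normal(delta * (info_f - info_i), np.sqrt(info_f - info_i))
z1, z2 = s1 / np.sqrt(info_i), s2 / np.sqrt(info_f)

cont = z1 > futility_z                        # trial continues past the interim
success = z2 > norm.isf(final_alpha)          # one-sided success at the final analysis
print("P(stop for futility)      :", round(1 - cont.mean(), 3))
print("assurance (unconditional) :", round((cont & success).mean(), 3))
print("conditional assurance     :", round(success[cont].mean(), 3))
```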

Citations: 0