
Pharmaceutical Statistics: Latest Publications

Informing the Borrowing Process for Dose-Finding Trials by Estimating the Similarity Between Population-Specific Dose-Toxicity Curves.
IF 1.4 | CAS Region 4 (Medicine) | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2026-01-01 | DOI: 10.1002/pst.70067
Dario Zocholl, Heiko Götte, Christina Habermehl, Burak Kürsad Günhan

The conduct of dose-finding trials can be especially challenging in small populations, for example, in pediatric settings. Recently, research has shown that Bayesian borrowing from adult trials, combined with appropriately robust prior distributions, enables the conduct of pediatric dose-finding trials with very small sample sizes. However, the appropriate degree of borrowing remains a subjective choice, relying on default methods or expert opinion. This paper proposes an approach to empirically determine the degree of borrowing based on a meta-analysis of the similarity between population-specific dose-toxicity curves of other biologically similar compounds. Although we focus on the pediatric use case, the approach may be applicable to any dose-finding trial that borrows information from another population. Two popular statistical modeling approaches are applied: the ExNex model and a hierarchical model. The estimated degree of similarity is then translated into the statistical model for the dose-finding algorithm using either variance inflation or robust mixture prior distributions. The performance of each combination of modeling approaches is investigated in a simulation study. The results with mixture priors are promising for the application of the proposed methods, especially with many (20) compounds, while variance inflation models require additional fine-tuning and appear less robust. With fewer (3 or 7) compounds, our proposed methods either match robust priors that ignore the data from other compounds or perform slightly better. We further provide a case study analyzing real dose-finding data from 6 compounds with our models, demonstrating applicability in real-world situations. For clinical trial teams, the decision for or against the proposed approach may hinge on the time and cost required to obtain the external data.
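The robust-mixture-prior mechanism the abstract describes can be sketched with a minimal Beta-binomial example (an illustration of the general borrowing idea, not the authors' ExNex implementation): the toxicity probability at a dose gets a two-component mixture prior, an informative component built from the external (adult) data whose weight `w` could be set from the estimated similarity, plus a vague component for robustness. By conjugacy the posterior is again a Beta mixture with data-updated weights.

```python
from math import lgamma, exp

def betaln(a, b):
    """Log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def mixture_posterior(w, informative, vague, x, n):
    """Update a two-component Beta mixture prior with x toxicities in n patients.

    w           -- prior weight on the informative component (e.g., driven by
                   the estimated adult/pediatric similarity); 1 - w is vague.
    informative -- (a, b) of the Beta component encoding borrowed information.
    vague       -- (a, b) of the robustifying Beta component.
    Returns the posterior weight of the informative component and the
    posterior mean toxicity probability.
    """
    (ai, bi), (av, bv) = informative, vague
    # Log marginal likelihood of the data under each component
    # (binomial coefficient cancels in the weight ratio).
    lm_inf = betaln(ai + x, bi + n - x) - betaln(ai, bi)
    lm_vag = betaln(av + x, bv + n - x) - betaln(av, bv)
    wi = w * exp(lm_inf)
    wv = (1 - w) * exp(lm_vag)
    wi, wv = wi / (wi + wv), wv / (wi + wv)
    post_mean = (wi * (ai + x) / (ai + bi + n)
                 + wv * (av + x) / (av + bv + n))
    return wi, post_mean
```

Data consistent with the informative component increase its posterior weight (more borrowing); conflicting data shift weight to the vague component, which is the self-robustifying behavior that makes mixture priors attractive here.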

Citations: 0
Multiple Comparisons With Overdispersed Multinomial Data: Methods, Properties and Application.
IF 1.4 | CAS Region 4 (Medicine) | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2026-01-01 | DOI: 10.1002/pst.70073
Sören Budig, Charlotte Vogel, Frank Schaarschmidt

Overdispersion, a common issue in clustered multinomial data, can lead to biased standard errors and compromised statistical inference if not adequately addressed. This study describes a comprehensive procedure for constructing multiple comparisons of interest and applying multiplicity adjustments in the analysis of clustered, potentially overdispersed multinomial data. We investigate four quasi-likelihood estimators and the Dirichlet-multinomial model to account for overdispersion. Through a simulation study, we evaluate the performance of these methods under various scenarios, focusing on family-wise error rate, statistical power and coverage probability. Our findings indicate that the Afroz quasi-likelihood estimator is recommended when strict error control is required, whereas the Dirichlet-multinomial model is preferable when high statistical power is desired, albeit with a slightly increased tolerance for false positives. Additionally, we address the challenge of zero-count categories within groups, demonstrating that incorporating pseudo-observations can effectively mitigate associated estimation difficulties. Practical applications to real datasets from toxicology and flow cytometry underscore the robustness and practical utility of these methods.
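The overdispersion at issue can be made concrete with a small stdlib-only simulation (a toy illustration, not the paper's estimators): under a Dirichlet-multinomial model, each cluster draws its own category probabilities from a Dirichlet before sampling counts, which inflates the variance of each category count relative to a plain multinomial by the factor (n + a0)/(1 + a0), where a0 is the Dirichlet concentration.

```python
import random

def r_dirichlet(alphas, rng):
    """Dirichlet draw via normalized Gamma variates."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

def r_multinomial(n, p, rng):
    """Multinomial draw by n inverse-CDF categorical samples."""
    counts = [0] * len(p)
    for _ in range(n):
        u, acc = rng.random(), 0.0
        for i, pi in enumerate(p):
            acc += pi
            if u <= acc:
                counts[i] += 1
                break
        else:
            counts[-1] += 1  # guard against floating-point shortfall in acc
    return counts

def first_cat_variance(n, alphas, reps, seed, overdispersed):
    """Empirical variance of the first category count across reps clusters.

    overdispersed=True  -> Dirichlet-multinomial (cluster-specific p)
    overdispersed=False -> plain multinomial with the mean probabilities
    """
    rng = random.Random(seed)
    a0 = sum(alphas)
    base_p = [a / a0 for a in alphas]
    vals = []
    for _ in range(reps):
        p = r_dirichlet(alphas, rng) if overdispersed else base_p
        vals.append(r_multinomial(n, p, rng)[0])
    m = sum(vals) / reps
    return sum((v - m) ** 2 for v in vals) / (reps - 1)
```

With n = 20 and alphas = (1, 1, 1), the theoretical inflation factor is (20 + 3)/(1 + 3) = 5.75, so the Dirichlet-multinomial variance should be several times the multinomial one; ignoring this is what biases the naive standard errors.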

Citations: 0
Addressing Non-Exchangeability in Hybrid Control Studies: A Variable Selection Approach.
IF 1.4 | CAS Region 4 (Medicine) | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2025-11-01 | DOI: 10.1002/pst.70056
Zhiwei Zhang, Jialuo Liu, Peisong Han

There is growing interest in a hybrid control design for treatment evaluation, where a randomized controlled trial is augmented with external control data from a previous trial or a real-world data source. The hybrid control design has the potential to improve efficiency but also carries the risk of introducing bias. The potential bias in a hybrid control study can be mitigated by adjusting for baseline covariates that are related to the control outcome. A key assumption for this approach is that the internal and external control outcomes are exchangeable upon conditioning on a set of measured covariates. Possible violations of the exchangeability assumption can result in bias and thus need to be addressed systematically. This article proposes a variable selection approach to addressing non-exchangeability in hybrid control studies. Under a specified outcome regression model, possible non-exchangeability can be represented as interactions between covariates and an external control indicator, some of which may be null (with zero coefficients). Null interactions support information borrowing, while non-null interactions require adjustment. Identifying non-null interactions for inclusion in the model is a variable selection problem. The adaptive lasso can be used to perform variable selection and model fitting, and the fitted model can be substituted into a g-computation formula. Simulation results demonstrate that, under appropriate conditions, this approach is able to improve efficiency by incorporating external control data in the absence of full exchangeability.
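The selection step can be illustrated, under an idealized orthonormal-design assumption (a textbook special case, not the general fitting procedure of the paper), by the closed-form adaptive-lasso update: each covariate-by-external-control interaction coefficient is soft-thresholded with a penalty inversely weighted by its initial estimate, so weakly supported (near-null) interactions are set exactly to zero, which enables borrowing, while strong interactions survive nearly unshrunk, which forces adjustment.

```python
def adaptive_lasso_orthonormal(beta_init, lam, gamma=1.0):
    """Closed-form adaptive lasso for an orthonormal design (illustrative only).

    beta_init -- initial (e.g., least-squares) coefficient estimates for the
                 covariate-by-external-control interaction terms.
    lam       -- overall penalty level; coefficient j is soft-thresholded at
                 lam / |beta_init_j|**gamma (the adaptive weight).
    Small initial estimates get large thresholds and are zeroed out; large
    estimates get small thresholds and are barely shrunk.
    """
    out = []
    for b in beta_init:
        thr = lam / (abs(b) ** gamma) if b != 0 else float("inf")
        shrunk = max(abs(b) - thr, 0.0)  # soft-thresholding
        out.append(shrunk if b > 0 else -shrunk)
    return out
```

In practice the interactions are not orthonormal and the fit is done with a penalized regression solver; this sketch only shows why the adaptive weighting yields the oracle-like "zero the null interactions, keep the real ones" behavior.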

Citations: 0
Order of Addition in Mixture-Amount Experiments.
IF 1.4 | CAS Region 4 (Medicine) | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2025-11-01 | DOI: 10.1002/pst.70047
Taha Hasan, Touqeer Ahmad

In a mixture experiment, we study the behavior and properties of m mixture components, where the primary focus is on the proportions of the components that make up the mixture rather than the total amount. Mixture-amount experiments are specialized types of mixture experiments where both the proportions of the components in the mixture and the total amount of the mixture are of interest. In this paper, we consider an Order-of-Addition (OofA) mixture-amount experiment in which the response depends on both the mixture amounts of components and their order of addition. Full mixture OofA designs are constructed to maintain orthogonality between the mixture-amount model terms and the effects of the order of addition. However, the number of runs in such full OofA designs increases rapidly with m. We employ the Threshold Accepting (TA) Algorithm to select an n-row subset from the full OofA mixture design that maximizes G-optimality while minimizing the number of experimental runs. Further, the G-efficiency criterion is used to assess how well the design supports precise and unbiased estimation of the model parameters. These designs enable the estimation of mixture-component model parameters and the order-of-addition effects. The Fraction of Design Space (FDS) plot is used to provide a visual assessment of the prediction capabilities of a design across the entire design space.
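The Threshold Accepting mechanics can be sketched on a toy subset-selection objective (the paper's objective is a G-optimality criterion over design rows; here the cost function is a deliberately simple stand-in). TA differs from simulated annealing in that a proposed swap is accepted deterministically whenever the cost increase stays below the current threshold, and the threshold sequence shrinks to zero.

```python
import random

def threshold_accepting(m, k, cost, thresholds, sweeps, seed):
    """Select a k-subset of {0,...,m-1} with low cost(subset) via TA.

    cost       -- function mapping a list of selected indices to a number
                  (in a design context, the indices would address candidate
                  design rows and cost would be, e.g., a G-criterion).
    thresholds -- decreasing acceptance thresholds, ending at 0 for pure
                  hill-descent.
    """
    rng = random.Random(seed)
    current = rng.sample(range(m), k)
    cur_cost = cost(current)
    best, best_cost = list(current), cur_cost
    for t in thresholds:
        for _ in range(sweeps):
            # Propose swapping one selected index for an unselected one.
            pool = [i for i in range(m) if i not in current]
            cand = list(current)
            cand[rng.randrange(k)] = rng.choice(pool)
            c = cost(cand)
            if c - cur_cost < t:  # TA rule: accept any sub-threshold worsening
                current, cur_cost = cand, c
                if c < best_cost:
                    best, best_cost = list(cand), c
    return best, best_cost
```

Allowing small cost increases early lets the search escape local optima that plain greedy swapping would get stuck in, which matters when the criterion surface over n-row subsets is rugged.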

Citations: 0
Great Wall: A Generalized Dose Optimization Design for Drug Combination Trials Maximizing Survival Benefit.
IF 1.4 | CAS Region 4 (Medicine) | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2025-11-01 | DOI: 10.1002/pst.70049
Yan Han, Yingjie Qiu, Yi Zhao, Isabella Wan, Lang Li, Suyu Liu, Yong Zang

Most phase I-II drug-combination trial designs assume that selecting the optimal dose combination based on early outcomes will also lead to maximum long-term survival benefits. However, this assumption is often violated in many clinical studies, generally due to high rates of relapse following the initial response. To address this problem, we propose the Great Wall design, a general dose optimization design for drug-combination trials. The Great Wall design employs a "divide-and-conquer" algorithm to address the issue of partial order of toxicity and uses early outcomes to eliminate dose combinations that are excessively toxic or less efficacious. It utilizes a dose randomization approach to construct a candidate set of promising dose combinations balancing the toxicity and early efficacy outcomes. The patients assigned to the candidate set are followed to collect survival outcomes, and the final optimal dose combination is then selected to maximize the survival benefit. Simulation studies confirm the desirable operating characteristics of the Great Wall design under various clinical settings. R code is also provided to facilitate application. The Great Wall design is modular and practically useful in settings where investigators plan to follow patients long enough to assess survival outcomes.
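The final selection step described above can be caricatured by a simple admissibility-then-maximize rule (a hypothetical sketch of the general idea, not the Great Wall algorithm itself): dose combinations surviving the early-outcome screening form the candidate set, and the winner is the admissible combination with the best estimated survival.

```python
def select_optimal_dose(doses, tox_limit, eff_floor):
    """Pick, from a candidate set, the combination maximizing estimated
    survival among combinations that are acceptably safe and active.

    doses -- list of dicts with (hypothetical) keys 'name', 'p_tox',
             'p_eff', 'surv': estimated toxicity rate, early-response
             rate, and estimated mean survival.
    Returns the winning name, or None if no combination is admissible.
    """
    admissible = [d for d in doses
                  if d["p_tox"] <= tox_limit and d["p_eff"] >= eff_floor]
    if not admissible:
        return None
    return max(admissible, key=lambda d: d["surv"])["name"]
```

The point of separating screening from selection is exactly the abstract's motivation: a combination with the best early response can still lose on survival, so survival is only compared among combinations already cleared on toxicity and early efficacy.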

Citations: 0
A Unified Approach to Covariate Adjustment for Survival Endpoints in Randomized Clinical Trials.
IF 1.4 | CAS Region 4 (Medicine) | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2025-11-01 | DOI: 10.1002/pst.70054
Zhiwei Zhang, Ya Wang, Dong Xi

Covariate adjustment aims to improve the statistical efficiency of randomized trials by incorporating information from baseline covariates. Popular methods for covariate adjustment include analysis of covariance for continuous endpoints and standardized logistic regression for binary endpoints. For survival endpoints, while some covariate adjustment methods have been developed for specific effect measures, they are not commonly used in practice for various reasons, including high demands for theoretical and methodological sophistication as well as computational skills. This article describes an augmentation approach to covariate adjustment for survival endpoints that is relatively easy to understand and widely applicable to different effect measures. This approach involves augmenting a given treatment effect estimator in a way that preserves consistency and asymptotic normality under minimal assumptions (i.e., randomization). It does not attempt to exploit other possible constraints (e.g., independent censoring, proportional hazards) on the observed data distribution. The optimal augmentation term, which minimizes the asymptotic variance of an augmented estimator, can be estimated using various statistical and machine learning methods. Simulation results demonstrate that the augmentation approach can bring substantial gains in statistical efficiency. This approach has been implemented in an R package named sleete, which is described in detail and illustrated with real data.
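The augmentation idea can be demonstrated on the simplest possible estimand, a difference in means (a stdlib-only toy, not the sleete package or a survival effect measure): subtracting a term of the form (A - pi) * h(X), which has mean zero under randomization, preserves consistency for any h, and a well-chosen h (here the true outcome regression, assumed known purely for illustration) cuts the variance.

```python
import random

def simulate_trial(n, rng, b=2.0, theta=1.0, sd=0.5):
    """One 1:1 randomized trial: Y = theta*A + b*X + noise."""
    data = []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        a = 1 if rng.random() < 0.5 else 0
        y = theta * a + b * x + rng.gauss(0.0, sd)
        data.append((a, x, y))
    return data

def estimators(data, pi=0.5, h=lambda x: 2.0 * x):
    """Unadjusted IPW treatment-effect contrast and its augmented version.

    The augmentation term (a - pi) * h(x) has expectation zero under
    randomization, so subtracting it keeps the estimator consistent for any
    h; h close to the regression of Y on X minimizes the asymptotic variance.
    (h here hard-codes the simulation truth b*x as a best-case illustration.)
    """
    n = len(data)
    ipw = sum(a * y / pi - (1 - a) * y / (1 - pi) for a, x, y in data) / n
    aug = ipw - sum((a - pi) / (pi * (1 - pi)) * h(x)
                    for a, x, y in data) / n
    return ipw, aug
```

Repeating the simulation many times shows the augmented estimator centered on the same true effect with a much smaller spread, which is the efficiency gain the abstract reports; in practice h is estimated from the data by statistical or machine learning methods rather than known.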

Citations: 0
Quantile Effect on Duration of Response: A Zero-Inflated Censored Regression Approach.
IF 1.4 | CAS Region 4 (Medicine) | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2025-11-01 | DOI: 10.1002/pst.70053
Nan Sun, Jixian Wang, Ram Tiwari

Duration of response (DOR) has been increasingly used as a useful measure of response to treatments in randomized clinical trials (RCTs). Some estimands for DOR, such as the restricted mean DOR, although simple to use, may be sensitive to outliers and may not correctly measure treatment effects on the quantiles of DOR, such as the proportion of patients with DOR of at least 3 months. Quantile regression for survival data has been well developed. However, it is not directly applicable to DOR data in RCTs, due to the presence of non-responders for whom DOR is not defined. Although they can be treated as having zero DOR in a standard quantile regression, such an approach may not be flexible enough to model this subset of patients. To mitigate this issue, we propose an approach similar to two-part zero-inflated models, for example, for count data, so that the non-responders are modeled as one part of the model, while DOR is modeled using quantile regression. A simulation study is conducted to examine the performance of the proposed approach. For illustration, we apply our approach to a simulated dataset of an acute myeloid leukemia trial, since the true dataset cannot be used due to confidentiality. The asymptotic properties of the proposed approach are also derived.
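The zero-inflated quantile estimand has a simple closed form worth making explicit (a sketch of the mixture structure, ignoring censoring and covariates, which the paper's regression handles): if a fraction 1 - p of patients are non-responders with DOR = 0, then the tau-quantile of the all-comers DOR distribution is 0 whenever tau <= 1 - p, and otherwise equals the (tau - (1 - p)) / p quantile among responders.

```python
def empirical_quantile(values, q):
    """Lower empirical quantile of a list (type-1 style, for illustration)."""
    s = sorted(values)
    idx = min(int(q * len(s)), len(s) - 1)
    return s[idx]

def dor_quantile(tau, p_response, responder_dor):
    """tau-quantile of DOR in the full population, counting non-responders
    as DOR = 0 (the zero-inflated part of the two-part model).

    p_response    -- probability of achieving a response.
    responder_dor -- observed DOR values among responders.
    """
    if tau <= 1.0 - p_response:
        return 0.0  # the quantile falls in the point mass at zero
    adjusted = (tau - (1.0 - p_response)) / p_response
    return empirical_quantile(responder_dor, adjusted)
```

This mapping is why low quantiles of DOR are insensitive to treatment (they sit in the zero mass whenever the response rate is below 1 - tau), while upper quantiles reflect durability among responders, the contrast the proposed two-part approach is designed to capture.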

Citations: 0
Statistical Consideration for Event-Free Survival With Cure Rate in Acute Myeloid Leukemia Studies.
IF 1.4 | CAS Region 4 (Medicine) | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2025-11-01 | DOI: 10.1002/pst.70052
Yuichiro Kaneko, Kentaro Takeda, Shufang Liu, Lu Tian

In many acute myeloid leukemia (AML) studies, event-free survival (EFS) has been accepted as a primary efficacy endpoint. In those studies, the patients who do not achieve complete remission (CR) in the induction period are regarded as induction treatment failure (ITF). The recent FDA guidance on AML (2022) has clearly specified ITF as an event at Day 1 of randomization, considering the variability in the length of individual induction treatment periods among studies. Xu et al. (2021) suggested decomposing the log-rank test statistic into the ITF portion and the non-ITF portion, defined as the patients who achieved CR, and assumed proportional hazards for the non-ITF portion. However, especially in newly diagnosed AML studies, there is some indication of cured patients who achieve CR during the induction period. As a result, non-zero ITF rates and cured patients invalidate the proportional hazards assumption, and therefore the conventional power calculation based on the number of events may be problematic in this setting. Our research follows the same decomposition of the log-rank test statistic as Xu et al. (2021) and suggests a new sample size calculation method accounting for the presence of both ITF and cured patients. The result shows that the analytically calculated power of the log-rank test based on our proposal was very similar to the empirical power based on simulations in various finite sample settings, and was also useful for protecting against over- and underestimation of the required sample size in the presence of a cure fraction.
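The event-accrual consequence of the mixture structure can be sketched with a simple formula (a working illustration assuming an exponential model for the susceptible patients, not the paper's exact derivation): with ITF events counted at Day 1 and a cure fraction that never experiences an event, the expected fraction of patients with an EFS event by follow-up time t is p_itf + (1 - p_itf - p_cure)(1 - e^{-lambda*t}). Power calculations that ignore the cure fraction therefore overstate the events a trial will accrue.

```python
from math import exp, log

def expected_event_fraction(p_itf, p_cure, median_efs, t):
    """Expected fraction of patients with an EFS event by time t.

    p_itf      -- probability of induction treatment failure (event at Day 1,
                  per the FDA guidance convention cited above).
    p_cure     -- cure fraction among CR patients: never has an event.
    median_efs -- median EFS of the remaining (susceptible) patients, who are
                  assumed exponential here purely for this sketch.
    t          -- follow-up time, in the same units as median_efs.
    """
    lam = log(2) / median_efs          # exponential rate from the median
    susceptible = 1.0 - p_itf - p_cure
    return p_itf + susceptible * (1.0 - exp(-lam * t))
```

For example, with 20% ITF, a 30% cure fraction, a 12-month median, and 24 months of follow-up, only 57.5% of patients are expected to have events, versus 80% if the cure fraction were wrongly assumed to be zero; this gap is the over/underestimation of the required sample size that the proposed method is designed to avoid.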

The Wilcoxon-Mann-Whitney Estimand Versus Differences in Medians or Means.
IF 1.4, CAS Tier 4 (Medicine), Q4 PHARMACOLOGY & PHARMACY, Pub Date: 2025-09-01, DOI: 10.1002/pst.70036
Linda J Harrison, Ronald J Bosch

There is renewed interest in defining the target of estimation when designing randomized trials. Motivated by design work in trials of HIV-1 curative interventions, we compare the Wilcoxon-Mann-Whitney (WMW) estimand to a difference in medians or means in a two-arm study. First, we define each estimand along with an appropriate estimator. Then, we highlight relevant asymptotic relative efficiency (ARE) results for the estimators under normal distributions (ARE: WMW/mean = 3/π, median/mean = 2/π, median/WMW = 2/3), as well as under normal mixtures. Measurement of outcomes related to HIV-1 cure involves laboratory assays with lower limits of quantification, giving rise to left-censored data. In our simulation study, we compare the estimators in the presence of left-censored observations and at small sample sizes, illustrating that under a censored normal mixture distribution the WMW approach is unbiased, powerful, and has confidence intervals with nominal coverage. We apply our findings to a randomized trial designed to reduce HIV-1 reservoirs. We further describe several extensions of the WMW approach that allow for assessment of interactions between subgroups in a trial, adjustment for covariates, and general ranking methods for clinical outcomes in other disease areas. We end with a discussion summarizing the merits of a WMW-based intervention effect estimate versus an estimate summarized on the scale on which the intervention was originally measured, such as the difference in medians or means.
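The WMW estimand θ = P(X > Y) + ½·P(X = Y) handles left-censored assay data naturally: flooring every value below the limit of quantification makes censored observations tie with one another at the bottom of the ranking. A minimal sketch (function name and toy reservoir data are illustrative, not from the paper):

```python
import numpy as np

def wmw_theta(x, y, lloq=None):
    """Estimate theta = P(X > Y) + 0.5 * P(X = Y) by pairwise comparison.
    Values below `lloq` are floored to `lloq`, so left-censored
    observations tie at the bottom of the ranking."""
    x = np.asarray(x, dtype=float).copy()
    y = np.asarray(y, dtype=float).copy()
    if lloq is not None:
        x[x < lloq] = lloq
        y[y < lloq] = lloq
    diff = x[:, None] - y[None, :]   # all n_x * n_y pairwise differences
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

# Toy HIV-1 reservoir measurements (copies/mL), LLOQ = 20:
control   = [150.0, 80.0, 40.0, 25.0, 5.0]   # one value below LLOQ
treatment = [30.0, 10.0, 8.0, 3.0, 2.0]      # four values below LLOQ
theta = wmw_theta(treatment, control, lloq=20.0)
print(f"theta-hat = {theta:.2f}")  # theta < 0.5: treatment values tend to be lower
```

Unlike a difference in means, this estimate needs no imputation of the censored values beyond the floor itself, which is one reason the WMW approach stays unbiased with nominal coverage in the censored settings the abstract describes.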

Finding the Optimal Number of Splits and Repetitions in Double Cross-Fitting Targeted Maximum Likelihood Estimators.
IF 1.4, CAS Tier 4 (Medicine), Q4 PHARMACOLOGY & PHARMACY, Pub Date: 2025-09-01, DOI: 10.1002/pst.70022
Mohammad Ehsanul Karim, Momenul Haque Mondol

Flexible machine learning algorithms are increasingly utilized in real-world data analyses. When integrated within doubly robust methods, such as the Targeted Maximum Likelihood Estimator (TMLE), complex estimators can result in significant undercoverage, an issue that is even more pronounced in singly robust methods. The Double Cross-Fitting (DCF) procedure complements these methods by enabling the use of diverse machine learning estimators, yet optimal guidelines for the number of data splits and repetitions remain unclear. This study explores the effects of varying the number of splits and repetitions in DCF on TMLE estimators through statistical simulations and a data analysis. We discuss two generalizations of DCF beyond the conventional three splits and apply a range of splits to fit the TMLE estimator, incorporating a super learner without transforming covariates. The statistical properties of these configurations are compared across two sample sizes (3000 and 5000) and two DCF generalizations (equal splits and full data use). Additionally, we conduct a real-world analysis using data from the National Health and Nutrition Examination Survey (NHANES) 2017-18 cycle to illustrate the practical implications of varying DCF splits, focusing on the association between obesity and the risk of developing diabetes. Our simulation study reveals that five splits in DCF yield satisfactory bias, variance, and coverage across scenarios. In the real-world application, the DCF TMLE method showed consistent risk-difference estimates over a range of splits, though standard errors increased with more splits in one generalization, suggesting potential drawbacks to excessive splitting. This research underscores the importance of judicious selection of the number of splits and repetitions in DCF TMLE methods to balance computational efficiency against accurate statistical inference. Optimal performance appears attainable with three to five splits. Among the generalizations considered, using full data for nuisance estimation offered more consistent variance estimation and is preferable in applied use. Additionally, increasing the repetitions beyond 25 did not enhance performance. These findings provide guidance for researchers employing complex machine learning algorithms in causal studies and argue for cautious split management in DCF procedures.
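The double cross-fitting bookkeeping generalizes naturally from three splits to K: within each repetition, every fold takes one turn as the estimation fold while the remaining folds are divided into two disjoint groups, one per nuisance model. A minimal sketch of the "equal splits" scheme (function and field names are illustrative, not the authors' code):

```python
import numpy as np

def dcf_plan(n, n_splits=5, n_repeats=3, seed=7):
    """Enumerate double cross-fitting role assignments.

    Per repetition: shuffle the n rows into `n_splits` near-equal folds;
    each fold serves once as the estimation fold, and the remaining folds
    are divided into two disjoint groups, one for the propensity model and
    one for the outcome model (the 'equal splits' generalization)."""
    rng = np.random.default_rng(seed)
    plans = []
    for rep in range(n_repeats):
        folds = np.array_split(rng.permutation(n), n_splits)
        for est in range(n_splits):
            rest = [k for k in range(n_splits) if k != est]
            half = len(rest) // 2
            plans.append({
                "repeat": rep,
                "estimate_on": folds[est],
                "fit_propensity_on": np.concatenate([folds[k] for k in rest[:half]]),
                "fit_outcome_on": np.concatenate([folds[k] for k in rest[half:]]),
            })
    return plans

plans = dcf_plan(n=1000, n_splits=5, n_repeats=3)
print(len(plans), "targeting steps per analysis")  # 5 splits x 3 repeats = 15
```

In a full analysis, each entry would drive one TMLE targeting step, with the point estimates and variances then combined across splits and repetitions (for example by medians, as is common with repeated cross-fitting); the cost of more splits is visible here as the growing number of model fits.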
