
Latest Publications in Statistics in Medicine

Missing Value Imputation With Adversarial Random Forests-MissARF.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-02-01 DOI: 10.1002/sim.70379
Pegah Golchian, Jan Kapar, David S Watson, Marvin N Wright

Handling missing values is a common challenge in biostatistical analyses, typically addressed by imputation methods. We propose a novel, fast, and easy-to-use imputation method called missing value imputation with adversarial random forests (MissARF), based on generative machine learning, that provides both single and multiple imputation. MissARF employs adversarial random forest (ARF) for density estimation and data synthesis. To impute a missing value for an observation, we condition on the non-missing values and sample from the estimated conditional distribution generated by ARF. Our experiments demonstrate that MissARF performs comparably to state-of-the-art single and multiple imputation methods in terms of imputation quality, with fast runtimes and no additional cost for multiple imputation.
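The sketch below is not the authors' ARF algorithm (their work sits in the R ecosystem); it only illustrates the core idea the abstract describes: condition on the observed values and *sample* from an estimated conditional distribution rather than plug in a single point prediction. A plain scikit-learn random forest stands in for the density estimator, and sampling a single tree's prediction stands in for a conditional draw; repeated draws yield multiple imputations with no refitting cost. All data and parameters are simulated.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy data: x2 depends on x1; ~20% of x2 is missing completely at random.
n = 500
x1 = rng.normal(size=n)
x2 = 2.0 * x1 + rng.normal(scale=0.5, size=n)
missing = rng.random(n) < 0.2

# Regression forest fitted on complete cases only.
cc = ~missing
forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(x1[cc].reshape(-1, 1), x2[cc])

def draw_imputations(x1_miss, rng):
    """One imputation draw: for each missing row, return the prediction of
    a randomly chosen tree rather than the forest average, i.e., a crude
    sample from an estimated conditional distribution of x2 given x1."""
    picks = rng.integers(len(forest.estimators_), size=len(x1_miss))
    return np.array([forest.estimators_[k].predict([[v]])[0]
                     for k, v in zip(picks, x1_miss)])

# Multiple imputation = repeated draws; no refitting needed between draws.
draws = [draw_imputations(x1[missing], rng) for _ in range(5)]
print(np.corrcoef(draws[0], x2[missing])[0, 1])  # imputations track the truth
```

Because each draw resamples the tree choice, the five completed datasets differ, which is exactly what downstream multiple-imputation pooling needs.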

Citations: 0
Causal Inference With Survey Data: A Robust Framework for Propensity Score Weighting in Probability and Non-Probability Samples.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-02-01 DOI: 10.1002/sim.70420
Wei Liang, Changbao Wu

Confounding bias and selection bias are two major challenges in causal inference with observational data. While numerous methods have been developed to mitigate confounding bias, they often assume that the data are representative of the study population and ignore the potential selection bias introduced during data collection. In this paper, we propose a unified weighting framework-survey-weighted propensity score weighting-to simultaneously address both confounding and selection biases when the observational dataset is a probability survey sample from a finite population, which is itself viewed as a random sample from the target superpopulation. The proposed method yields a doubly robust inferential procedure for a class of population weighted average treatment effects. We further extend our results to non-probability observational data when the sampling mechanism is unknown but auxiliary information of the confounding variables is available from an external probability sample. We focus on practically important scenarios where the confounders are only partially observed in the external data. Our analysis reveals that the key variables in the external data are those related to both treatment effect heterogeneity and the selection mechanism. We also discuss how to combine auxiliary information from multiple reference probability samples. Monte Carlo simulations and an application to a real-world non-probability observational dataset demonstrate the superiority of our proposed methods over standard propensity score weighting approaches.
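The estimator below is not the paper's doubly robust procedure; it is a minimal sketch of the basic ingredient the abstract describes: combining design (survey) weights with inverse propensity score weights, so that one weighting step corrects selection bias and the other corrects confounding. The simulated population, selection model, and effect sizes are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical finite population: confounder x drives treatment and outcome.
n = 20000
x = rng.normal(size=n)
a = rng.binomial(1, 1 / (1 + np.exp(-x)))      # confounded treatment
y = 1.0 * a + 2.0 * x + rng.normal(size=n)     # true ATE = 1.0

# Non-representative probability sample: selection also depends on x.
p_sel = 1 / (1 + np.exp(-(0.5 + 0.8 * x)))
s = rng.binomial(1, p_sel).astype(bool)
w_survey = 1.0 / p_sel[s]                      # design (survey) weights

# Propensity model estimated on the sample, using the survey weights.
X_s = x[s].reshape(-1, 1)
ps = LogisticRegression().fit(X_s, a[s], sample_weight=w_survey)
ps = ps.predict_proba(X_s)[:, 1]

# Combined weight = design weight x inverse propensity weight (Hajek form).
w1 = w_survey * a[s] / ps
w0 = w_survey * (1 - a[s]) / (1 - ps)
ate = np.sum(w1 * y[s]) / np.sum(w1) - np.sum(w0 * y[s]) / np.sum(w0)
print(round(ate, 2))  # should sit near the true effect of 1.0
```

Dropping either weight (design or propensity) in this simulation leaves either the selection bias or the confounding bias uncorrected, which is the motivation for combining them.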

Citations: 0
An Empirical Assessment of the Cost of Dichotomization of the Outcome of Clinical Trials.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-02-01 DOI: 10.1002/sim.70402
Erik W van Zwet, Frank E Harrell, Stephen J Senn

We have studied 21 435 unique randomized controlled trials (RCTs) from the Cochrane Database of Systematic Reviews (CDSR). Of these trials, 7224 (34%) have a continuous (numerical) outcome and 14 211 (66%) have a binary outcome. We find that trials with a binary outcome have larger sample sizes on average, but also larger standard errors and fewer statistically significant results. We conclude that researchers tend to increase the sample size to compensate for the low information content of binary outcomes, but not sufficiently. In many cases, the binary outcome is the result of dichotomization of a continuous outcome, which is sometimes referred to as "responder analysis". In those cases, the loss of information is avoidable. Burdening more participants than necessary is wasteful, costly, and unethical. We provide a method to convert a sample size calculation for the comparison of two proportions into one for the comparison of the means of the underlying continuous outcomes. This demonstrates how much the sample size may be reduced if the outcome were not dichotomized. We also provide a method to calculate the loss of information after a dichotomization. We apply this method to all the trials from the CDSR with a binary outcome, and estimate that on average, only about 60% of the information is retained after dichotomization. We provide R code and a Shiny app at https://vanzwet.shinyapps.io/info_loss/ for these calculations. We hope that quantifying the loss of information will discourage researchers from dichotomizing continuous outcomes. Instead, we recommend they "model continuously but interpret dichotomously". For example, they might present "percentage achieving clinically meaningful improvement" derived from a continuous analysis rather than by dichotomizing raw data.
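The paper's own conversion method is available through the authors' R code and Shiny app. As a back-of-envelope version, the sketch below assumes a latent-normal model in which the binary endpoint arises by thresholding a normal outcome, so the standardized mean difference is recovered from the two proportions via the probit transform. In this hypothetical scenario the dichotomized design needs roughly 50% more subjects per arm.

```python
import math
from scipy.stats import norm

alpha, power = 0.05, 0.80
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)

# Per-arm sample size for comparing two proportions (normal approximation).
p0, p1 = 0.40, 0.60
n_binary = z**2 * (p0 * (1 - p0) + p1 * (1 - p1)) / (p1 - p0) ** 2

# Assumed latent-normal model: the binary endpoint is a thresholded
# N(mu_g, 1) outcome, so the probit transform recovers the standardized
# mean difference between the underlying continuous outcomes.
delta = norm.ppf(p1) - norm.ppf(p0)
n_cont = 2 * z**2 / delta**2  # per-arm size for comparing the two means

print(math.ceil(n_binary), math.ceil(n_cont))  # 95 vs 62 subjects per arm
print(round(n_binary / n_cont, 2))  # dichotomization costs ~54% more subjects
```

The inflation factor here (about 1.54, i.e., roughly 65% of the information retained) is in the same ballpark as the ~60% average retention the abstract reports across the CDSR trials.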

Citations: 0
A Tutorial on Implementing Statistical Methods for Estimating Excess Death With a Case Study and Simulations on Estimating Excess Death in the Post-COVID-19 United States.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-02-01 DOI: 10.1002/sim.70396
Lillian Rountree, Lauren Zimmermann, Lucy Teed, Daniel M Weinberger, Bhramar Mukherjee

Excess death estimation, defined as the difference between the observed and expected death counts, is a popular technique for assessing the overall death toll of a public health crisis. The expected death count is defined as the expected number of deaths in the counterfactual scenario where prevailing conditions continued and the public health crisis did not occur. While excess death is frequently obtained by estimating the expected number of deaths and subtracting it from the observed number, some methods calculate this difference directly, based on historic mortality data and direct predictors of excess deaths. This tutorial provides guidance to researchers on the application of four popular methods for estimating excess death: the World Health Organization's Bayesian model; The Economist's gradient boosting algorithm; Acosta and Irizarry's quasi-Poisson model; and the Institute for Health Metrics and Evaluation's ensemble model. We begin with explanations of the mathematical formulation of each method and then demonstrate how to code each method in R, applying the code for a case study estimating excess death in the United States for the post-pandemic period of 2022-2024. An additional simulation study estimating excess death for three different scenarios and three different extrapolation periods further demonstrates general trends in performance across methods; together, these two studies show how the estimates by these methods and their accuracy vary widely depending on the choice of input covariates, reference period, extrapolation period, and tuning parameters. Caution should be exercised when extrapolating for estimating excess death, particularly in cases where the reference period of pre-event conditions is temporally distant (> 5 years) from the period of interest. 
In place of committing to one method under one setting, we advocate for using multiple excess death methods in tandem, comparing and synthesizing their results and conducting thorough sensitivity analyses as best practice for estimating excess death for a period of interest. We also call for more detailed simulation studies and benchmark datasets to better understand the accuracy and comparative performance of methods estimating excess death.
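None of the tutorial's four methods is reproduced here; the sketch below only illustrates their common core: fit a baseline mortality model (trend plus seasonality) on a pre-crisis reference period, extrapolate the expected deaths into the crisis period, and take excess = observed minus expected. The monthly counts, crisis size, and model form are all simulated and hypothetical.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(2)

# Simulated monthly deaths: slow trend + annual seasonality, then a
# 12-month crisis adding ~300 deaths/month (months 60-71).
months = np.arange(72)
true_rate = np.exp(np.log(8000) + 0.012 * (months / 12)
                   + 0.05 * np.sin(2 * np.pi * months / 12))
deaths = rng.poisson(true_rate)
deaths[60:] += rng.poisson(300, size=12)

# Baseline model: linear trend in years plus sine/cosine seasonality.
X = np.column_stack([months / 12,
                     np.sin(2 * np.pi * months / 12),
                     np.cos(2 * np.pi * months / 12)])
model = PoissonRegressor(alpha=0.0, max_iter=1000).fit(X[:60], deaths[:60])

# Extrapolate expected deaths into the crisis period; excess = obs - exp.
expected = model.predict(X[60:])
excess = deaths[60:].sum() - expected.sum()
print(int(excess))  # roughly 12 * 300 = 3600, up to Poisson noise
```

Even in this well-specified toy setting the estimate carries noticeable noise, which echoes the abstract's caution about extrapolation choices: a longer gap between the reference period and the period of interest amplifies any error in the fitted trend.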

Citations: 0
A Path-Specific Effect Approach to Mediation Analysis With Time-Varying Mediators and Time-to-Event Outcomes Accounting for Competing Risks.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-02-01 DOI: 10.1002/sim.70425
Arce Domingo-Relloso, Yuchen Zhang, Ziqing Wang, Astrid M Suchy-Dicey, Dedra S Buchwald, Ana Navas-Acien, Joel Schwartz, Kiros Berhane, Brent A Coull, Linda Valeri

Not accounting for competing events in survival analysis can lead to biased estimates, as individuals who die from other causes do not have the opportunity to develop the event of interest. Formal definitions and considerations for causal effects in the presence of competing risks have been published, but not for the mediation analysis setting when the exposure is not separable and both the outcome and the mediator are nonterminal events. We propose, for the first time, an approach based on the path-specific effects framework to account for competing risks in longitudinal mediation analysis with time-to-event outcomes. We do so by considering the pathway through the competing event as another mediator, which is nested within our longitudinal mediator of interest. We provide a theoretical formulation and related definitions of the effects of interest based on the mediational g-formula, as well as a detailed description of the algorithm. We also present a simulation study and an application of our algorithm to data from the Strong Heart Study, a prospective cohort of American Indian adults. In this application, we evaluated the mediating role of the blood pressure trajectory (measured in three visits) on the association of arsenic and cadmium with time to cardiovascular disease, accounting for competing risks by death. Identifying the effects through different paths enables us to evaluate the impact of metals on the outcome of interest, as well as through competing risks, more transparently.
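The paper's estimator handles time-varying mediators and competing events, which is beyond a short sketch. As background, the toy example below shows the mediational g-formula the abstract builds on, in the simplest static setting (binary treatment A, one continuous mediator M, outcome Y): path-specific effects are obtained by setting the treatment level separately on the direct path and in the mediator's distribution. All model coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200000

# Hypothetical linear model: A -> M -> Y with a direct A -> Y path.
a1 = 0.8   # effect of A on the mediator M
b1 = 0.5   # direct effect of A on Y
b2 = 1.0   # effect of M on Y

def sim_m(a):
    return a1 * a + rng.normal(size=N)

def sim_y(a, m):
    return b1 * a + b2 * m + rng.normal(size=N)

# Mediational g-formula by Monte Carlo: set A on the direct path while
# drawing M from its distribution under a possibly different level of A.
y_treated_m_treated = sim_y(1, sim_m(1))
y_treated_m_control = sim_y(1, sim_m(0))
y_control_m_control = sim_y(0, sim_m(0))

nde = y_treated_m_control.mean() - y_control_m_control.mean()  # truth: b1 = 0.5
nie = y_treated_m_treated.mean() - y_treated_m_control.mean()  # truth: a1*b2 = 0.8
print(round(nde, 2), round(nie, 2))
```

The paper's contribution is to nest the pathway through the competing event as an additional mediator inside this kind of decomposition, with longitudinal mediators and time-to-event outcomes.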

Citations: 0
Integrating Omics and Pathological Imaging Data for Cancer Prognosis via a Deep Neural Network-Based Cox Model.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-02-01 DOI: 10.1002/sim.70435
Jingmao Li, Shuangge Ma

Modeling prognosis has unique significance in cancer research. For this purpose, omics data have been routinely used. In a series of recent studies, pathological imaging data derived from biopsy have also been shown to be informative. Motivated by the complementary information contained in omics and pathological imaging data, we examine integrating them under a Cox modeling framework. The two types of data have distinct properties: for omics variables, which are more actionable and demand stronger interpretability, we model their effects in a parametric way; whereas for pathological imaging features, which are not actionable and do not have lucid interpretations, we model their effects in a nonparametric way for better flexibility and prediction performance. Specifically, we adopt deep neural networks (DNNs) for nonparametric estimation, considering their advantages over regression models in accommodating nonlinearity and providing better prediction. As both omics and pathological imaging data are high-dimensional and are expected to contain noise, we propose applying penalization for selecting relevant variables and regulating estimation. Different from some existing studies, we pay particular attention to overlapping information contained in the two types of data. Numerical investigations are carefully carried out. In the analysis of TCGA data, sensible selection and superior prediction performance are observed, which demonstrates the practical utility of the proposed analysis.
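The authors' model combines penalized parametric omics effects with a DNN for imaging features; the sketch below shows only the shared building block, the Cox partial likelihood, written as a loss that any risk-score model (including a neural network) can minimize. Data and risk scores are simulated.

```python
import numpy as np

def cox_neg_partial_log_lik(risk, time, event):
    """Breslow negative log partial likelihood. `risk` is the scalar
    log-hazard score produced by any model (here a stand-in for a DNN);
    only its ordering across subjects matters, not its scale."""
    order = np.argsort(time)                # ascending event/censoring times
    risk, event = risk[order], event[order]
    # log sum of exp(risk) over each risk set (a suffix after sorting)
    log_risk_set = np.log(np.cumsum(np.exp(risk[::-1]))[::-1])
    return -np.sum((risk - log_risk_set)[event == 1])

# Toy check: scores aligned with the true hazard beat random scores.
rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=n)
time = rng.exponential(scale=np.exp(-x))    # higher x => shorter survival
event = np.ones(n, dtype=int)               # no censoring in this toy
loss_informative = cox_neg_partial_log_lik(x, time, event)
loss_random = cox_neg_partial_log_lik(rng.normal(size=n), time, event)
print(loss_informative < loss_random)  # True
```

In a DNN-based Cox model this loss is simply backpropagated through the network producing `risk`; the paper's additions (penalization, handling overlap between data types) sit on top of it.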

Citations: 0
A Powerful and Self-Adaptive Weighted Logrank Test.
IF 1.8 CAS Tier 4 (Medicine) Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-02-01 DOI: 10.1002/sim.70390
Zhiguo Li, Xiaofei Wang

In a weighted logrank test, such as the Harrington-Fleming test and the Tarone-Ware test, predetermined weights are used to emphasize early, middle, or late differences in survival distributions to maximize the test's power. The optimal weight function under an alternative, which depends on the true hazard functions of the groups being compared, has been derived. However, that optimal weight function cannot be directly used to construct an optimal test since the resulting test does not properly control the type I error rate. We further show that the power of a weighted logrank test with proper type I error control has an upper bound that cannot be achieved. Based on the theory, we propose a weighted logrank test that self-adaptively determines an "optimal" weight function. The new test is more powerful than existing standard and weighted logrank tests while maintaining proper type I error rates by tuning a parameter. We demonstrate through extensive simulation studies that the proposed test is both powerful and highly robust in a wide range of scenarios. The method is illustrated with data from several clinical trials in lung cancer.
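The paper's self-adaptive weight selection is not reproduced here; the sketch below implements the standard weighted logrank statistic it starts from, with Fleming-Harrington G(rho, gamma) weights (rho = gamma = 0 recovers the ordinary logrank test). The two-group data are simulated with no censoring and no tied event times.

```python
import numpy as np
from scipy.stats import norm

def weighted_logrank(time, event, group, rho=0.0, gamma=0.0):
    """Two-sample weighted logrank test with Fleming-Harrington
    G(rho, gamma) weights w(t) = S(t-)^rho * (1 - S(t-))^gamma,
    assuming no tied event times. rho = gamma = 0 is the plain logrank."""
    order = np.argsort(time)
    event, group = event[order], group[order]
    n = len(event)
    at_risk = n - np.arange(n)                  # pooled risk-set sizes
    at_risk1 = np.cumsum(group[::-1])[::-1]     # group-1 subjects at risk
    # Left-continuous Kaplan-Meier estimate of the pooled survival curve.
    km = np.cumprod(1.0 - event / at_risk)
    s_minus = np.concatenate([[1.0], km[:-1]])
    w = s_minus**rho * (1.0 - s_minus)**gamma
    frac1 = at_risk1 / at_risk
    u = np.sum(w * event * (group - frac1))     # weighted observed - expected
    v = np.sum(w**2 * event * frac1 * (1 - frac1))
    z = u / np.sqrt(v)
    return z, 2 * norm.sf(abs(z))

# Toy example: group 1 survives longer on average (hazard ratio 0.625).
rng = np.random.default_rng(5)
time = np.concatenate([rng.exponential(1.0, 200), rng.exponential(1.6, 200)])
event = np.ones(400, dtype=int)
group = np.concatenate([np.zeros(200, int), np.ones(200, int)])
z, p = weighted_logrank(time, event, group)
print(z < 0, p < 0.01)  # group 1 has fewer events than expected early on
```

Choosing (rho, gamma) up front, e.g., (0, 1) to emphasize late differences, is exactly the predetermined weighting the paper seeks to replace with a data-driven choice under proper type I error control.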

Citations: 0
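As background, the weighted logrank statistic that the Harrington-Fleming and Tarone-Ware tests build on can be sketched in a few lines of NumPy. This is a generic illustration with user-supplied weights over the distinct event times, not the authors' self-adaptive test; the function name and interface are hypothetical.

```python
import numpy as np

def weighted_logrank(time, event, group, weights=None):
    """Two-sample weighted logrank statistic.

    time    : event/censoring times
    event   : 1 = event observed, 0 = right-censored
    group   : 0/1 group membership
    weights : optional weights w_k, one per distinct event time
              (default: all ones, i.e., the standard logrank test)
    Returns Z = sum_k w_k (O_k1 - E_k1) / sqrt(sum_k w_k^2 V_k),
    approximately N(0, 1) under equal hazards.
    """
    time, event, group = (np.asarray(a) for a in (time, event, group))
    event_times = np.unique(time[event == 1])          # sorted, distinct
    if weights is None:
        weights = np.ones(len(event_times))
    num = var = 0.0
    for w, t in zip(weights, event_times):
        at_risk = time >= t                            # risk set just before t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()            # at risk in group 1
        d = ((time == t) & (event == 1)).sum()         # events at t, both groups
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        e1 = d * n1 / n                                # expected events, group 1
        v = d * (n1 / n) * (1 - n1 / n) * (n - d) / max(n - 1, 1)
        num += w * (d1 - e1)
        var += w * w * v
    return num / np.sqrt(var)
```

With all weights equal to 1 this reduces to the standard logrank test; a Fleming-Harrington test would instead plug in weights of the form Ŝ(t−)^ρ (1−Ŝ(t−))^γ computed from the pooled Kaplan-Meier estimate.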
Bayesian Pliable Lasso With Horseshoe Prior for Interaction Effects in GLMs With Missing Responses.
IF 1.8 Tier 4 Medicine Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date : 2026-02-01 DOI: 10.1002/sim.70406
The Tien Mai

Sparse regression problems, where the goal is to identify a small set of relevant predictors, often require modeling not only main effects but also meaningful interactions through other variables. While the pliable lasso has emerged as a powerful frequentist tool for modeling such interactions under strong heredity constraints, it lacks a natural framework for uncertainty quantification and incorporation of prior knowledge. In this paper, we propose a Bayesian pliable lasso that extends this approach by placing sparsity-inducing priors, such as the horseshoe, on both main and interaction effects. The hierarchical prior structure enforces heredity constraints while adaptively shrinking irrelevant coefficients and allowing important effects to persist. We extend this framework to generalized linear models and develop a tailored approach to handle missing responses. To facilitate posterior inference, we develop an efficient Gibbs sampling algorithm based on a reparameterization of the horseshoe prior. Our Bayesian framework yields sparse, interpretable interaction structures, and principled measures of uncertainty. Through simulations and real-data studies, we demonstrate its advantages over existing methods in recovering complex interaction patterns under both complete and incomplete data. Our method is implemented in the package hspliable available on Github: https://github.com/tienmt/hspliable.

Citations: 0
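For context on the horseshoe prior used in this abstract: it has the scale-mixture form β_j | λ_j, τ ~ N(0, τ²λ_j²) with λ_j ~ Half-Cauchy(0, 1). A minimal sketch of prior draws, illustrating the spike-near-zero plus heavy-tail behavior that makes it attractive for sparse regression (this is just the prior itself, not the paper's Gibbs sampler):

```python
import numpy as np

def horseshoe_draws(p, tau=1.0, rng=None):
    """Draw p coefficients from the horseshoe prior via its
    scale-mixture representation:
        lambda_j ~ Half-Cauchy(0, 1)          (local shrinkage scales)
        beta_j | lambda_j ~ Normal(0, tau**2 * lambda_j**2)
    """
    rng = np.random.default_rng(rng)
    lam = np.abs(rng.standard_cauchy(p))      # half-Cauchy local scales
    return rng.normal(0.0, tau * lam)

beta = horseshoe_draws(100_000, tau=1.0, rng=0)
near_zero = np.mean(np.abs(beta) < 0.1)       # large spike near zero...
heavy_tail = np.max(np.abs(beta))             # ...yet very heavy tails
```

The combination of many near-zero draws with occasional very large ones is what lets the prior shrink noise coefficients hard while leaving strong effects nearly unpenalized.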
An Improved Misclassification Simulation Extrapolation (MC-SIMEX) Algorithm.
IF 1.8 Tier 4 Medicine Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date : 2026-02-01 DOI: 10.1002/sim.70418
Varadan Sevilimedu, Lili Yu

Misclassification Simulation-Extrapolation (MC-SIMEX) is an established method to correct for misclassification in binary covariates in a model. It involves the use of a simulation component which simulates pseudo-datasets with added degree of misclassification in the binary covariate and an extrapolation component which models the covariate's regression coefficients obtained at each level of misclassification using a quadratic function. This quadratic function is then used to extrapolate the covariate's regression coefficients to a point of "no error" in the classification of the binary covariate under question. However, extrapolation functions are not usually known accurately beforehand and are therefore only approximated versions. In this article, we propose an innovative method that uses the exact (not approximated) extrapolation function through the use of a derived relationship between the naïve regression coefficient estimates and the true coefficients in generalized linear models. Simulation studies are conducted to study and compare the numerical properties of the resulting estimator to the original MC-SIMEX estimator. Real data analysis using colon cancer data from the MSKCC cancer registry is also provided.

Citations: 0
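The standard MC-SIMEX loop that this paper improves on can be sketched as follows for a linear model with one binary covariate observed with known symmetric misclassification probability. This is the classic simulate-then-quadratically-extrapolate version, not the authors' exact-extrapolation variant; the grid, number of replicates, and data are illustrative assumptions.

```python
import numpy as np

def mc_simex_slope(y, w, pi, lambdas=(0.0, 0.5, 1.0, 1.5, 2.0), B=100, rng=None):
    """MC-SIMEX correction for the slope of a binary covariate w observed
    with known symmetric misclassification probability pi.

    Simulation: at each lambda, extra random flips are added so the total
    misclassification matches the (1 + lambda)-th matrix power, i.e.,
    off-diagonal (1 - (1 - 2*pi)**(1 + lambda)) / 2 for symmetric errors.
    Extrapolation: a quadratic in lambda is fitted to the resulting slope
    estimates and evaluated at lambda = -1, the "no error" point.
    """
    rng = np.random.default_rng(rng)
    y = np.asarray(y, dtype=float)
    w = np.asarray(w)
    lambdas = np.asarray(lambdas, dtype=float)

    def slope(x):
        X = np.column_stack([np.ones(len(x)), x])
        return np.linalg.lstsq(X, y, rcond=None)[0][1]

    estimates = []
    for lam in lambdas:
        total = (1.0 - (1.0 - 2.0 * pi) ** (1.0 + lam)) / 2.0
        q = (total - pi) / (1.0 - 2.0 * pi)   # extra flip probability
        if q <= 0:                            # lambda = 0: the naive fit
            estimates.append(slope(w))
            continue
        sims = [slope(np.where(rng.random(len(w)) < q, 1 - w, w))
                for _ in range(B)]
        estimates.append(np.mean(sims))
    c2, c1, c0 = np.polyfit(lambdas, estimates, deg=2)
    return c2 - c1 + c0                       # quadratic evaluated at -1

# Illustration: true slope 2.0, covariate misclassified 15% of the time.
rng = np.random.default_rng(7)
n = 20_000
x = rng.integers(0, 2, n)                     # true binary covariate
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)
w = np.where(rng.random(n) < 0.15, 1 - x, x)  # observed, misclassified
corrected = mc_simex_slope(y, w, pi=0.15, rng=1)
```

The quadratic is only an approximation to the true (here roughly exponential) attenuation curve, which is exactly the gap the proposed exact-extrapolation method closes.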
Improved Centile Estimation by Transformation And/Or Adaptive Smoothing of the Explanatory Variable.
IF 1.8 Tier 4 Medicine Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date : 2026-02-01 DOI: 10.1002/sim.70414
R A Rigby, D M Stasinopoulos, T J Cole

A popular approach to growth reference centile estimation is the LMS (Lambda-Mu-Sigma) method, which assumes a parametric distribution for response variable Y and fits the location, scale and shape parameters of the distribution of Y as smooth functions of explanatory variable X. This article provides two methods, transformation and adaptive smoothing, for improving the centile estimation when there is high curvature (i.e., rapid change in slope) with respect to X in one or more of the Y distribution parameters. In general, high curvature is reduced (i.e., attenuated or dampened) by smoothing. In the first method, X is transformed to variable T to reduce this high curvature, and the Y distribution parameters are fitted as smooth functions of T. Three different transformations of X are described. In the second method, the Y distribution parameters are adaptively smoothed against X by allowing the smoothing parameter itself to vary continuously with Y. Simulations are used to compare the performance of the two methods. Three examples show how the process can lead to substantially smoother and better fitting centiles.

Citations: 0
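For reference, the LMS centile formula that this method builds on is standard: given the fitted Box-Cox power L(x), median M(x), and coefficient of variation S(x), the centile at normal quantile z is M(1 + L·S·z)^(1/L), with the limit M·exp(S·z) as L → 0. A minimal sketch:

```python
import math

def lms_centile(L, M, S, z):
    """Centile from LMS parameters at a given x:
    L = Box-Cox power, M = median, S = coefficient of variation,
    z = standard-normal quantile of the desired centile."""
    if abs(L) < 1e-12:
        return M * math.exp(S * z)          # limiting case L -> 0
    return M * (1.0 + L * S * z) ** (1.0 / L)
```

Setting z = 0 returns the median M regardless of L, and z = 1.645 gives the 95th centile; the article's contribution is in how L, M and S are smoothed against the (possibly transformed) explanatory variable, not in this formula itself.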